* [PATCH 0/4] arm64: compat: Add kuser helpers config option
@ 2019-04-01 11:20 ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-01 11:20 UTC (permalink / raw)
  To: linux-arch, linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

Currently, the compat kuser helpers are unconditionally enabled on
arm64.

To be on par with arm32, this patchset makes it possible to disable
them by adding a CONFIG_KUSER_HELPERS option, which is enabled by
default to avoid compatibility issues.

When the config option is disabled:
 - The kuser helper code is not compiled into the kernel.
 - The kuser helper page is not mapped at 0xffff0000 in compat
   processes.
 - Any attempt to use a kuser helper from a compat process results in
   a segfault (see the illustrative sketch below).
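
For illustration only (not part of this series), a compat program that
calls a kuser helper through the fixed ABI address would behave as
follows; the address comes from the kuser helpers ABI
(Documentation/arm/kernel_user_helpers.txt):

	/* Built as a 32-bit (AArch32) binary. */
	#define __kuser_memory_barrier ((void (*)(void))0xffff0fa0)

	int main(void)
	{
		/*
		 * With CONFIG_KUSER_HELPERS=y this executes the helper in
		 * the page at 0xffff0000; with CONFIG_KUSER_HELPERS=n the
		 * page is not mapped and the call faults with SIGSEGV.
		 */
		__kuser_memory_barrier();
		return 0;
	}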

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

Vincenzo Frascino (4):
  arm64: compat: Alloc separate pages for vectors and sigpage
  arm64: compat: Split kuser32
  arm64: compat: Refactor aarch32_alloc_vdso_pages()
  arm64: compat: Add KUSER_HELPERS config option

 arch/arm64/Kconfig                 |  30 ++++++
 arch/arm64/include/asm/elf.h       |   6 +-
 arch/arm64/include/asm/processor.h |   4 +-
 arch/arm64/include/asm/signal32.h  |   2 -
 arch/arm64/kernel/Makefile         |   5 +-
 arch/arm64/kernel/kuser32.S        |  65 +------------
 arch/arm64/kernel/signal32.c       |   5 +-
 arch/arm64/kernel/sigreturn32.S    |  46 +++++++++
 arch/arm64/kernel/vdso.c           | 150 +++++++++++++++++++++++------
 9 files changed, 211 insertions(+), 102 deletions(-)
 create mode 100644 arch/arm64/kernel/sigreturn32.S

-- 
2.21.0

* [PATCH 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
@ 2019-04-01 11:20   ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-01 11:20 UTC (permalink / raw)
  To: linux-arch, linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

In the current implementation, AArch32 installs a special page called
"[vectors]", containing both the sigreturn trampolines and the kuser
helpers, at the fixed address specified by the kuser helpers ABI.

Having the sigreturn trampolines and the kuser helpers in the same
page makes it difficult to stay compatible with arm, because it makes
it impossible to disable the kuser helpers on their own.

Address the problem by creating separate pages for the vectors and the
sigpage, in a similar fashion to what arm does today.

Also change the meaning of mm->context.vdso for AArch32 compat: it now
points to the sigpage rather than to the vectors page, which simplifies
the signal handling implementation (the sigpage address is randomized).
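
As an illustrative restatement (not part of the patch) of the
signal32.c hunk below, the sigreturn address is now derived from the
randomized sigpage base instead of the fixed vectors page:

	static unsigned long compat_sigreturn_addr(unsigned long sigpage_base,
						   int thumb, int rt)
	{
		unsigned int idx = thumb << 1;	/* 0: ARM, 2: Thumb */

		if (rt)
			idx += 3;		/* rt_sigreturn variants */

		/* Each trampoline slot is 4 bytes; bit 0 selects Thumb state. */
		return sigpage_base + (idx << 2) + thumb;
	}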

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/include/asm/elf.h       |   6 +-
 arch/arm64/include/asm/processor.h |   4 +-
 arch/arm64/include/asm/signal32.h  |   2 -
 arch/arm64/kernel/signal32.c       |   5 +-
 arch/arm64/kernel/vdso.c           | 112 ++++++++++++++++++++++-------
 5 files changed, 93 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 6adc1a90e7e6..355d120b78cb 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -214,10 +214,10 @@ typedef compat_elf_greg_t		compat_elf_gregset_t[COMPAT_ELF_NGREG];
 	set_thread_flag(TIF_32BIT);					\
  })
 #define COMPAT_ARCH_DLINFO
-extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
-				      int uses_interp);
+extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
+					  int uses_interp);
 #define compat_arch_setup_additional_pages \
-					aarch32_setup_vectors_page
+					aarch32_setup_additional_pages
 
 #endif /* CONFIG_COMPAT */
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5d9ce62bdebd..07c873fce961 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -78,9 +78,9 @@
 #endif /* CONFIG_ARM64_FORCE_52BIT */
 
 #ifdef CONFIG_COMPAT
-#define AARCH32_VECTORS_BASE	0xffff0000
+#define AARCH32_KUSER_BASE	0xffff0000
 #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
-				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
+				AARCH32_KUSER_BASE : STACK_TOP_MAX)
 #else
 #define STACK_TOP		STACK_TOP_MAX
 #endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 81abea0b7650..58e288aaf0ba 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,8 +20,6 @@
 #ifdef CONFIG_COMPAT
 #include <linux/compat.h>
 
-#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
-
 int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
 		       struct pt_regs *regs);
 int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index cb7800acd19f..3846a1b710b5 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 	compat_ulong_t retcode;
 	compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
 	int thumb;
+	void *sigreturn_base;
 
 	/* Check if the handler is written for ARM or Thumb */
 	thumb = handler & 1;
@@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 	} else {
 		/* Set up sigreturn pointer */
 		unsigned int idx = thumb << 1;
+		sigreturn_base = current->mm->context.vdso;
 
 		if (ka->sa.sa_flags & SA_SIGINFO)
 			idx += 3;
 
-		retcode = AARCH32_VECTORS_BASE +
-			  AARCH32_KERN_SIGRET_CODE_OFFSET +
+		retcode = ptr_to_compat(sigreturn_base) +
 			  (idx << 2) + thumb;
 	}
 
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 2d419006ad43..9556ad2036ef 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -1,5 +1,7 @@
 /*
- * VDSO implementation for AArch64 and vector page setup for AArch32.
+ * VDSO implementation for AArch64 and for AArch32:
+ * AArch64: vDSO implementation contains pages setup and data page update.
+ * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
  *
  * Copyright (C) 2012 ARM Limited
  *
@@ -53,61 +55,117 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
 /*
  * Create and map the vectors page for AArch32 tasks.
  */
-static struct page *vectors_page[1] __ro_after_init;
+/*
+ * aarch32_vdso_pages:
+ * 0 - kuser helpers
+ * 1 - sigreturn code
+ */
+static struct page *aarch32_vdso_pages[2] __ro_after_init;
+static const struct vm_special_mapping aarch32_vdso_spec[2] = {
+	{
+		/* Must be named [vectors] for compatibility with arm. */
+		.name	= "[vectors]",
+		.pages	= &aarch32_vdso_pages[0],
+	},
+	{
+		/* Must be named [sigpage] for compatibility with arm. */
+		.name	= "[sigpage]",
+		.pages	= &aarch32_vdso_pages[1],
+	},
+};
 
-static int __init alloc_vectors_page(void)
+static int __init aarch32_alloc_vdso_pages(void)
 {
 	extern char __kuser_helper_start[], __kuser_helper_end[];
 	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
 
 	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
 	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-	unsigned long vpage;
+	unsigned long vdso_pages[2];
 
-	vpage = get_zeroed_page(GFP_ATOMIC);
+	vdso_pages[0] = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_pages[0])
+		return -ENOMEM;
 
-	if (!vpage)
+	vdso_pages[1] = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_pages[1])
 		return -ENOMEM;
 
 	/* kuser helpers */
-	memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
-		kuser_sz);
+	memcpy((void *)(vdso_pages[0] + 0x1000 - kuser_sz),
+	       __kuser_helper_start,
+	       kuser_sz);
 
 	/* sigreturn code */
-	memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
-               __aarch32_sigret_code_start, sigret_sz);
+	memcpy((void *)vdso_pages[1],
+	       __aarch32_sigret_code_start,
+	       sigret_sz);
 
-	flush_icache_range(vpage, vpage + PAGE_SIZE);
-	vectors_page[0] = virt_to_page(vpage);
+	flush_icache_range(vdso_pages[0], vdso_pages[0] + PAGE_SIZE);
+	flush_icache_range(vdso_pages[1], vdso_pages[1] + PAGE_SIZE);
+
+	aarch32_vdso_pages[0] = virt_to_page(vdso_pages[0]);
+	aarch32_vdso_pages[1] = virt_to_page(vdso_pages[1]);
 
 	return 0;
 }
-arch_initcall(alloc_vectors_page);
+arch_initcall(aarch32_alloc_vdso_pages);
 
-int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
+static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 {
-	struct mm_struct *mm = current->mm;
-	unsigned long addr = AARCH32_VECTORS_BASE;
-	static const struct vm_special_mapping spec = {
-		.name	= "[vectors]",
-		.pages	= vectors_page,
+	void *ret;
+
+	/* The kuser helpers must be mapped at the ABI-defined high address */
+	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
+				       VM_READ | VM_EXEC |
+				       VM_MAYREAD | VM_MAYEXEC,
+				       &aarch32_vdso_spec[0]);
+
+	return PTR_ERR_OR_ZERO(ret);
+}
 
-	};
+static int aarch32_sigreturn_setup(struct mm_struct *mm)
+{
+	unsigned long addr;
 	void *ret;
 
-	if (down_write_killable(&mm->mmap_sem))
-		return -EINTR;
-	current->mm->context.vdso = (void *)addr;
+	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
+	if (IS_ERR_VALUE(addr)) {
+		ret = ERR_PTR(addr);
+		goto out;
+	}
 
-	/* Map vectors page at the high address. */
 	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
-				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
-				       &spec);
+				       VM_READ | VM_EXEC | VM_MAYREAD |
+				       VM_MAYWRITE | VM_MAYEXEC,
+				       &aarch32_vdso_spec[1]);
+	if (IS_ERR(ret))
+		goto out;
 
-	up_write(&mm->mmap_sem);
+	mm->context.vdso = (void *)addr;
 
+out:
 	return PTR_ERR_OR_ZERO(ret);
 }
+
+int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	int ret;
+
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	ret = aarch32_kuser_helpers_setup(mm);
+	if (ret)
+		goto out;
+
+	ret = aarch32_sigreturn_setup(mm);
+
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
 #endif /* CONFIG_COMPAT */
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-- 
2.21.0

* [PATCH 2/4] arm64: compat: Split kuser32
@ 2019-04-01 11:20   ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-01 11:20 UTC (permalink / raw)
  To: linux-arch, linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

To make it possible to disable the kuser helpers in aarch32, we need
to separate the kuser and the sigreturn functionalities.

Split the current kuser32 into kuser32 (for the kuser helpers) and
sigreturn32 (for the sigreturn trampolines).
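
As a worked example (not part of the patch) of how the trampolines in
the moved code are encoded: the ARM instructions are emitted as raw
little-endian bytes, so the syscall number (NN below) lands in the low
byte of each word:

	.byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3
		-> 0xe3a070NN: mov r7, #__NR_compat_sigreturn
	.byte __NR_compat_sigreturn, 0x00, 0x00, 0xef
		-> 0xef0000NN: svc #__NR_compat_sigreturn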

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/kernel/Makefile      |  2 +-
 arch/arm64/kernel/kuser32.S     | 58 ++-------------------------------
 arch/arm64/kernel/sigreturn32.S | 46 ++++++++++++++++++++++++++
 3 files changed, 49 insertions(+), 57 deletions(-)
 create mode 100644 arch/arm64/kernel/sigreturn32.S

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index cd434d0719c1..50f76b88a967 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -28,7 +28,7 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
 obj-$(CONFIG_COMPAT)			+= sys32.o kuser32.o signal32.o 	\
-					   sys_compat.o
+					   sigreturn32.o sys_compat.o
 obj-$(CONFIG_FUNCTION_TRACER)		+= ftrace.o entry-ftrace.o
 obj-$(CONFIG_MODULES)			+= module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)		+= module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index 997e6b27ff6a..f19e2b015097 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -1,24 +1,9 @@
 /*
- * Low-level user helpers placed in the vectors page for AArch32.
+ * AArch32 user helpers.
  * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
  *
  * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- *
- *
- * AArch32 user helpers.
+ * Copyright (C) 2012-2018 ARM Ltd.
  *
  * Each segment is 32-byte aligned and will be moved to the top of the high
  * vector page.  New segments (if ever needed) must be added in front of
@@ -77,42 +62,3 @@ __kuser_helper_version:			// 0xffff0ffc
 	.word	((__kuser_helper_end - __kuser_helper_start) >> 5)
 	.globl	__kuser_helper_end
 __kuser_helper_end:
-
-/*
- * AArch32 sigreturn code
- *
- * For ARM syscalls, the syscall number has to be loaded into r7.
- * We do not support an OABI userspace.
- *
- * For Thumb syscalls, we also pass the syscall number via r7. We therefore
- * need two 16-bit instructions.
- */
-	.globl __aarch32_sigret_code_start
-__aarch32_sigret_code_start:
-
-	/*
-	 * ARM Code
-	 */
-	.byte	__NR_compat_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_sigreturn
-	.byte	__NR_compat_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_sigreturn
-
-	/*
-	 * Thumb code
-	 */
-	.byte	__NR_compat_sigreturn, 0x27			// svc	#__NR_compat_sigreturn
-	.byte	__NR_compat_sigreturn, 0xdf			// mov	r7, #__NR_compat_sigreturn
-
-	/*
-	 * ARM code
-	 */
-	.byte	__NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_rt_sigreturn
-	.byte	__NR_compat_rt_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_rt_sigreturn
-
-	/*
-	 * Thumb code
-	 */
-	.byte	__NR_compat_rt_sigreturn, 0x27			// svc	#__NR_compat_rt_sigreturn
-	.byte	__NR_compat_rt_sigreturn, 0xdf			// mov	r7, #__NR_compat_rt_sigreturn
-
-        .globl __aarch32_sigret_code_end
-__aarch32_sigret_code_end:
diff --git a/arch/arm64/kernel/sigreturn32.S b/arch/arm64/kernel/sigreturn32.S
new file mode 100644
index 000000000000..475d30d471ac
--- /dev/null
+++ b/arch/arm64/kernel/sigreturn32.S
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AArch32 sigreturn code.
+ * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
+ *
+ * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
+ * Copyright (C) 2012-2018 ARM Ltd.
+ *
+ * For ARM syscalls, the syscall number has to be loaded into r7.
+ * We do not support an OABI userspace.
+ *
+ * For Thumb syscalls, we also pass the syscall number via r7. We therefore
+ * need two 16-bit instructions.
+ */
+
+#include <asm/unistd.h>
+
+	.globl __aarch32_sigret_code_start
+__aarch32_sigret_code_start:
+
+	/*
+	 * ARM Code
+	 */
+	.byte	__NR_compat_sigreturn, 0x70, 0xa0, 0xe3		// mov	r7, #__NR_compat_sigreturn
+	.byte	__NR_compat_sigreturn, 0x00, 0x00, 0xef		// svc	#__NR_compat_sigreturn
+
+	/*
+	 * Thumb code
+	 */
+	.byte	__NR_compat_sigreturn, 0x27			// svc	#__NR_compat_sigreturn
+	.byte	__NR_compat_sigreturn, 0xdf			// mov	r7, #__NR_compat_sigreturn
+
+	/*
+	 * ARM code
+	 */
+	.byte	__NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_rt_sigreturn
+	.byte	__NR_compat_rt_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_rt_sigreturn
+
+	/*
+	 * Thumb code
+	 */
+	.byte	__NR_compat_rt_sigreturn, 0x27			// svc	#__NR_compat_rt_sigreturn
+	.byte	__NR_compat_rt_sigreturn, 0xdf			// mov	r7, #__NR_compat_rt_sigreturn
+
+        .globl __aarch32_sigret_code_end
+__aarch32_sigret_code_end:
-- 
2.21.0

* [PATCH 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages()
@ 2019-04-01 11:20   ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-01 11:20 UTC (permalink / raw)
  To: linux-arch, linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

aarch32_alloc_vdso_pages() needs to be refactored to make it easier
to disable the kuser helpers.

Split the function into aarch32_alloc_kuser_vdso_page() and
aarch32_alloc_sigreturn_vdso_page().

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/kernel/vdso.c | 49 ++++++++++++++++++++++++++--------------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 9556ad2036ef..afbbdccbf05b 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -74,40 +74,55 @@ static const struct vm_special_mapping aarch32_vdso_spec[2] = {
 	},
 };
 
-static int __init aarch32_alloc_vdso_pages(void)
+static int aarch32_alloc_kuser_vdso_page(void)
 {
 	extern char __kuser_helper_start[], __kuser_helper_end[];
-	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
-
 	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
-	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-	unsigned long vdso_pages[2];
-
-	vdso_pages[0] = get_zeroed_page(GFP_ATOMIC);
-	if (!vdso_pages[0])
-		return -ENOMEM;
+	unsigned long vdso_page;
 
-	vdso_pages[1] = get_zeroed_page(GFP_ATOMIC);
-	if (!vdso_pages[1])
+	vdso_page = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_page)
 		return -ENOMEM;
 
 	/* kuser helpers */
-	memcpy((void *)(vdso_pages[0] + 0x1000 - kuser_sz),
+	memcpy((void *)(vdso_page + 0x1000 - kuser_sz),
 	       __kuser_helper_start,
 	       kuser_sz);
 
+	aarch32_vdso_pages[0] = virt_to_page(vdso_page);
+
+	flush_dcache_page(aarch32_vdso_pages[0]);
+
+	return 0;
+}
+
+static int aarch32_alloc_sigreturn_vdso_page(void)
+{
+	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
+	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
+	unsigned long vdso_page;
+
+	vdso_page = get_zeroed_page(GFP_ATOMIC);
+	if (!vdso_page)
+		return -ENOMEM;
+
 	/* sigreturn code */
-	memcpy((void *)vdso_pages[1],
+	memcpy((void *)vdso_page,
 	       __aarch32_sigret_code_start,
 	       sigret_sz);
 
-	flush_icache_range(vdso_pages[0], vdso_pages[0] + PAGE_SIZE);
-	flush_icache_range(vdso_pages[1], vdso_pages[1] + PAGE_SIZE);
+	aarch32_vdso_pages[1] = virt_to_page(vdso_page);
 
-	aarch32_vdso_pages[0] = virt_to_page(vdso_pages[0]);
-	aarch32_vdso_pages[1] = virt_to_page(vdso_pages[1]);
+	flush_dcache_page(aarch32_vdso_pages[1]);
 
 	return 0;
+
+}
+
+static int __init aarch32_alloc_vdso_pages(void)
+{
+	return aarch32_alloc_kuser_vdso_page() &
+	       aarch32_alloc_sigreturn_vdso_page();
 }
 arch_initcall(aarch32_alloc_vdso_pages);
 
-- 
2.21.0

* [PATCH 4/4] arm64: compat: Add KUSER_HELPERS config option
@ 2019-04-01 11:20   ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-01 11:20 UTC (permalink / raw)
  To: linux-arch, linux-arm-kernel; +Cc: Mark Rutland, Catalin Marinas, Will Deacon

When kuser helpers are enabled, the kernel maps the corresponding code at
a fixed address (0xffff0000). Making it possible to disable them means
that the kernel can omit this mapping, in which case any access to this
memory area results in a segfault.

Add a KUSER_HELPERS config option; when it is turned off, the kernel
does not create this mapping.

This option should be turned off only if all the applications running on
the platform are built specifically for it and make no use of the kuser
helpers code.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/Kconfig          | 30 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/Makefile  |  3 ++-
 arch/arm64/kernel/kuser32.S |  7 +++----
 arch/arm64/kernel/vdso.c    | 15 +++++++++++++++
 4 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..35c98e91bfeb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1494,6 +1494,36 @@ config COMPAT
 
 	  If you want to execute 32-bit userspace applications, say Y.
 
+config KUSER_HELPERS
+	bool "Enable kuser helpers page for compatibility with 32 bit applications."
+	depends on COMPAT
+	default y
+	help
+	  Warning: disabling this option may break user programs.
+
+	  Provide kuser helpers to compat tasks. The kernel provides
+	  helper code to userspace in read only form at a fixed location
+	  to allow userspace to be independent of the CPU type fitted to
+	  the system. This permits binaries to be run on ARMv4 through
+	  to ARMv8 without modification.
+
+	  See Documentation/arm/kernel_user_helpers.txt for details.
+
+	  However, the fixed address nature of these helpers can be used
+	  by ROP (return orientated programming) authors when creating
+	  exploits.
+
+	  If all of the binaries and libraries which run on your platform
+	  are built specifically for your platform, and make no use of
+	  these helpers, then you can turn this option off to hinder
+	  such exploits. However, in that case, if a binary or library
+	  relying on those helpers is run, it will not function correctly.
+
+	  Note: kuser helpers is disabled by default with 64K pages.
+
+	  Say N here only if you are absolutely certain that you do not
+	  need these helpers; otherwise, the safe option is to say Y.
+
 config SYSVIPC_COMPAT
 	def_bool y
 	depends on COMPAT && SYSVIPC
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 50f76b88a967..c7bd0794855a 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -27,8 +27,9 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
 $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
-obj-$(CONFIG_COMPAT)			+= sys32.o kuser32.o signal32.o 	\
+obj-$(CONFIG_COMPAT)			+= sys32.o signal32.o			\
 					   sigreturn32.o sys_compat.o
+obj-$(CONFIG_KUSER_HELPERS)		+= kuser32.o
 obj-$(CONFIG_FUNCTION_TRACER)		+= ftrace.o entry-ftrace.o
 obj-$(CONFIG_MODULES)			+= module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)		+= module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index f19e2b015097..7d38633bf33f 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -5,10 +5,9 @@
  * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
  * Copyright (C) 2012-2018 ARM Ltd.
  *
- * Each segment is 32-byte aligned and will be moved to the top of the high
- * vector page.  New segments (if ever needed) must be added in front of
- * existing ones.  This mechanism should be used only for things that are
- * really small and justified, and not be abused freely.
+ * The kuser helpers below are mapped at a fixed address by
+ * aarch32_setup_additional_pages() and are provided for compatibility
+ * with 32-bit (AArch32) applications that need them.
  *
  * See Documentation/arm/kernel_user_helpers.txt for formal definitions.
  */
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index afbbdccbf05b..b3f0c4ae28aa 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -74,6 +74,7 @@ static const struct vm_special_mapping aarch32_vdso_spec[2] = {
 	},
 };
 
+#ifdef CONFIG_KUSER_HELPERS
 static int aarch32_alloc_kuser_vdso_page(void)
 {
 	extern char __kuser_helper_start[], __kuser_helper_end[];
@@ -95,6 +96,12 @@ static int aarch32_alloc_kuser_vdso_page(void)
 
 	return 0;
 }
+#else
+static int aarch32_alloc_kuser_vdso_page(void)
+{
+	return 0;
+}
+#endif /* CONFIG_KUSER_HELPERS */
 
 static int aarch32_alloc_sigreturn_vdso_page(void)
 {
@@ -126,6 +133,7 @@ static int __init aarch32_alloc_vdso_pages(void)
 }
 arch_initcall(aarch32_alloc_vdso_pages);
 
+#ifdef CONFIG_KUSER_HELPERS
 static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 {
 	void *ret;
@@ -138,6 +146,13 @@ static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 
 	return PTR_ERR_OR_ZERO(ret);
 }
+#else
+static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
+{
+	/* kuser helpers not enabled */
+	return 0;
+}
+#endif /* CONFIG_KUSER_HELPERS */
 
 static int aarch32_sigreturn_setup(struct mm_struct *mm)
 {
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 45+ messages in thread
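
To see the effect described in the commit message from userspace, a
hypothetical 32-bit (compat) test program could simply read the kuser helper
version word at its ABI-defined address; with CONFIG_KUSER_HELPERS=n the
[vectors] page is not mapped and the load raises SIGSEGV. A sketch only, built
as a 32-bit binary, and not part of the series:

#include <stdio.h>

int main(void)
{
        /* __kuser_helper_version lives at 0xffff0ffc per the kuser ABI */
        volatile unsigned int *version = (volatile unsigned int *)0xffff0ffcUL;

        /* Faults with SIGSEGV when the kernel does not map the page */
        printf("kuser helper version: %u\n", *version);

        return 0;
}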

* Re: [PATCH 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
@ 2019-04-01 14:27     ` Catalin Marinas
  0 siblings, 0 replies; 45+ messages in thread
From: Catalin Marinas @ 2019-04-01 14:27 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On Mon, Apr 01, 2019 at 12:20:22PM +0100, Vincenzo Frascino wrote:
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 2d419006ad43..9556ad2036ef 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -1,5 +1,7 @@
>  /*
> - * VDSO implementation for AArch64 and vector page setup for AArch32.
> + * VDSO implementation for AArch64 and for AArch32:
> + * AArch64: vDSO implementation contains pages setup and data page update.
> + * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
>   *
>   * Copyright (C) 2012 ARM Limited
>   *
> @@ -53,61 +55,117 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
>  /*
>   * Create and map the vectors page for AArch32 tasks.
>   */
> -static struct page *vectors_page[1] __ro_after_init;
> +/*
> + * aarch32_vdso_pages:
> + * 0 - kuser helpers
> + * 1 - sigreturn code
> + */
> +static struct page *aarch32_vdso_pages[2] __ro_after_init;

More of a nitpick, the code may be easier to follow if we had two
separate variables. Does the array buy us anything?

> +static const struct vm_special_mapping aarch32_vdso_spec[2] = {
> +	{
> +		/* Must be named [vectors] for compatibility with arm. */
> +		.name	= "[vectors]",
> +		.pages	= &aarch32_vdso_pages[0],
> +	},
> +	{
> +		/* Must be named [sigpage] for compatibility with arm. */
> +		.name	= "[sigpage]",
> +		.pages	= &aarch32_vdso_pages[1],
> +	},
> +};
[...]
> -int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
>  {
> -	struct mm_struct *mm = current->mm;
> -	unsigned long addr = AARCH32_VECTORS_BASE;
> -	static const struct vm_special_mapping spec = {
> -		.name	= "[vectors]",
> -		.pages	= vectors_page,
> +	void *ret;
> +
> +	/* The kuser helpers must be mapped at the ABI-defined high address */
> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
> +				       VM_READ | VM_EXEC |
> +				       VM_MAYREAD | VM_MAYEXEC,
> +				       &aarch32_vdso_spec[0]);
> +
> +	return PTR_ERR_OR_ZERO(ret);
> +}
>  
> -	};
> +static int aarch32_sigreturn_setup(struct mm_struct *mm)
> +{
> +	unsigned long addr;
>  	void *ret;
>  
> -	if (down_write_killable(&mm->mmap_sem))
> -		return -EINTR;
> -	current->mm->context.vdso = (void *)addr;
> +	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
> +	if (IS_ERR_VALUE(addr)) {
> +		ret = ERR_PTR(addr);
> +		goto out;
> +	}
>  
> -	/* Map vectors page at the high address. */
>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
> -				       &spec);
> +				       VM_READ | VM_EXEC | VM_MAYREAD |
> +				       VM_MAYWRITE | VM_MAYEXEC,
> +				       &aarch32_vdso_spec[1]);

Any reason for setting VM_MAYWRITE here?

-- 
Catalin

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 2/4] arm64: compat: Split kuser32
@ 2019-04-01 14:30     ` Catalin Marinas
  0 siblings, 0 replies; 45+ messages in thread
From: Catalin Marinas @ 2019-04-01 14:30 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On Mon, Apr 01, 2019 at 12:20:23PM +0100, Vincenzo Frascino wrote:
> diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
> index 997e6b27ff6a..f19e2b015097 100644
> --- a/arch/arm64/kernel/kuser32.S
> +++ b/arch/arm64/kernel/kuser32.S
> @@ -1,24 +1,9 @@
>  /*
> - * Low-level user helpers placed in the vectors page for AArch32.
> + * AArch32 user helpers.
>   * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
>   *
>   * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
> - * Copyright (C) 2012 ARM Ltd.
> - *
> - * This program is free software; you can redistribute it and/or modify
> - * it under the terms of the GNU General Public License version 2 as
> - * published by the Free Software Foundation.
> - *
> - * This program is distributed in the hope that it will be useful,
> - * but WITHOUT ANY WARRANTY; without even the implied warranty of
> - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> - * GNU General Public License for more details.
> - *
> - * You should have received a copy of the GNU General Public License
> - * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> - *
> - *
> - * AArch32 user helpers.
> + * Copyright (C) 2012-2018 ARM Ltd.

If you remove the license text, please add the SPDX header.

-- 
Catalin

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages()
@ 2019-04-01 14:43     ` Catalin Marinas
  0 siblings, 0 replies; 45+ messages in thread
From: Catalin Marinas @ 2019-04-01 14:43 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On Mon, Apr 01, 2019 at 12:20:24PM +0100, Vincenzo Frascino wrote:
> +static int __init aarch32_alloc_vdso_pages(void)
> +{
> +	return aarch32_alloc_kuser_vdso_page() &
> +	       aarch32_alloc_sigreturn_vdso_page();
>  }
>  arch_initcall(aarch32_alloc_vdso_pages);

It probably doesn't matter much but I'd rather not bit-and two error
codes. Just return the non-zero one or pick the first (your choice) if
both are wrong.

(normally, if you want a non-zero random value if any of them failed,
you'd use bit-or)

-- 
Catalin

^ permalink raw reply	[flat|nested] 45+ messages in thread
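
One way to implement the suggestion above, returning the first failure instead
of combining the two error codes, would be along these lines (a sketch only;
the version eventually merged may differ):

static int __init aarch32_alloc_vdso_pages(void)
{
        int ret;

        ret = aarch32_alloc_kuser_vdso_page();
        if (ret)
                return ret;

        return aarch32_alloc_sigreturn_vdso_page();
}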

* Re: [PATCH 4/4] arm64: compat: Add KUSER_HELPERS config option
@ 2019-04-01 14:48     ` Catalin Marinas
  0 siblings, 0 replies; 45+ messages in thread
From: Catalin Marinas @ 2019-04-01 14:48 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On Mon, Apr 01, 2019 at 12:20:25PM +0100, Vincenzo Frascino wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7e34b9eba5de..35c98e91bfeb 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1494,6 +1494,36 @@ config COMPAT
>  
>  	  If you want to execute 32-bit userspace applications, say Y.
>  
> +config KUSER_HELPERS
> +	bool "Enable kuser helpers page for compatibility with 32 bit applications."

I'd say only "Enable kuser helpers page for 32-bit applications" (my
first reading of this sounded like it would be enabled for 64-bit apps
to be on par with 32-bit ones).

> +	depends on COMPAT
> +	default y
> +	help
> +	  Warning: disabling this option may break user programs.

"may break 32-bit user programs."

> +
> +	  Provide kuser helpers to compat tasks. The kernel provides
> +	  helper code to userspace in read only form at a fixed location
> +	  to allow userspace to be independent of the CPU type fitted to
> +	  the system. This permits binaries to be run on ARMv4 through
> +	  to ARMv8 without modification.
> +
> +	  See Documentation/arm/kernel_user_helpers.txt for details.
> +
> +	  However, the fixed address nature of these helpers can be used
> +	  by ROP (return orientated programming) authors when creating
> +	  exploits.
> +
> +	  If all of the binaries and libraries which run on your platform
> +	  are built specifically for your platform, and make no use of
> +	  these helpers, then you can turn this option off to hinder
> +	  such exploits. However, in that case, if a binary or library
> +	  relying on those helpers is run, it will not function correctly.
> +
> +	  Note: kuser helpers is disabled by default with 64K pages.

Is it?

> +
> +	  Say N here only if you are absolutely certain that you do not
> +	  need these helpers; otherwise, the safe option is to say Y.
> +

-- 
Catalin

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 2/4] arm64: compat: Split kuser32
@ 2019-04-02  9:47       ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-02  9:47 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On 01/04/2019 15:30, Catalin Marinas wrote:
> On Mon, Apr 01, 2019 at 12:20:23PM +0100, Vincenzo Frascino wrote:
>> diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
>> index 997e6b27ff6a..f19e2b015097 100644
>> --- a/arch/arm64/kernel/kuser32.S
>> +++ b/arch/arm64/kernel/kuser32.S
>> @@ -1,24 +1,9 @@
>>  /*
>> - * Low-level user helpers placed in the vectors page for AArch32.
>> + * AArch32 user helpers.
>>   * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
>>   *
>>   * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
>> - * Copyright (C) 2012 ARM Ltd.
>> - *
>> - * This program is free software; you can redistribute it and/or modify
>> - * it under the terms of the GNU General Public License version 2 as
>> - * published by the Free Software Foundation.
>> - *
>> - * This program is distributed in the hope that it will be useful,
>> - * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> - * GNU General Public License for more details.
>> - *
>> - * You should have received a copy of the GNU General Public License
>> - * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> - *
>> - *
>> - * AArch32 user helpers.
>> + * Copyright (C) 2012-2018 ARM Ltd.
> 
> If you remove the license text, please add the SPDX header.
> 

Oops... I will fix it in v2.

-- 
Regards,
Vincenzo

^ permalink raw reply	[flat|nested] 45+ messages in thread
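
For reference, the kernel's SPDX convention replaces the removed boilerplate
with a single identifier on the first line of the file; for a GPL-2.0 file such
as kuser32.S it would typically look like the line below (the exact identifier
depends on the file's licence):

/* SPDX-License-Identifier: GPL-2.0 */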

* Re: [PATCH 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
@ 2019-04-02 10:01       ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 10:01 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On 01/04/2019 15:27, Catalin Marinas wrote:
> On Mon, Apr 01, 2019 at 12:20:22PM +0100, Vincenzo Frascino wrote:
>> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
>> index 2d419006ad43..9556ad2036ef 100644
>> --- a/arch/arm64/kernel/vdso.c
>> +++ b/arch/arm64/kernel/vdso.c
>> @@ -1,5 +1,7 @@
>>  /*
>> - * VDSO implementation for AArch64 and vector page setup for AArch32.
>> + * VDSO implementation for AArch64 and for AArch32:
>> + * AArch64: vDSO implementation contains pages setup and data page update.
>> + * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
>>   *
>>   * Copyright (C) 2012 ARM Limited
>>   *
>> @@ -53,61 +55,117 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
>>  /*
>>   * Create and map the vectors page for AArch32 tasks.
>>   */
>> -static struct page *vectors_page[1] __ro_after_init;
>> +/*
>> + * aarch32_vdso_pages:
>> + * 0 - kuser helpers
>> + * 1 - sigreturn code
>> + */
>> +static struct page *aarch32_vdso_pages[2] __ro_after_init;
> 
> More of a nitpick, the code may be easier to follow if we had two
> separate variables. Does the array buy us anything?
>

Even though it does not make much difference right now, it simplifies the
implementation of the compat vdso going forward.

But I agree with you that we can always make the code more readable, hence I
will introduce some meaningful defines in v2 (instead of the 0 and 1 indexes).

>> +static const struct vm_special_mapping aarch32_vdso_spec[2] = {
>> +	{
>> +		/* Must be named [vectors] for compatibility with arm. */
>> +		.name	= "[vectors]",
>> +		.pages	= &aarch32_vdso_pages[0],
>> +	},
>> +	{
>> +		/* Must be named [sigpage] for compatibility with arm. */
>> +		.name	= "[sigpage]",
>> +		.pages	= &aarch32_vdso_pages[1],
>> +	},
>> +};
> [...]
>> -int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
>> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
>>  {
>> -	struct mm_struct *mm = current->mm;
>> -	unsigned long addr = AARCH32_VECTORS_BASE;
>> -	static const struct vm_special_mapping spec = {
>> -		.name	= "[vectors]",
>> -		.pages	= vectors_page,
>> +	void *ret;
>> +
>> +	/* The kuser helpers must be mapped at the ABI-defined high address */
>> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
>> +				       VM_READ | VM_EXEC |
>> +				       VM_MAYREAD | VM_MAYEXEC,
>> +				       &aarch32_vdso_spec[0]);
>> +
>> +	return PTR_ERR_OR_ZERO(ret);
>> +}
>>  
>> -	};
>> +static int aarch32_sigreturn_setup(struct mm_struct *mm)
>> +{
>> +	unsigned long addr;
>>  	void *ret;
>>  
>> -	if (down_write_killable(&mm->mmap_sem))
>> -		return -EINTR;
>> -	current->mm->context.vdso = (void *)addr;
>> +	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
>> +	if (IS_ERR_VALUE(addr)) {
>> +		ret = ERR_PTR(addr);
>> +		goto out;
>> +	}
>>  
>> -	/* Map vectors page at the high address. */
>>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
>> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
>> -				       &spec);
>> +				       VM_READ | VM_EXEC | VM_MAYREAD |
>> +				       VM_MAYWRITE | VM_MAYEXEC,
>> +				       &aarch32_vdso_spec[1]);
> 
> Any reason for setting VM_MAYWRITE here?
> 

VM_MAYWRITE is required to allow gdb to copy-on-write the page when setting
breakpoints.

-- 
Regards,
Vincenzo

^ permalink raw reply	[flat|nested] 45+ messages in thread
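
A possible shape for the "meaningful defines" mentioned above, using enum
indices for the two pages (the names here are hypothetical; the v2 patch may
use different ones):

enum aarch32_map {
        AA32_MAP_VECTORS,       /* kuser helpers, mapped as [vectors]  */
        AA32_MAP_SIGPAGE,       /* sigreturn trampoline, mapped as [sigpage] */
};

static struct page *aarch32_vdso_pages[2] __ro_after_init;

static const struct vm_special_mapping aarch32_vdso_spec[2] = {
        [AA32_MAP_VECTORS] = {
                .name   = "[vectors]",
                .pages  = &aarch32_vdso_pages[AA32_MAP_VECTORS],
        },
        [AA32_MAP_SIGPAGE] = {
                .name   = "[sigpage]",
                .pages  = &aarch32_vdso_pages[AA32_MAP_SIGPAGE],
        },
};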

* Re: [PATCH 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages()
@ 2019-04-02 10:06       ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 10:06 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel


On 01/04/2019 15:43, Catalin Marinas wrote:
> On Mon, Apr 01, 2019 at 12:20:24PM +0100, Vincenzo Frascino wrote:
>> +static int __init aarch32_alloc_vdso_pages(void)
>> +{
>> +	return aarch32_alloc_kuser_vdso_page() &
>> +	       aarch32_alloc_sigreturn_vdso_page();
>>  }
>>  arch_initcall(aarch32_alloc_vdso_pages);
> 
> It probably doesn't matter much but I'd rather not bit-and two error
> codes. Just return the non-zero one or pick the first (your choice) if
> both are wrong.
> 
> (normally, if you want a non-zero random value if any of them failed,
> you'd use bit-or)
> 

Actually this is good advice; I will change the code accordingly in v2.
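
One possible shape for v2, following this suggestion (not necessarily the
final code):

	static int __init aarch32_alloc_vdso_pages(void)
	{
		int ret;

		ret = aarch32_alloc_kuser_vdso_page();
		if (ret)
			return ret;

		return aarch32_alloc_sigreturn_vdso_page();
	}
	arch_initcall(aarch32_alloc_vdso_pages);

This returns the first non-zero error code instead of combining the two.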

-- 
Regards,
Vincenzo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
@ 2019-04-02 10:06         ` Catalin Marinas
  0 siblings, 0 replies; 45+ messages in thread
From: Catalin Marinas @ 2019-04-02 10:06 UTC (permalink / raw)
  To: Vincenzo Frascino; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On Tue, Apr 02, 2019 at 11:01:04AM +0100, Vincenzo Frascino wrote:
> On 01/04/2019 15:27, Catalin Marinas wrote:
> > On Mon, Apr 01, 2019 at 12:20:22PM +0100, Vincenzo Frascino wrote:
> >> +static int aarch32_sigreturn_setup(struct mm_struct *mm)
> >> +{
> >> +	unsigned long addr;
> >>  	void *ret;
> >>  
> >> -	if (down_write_killable(&mm->mmap_sem))
> >> -		return -EINTR;
> >> -	current->mm->context.vdso = (void *)addr;
> >> +	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
> >> +	if (IS_ERR_VALUE(addr)) {
> >> +		ret = ERR_PTR(addr);
> >> +		goto out;
> >> +	}
> >>  
> >> -	/* Map vectors page at the high address. */
> >>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
> >> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
> >> -				       &spec);
> >> +				       VM_READ | VM_EXEC | VM_MAYREAD |
> >> +				       VM_MAYWRITE | VM_MAYEXEC,
> >> +				       &aarch32_vdso_spec[1]);
> > 
> > Any reason for setting VM_MAYWRITE here?
> 
> VM_MAYWRITE is required to allow gdb to Copy-on-Write and set breakpoints.

Thanks. Please add a comment to the code so I don't ask again ;).

-- 
Catalin

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
@ 2019-04-02 10:08           ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 10:08 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On 02/04/2019 11:06, Catalin Marinas wrote:
> On Tue, Apr 02, 2019 at 11:01:04AM +0100, Vincenzo Frascino wrote:
>> On 01/04/2019 15:27, Catalin Marinas wrote:
>>> On Mon, Apr 01, 2019 at 12:20:22PM +0100, Vincenzo Frascino wrote:
>>>> +static int aarch32_sigreturn_setup(struct mm_struct *mm)
>>>> +{
>>>> +	unsigned long addr;
>>>>  	void *ret;
>>>>  
>>>> -	if (down_write_killable(&mm->mmap_sem))
>>>> -		return -EINTR;
>>>> -	current->mm->context.vdso = (void *)addr;
>>>> +	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
>>>> +	if (IS_ERR_VALUE(addr)) {
>>>> +		ret = ERR_PTR(addr);
>>>> +		goto out;
>>>> +	}
>>>>  
>>>> -	/* Map vectors page at the high address. */
>>>>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
>>>> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
>>>> -				       &spec);
>>>> +				       VM_READ | VM_EXEC | VM_MAYREAD |
>>>> +				       VM_MAYWRITE | VM_MAYEXEC,
>>>> +				       &aarch32_vdso_spec[1]);
>>>
>>> Any reason for setting VM_MAYWRITE here?
>>
>> VM_MAYWRITE is required to allow gdb to Copy-on-Write and set breakpoints.
> 
> Thanks. Please add a comment to the code so I don't ask again ;).
> 

No problem, my bad for not adding it in the first place ;).

-- 
Regards,
Vincenzo

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 4/4] arm64: compat: Add KUSER_HELPERS config option
@ 2019-04-02 10:12       ` Vincenzo Frascino
  0 siblings, 0 replies; 45+ messages in thread
From: Vincenzo Frascino @ 2019-04-02 10:12 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arch, Mark Rutland, Will Deacon, linux-arm-kernel

On 01/04/2019 15:48, Catalin Marinas wrote:
> On Mon, Apr 01, 2019 at 12:20:25PM +0100, Vincenzo Frascino wrote:
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 7e34b9eba5de..35c98e91bfeb 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -1494,6 +1494,36 @@ config COMPAT
>>  
>>  	  If you want to execute 32-bit userspace applications, say Y.
>>  
>> +config KUSER_HELPERS
>> +	bool "Enable kuser helpers page for compatibility with 32 bit applications."
> 
> I'd say only "Enable kuser helpers page for 32-bit applications" (my
> first reading of this sounded like it would be enabled for 64-bit apps
> to be on par with 32-bit ones).
> 

Ok, I agree; it can be misleading.

>> +	depends on COMPAT
>> +	default y
>> +	help
>> +	  Warning: disabling this option may break user programs.
> 
> "may break 32-bit user programs."
> 

Ok.

>> +
>> +	  Provide kuser helpers to compat tasks. The kernel provides
>> +	  helper code to userspace in read only form at a fixed location
>> +	  to allow userspace to be independent of the CPU type fitted to
>> +	  the system. This permits binaries to be run on ARMv4 through
>> +	  to ARMv8 without modification.
>> +
>> +	  See Documentation/arm/kernel_user_helpers.txt for details.
>> +
>> +	  However, the fixed address nature of these helpers can be used
>> +	  by ROP (return orientated programming) authors when creating
>> +	  exploits.
>> +
>> +	  If all of the binaries and libraries which run on your platform
>> +	  are built specifically for your platform, and make no use of
>> +	  these helpers, then you can turn this option off to hinder
>> +	  such exploits. However, in that case, if a binary or library
>> +	  relying on those helpers is run, it will not function correctly.
>> +
>> +	  Note: kuser helpers is disabled by default with 64K pages.
> 
> Is it?
> 

Oops... I removed it from all the places except here. Will fix in v2.
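
For reference, with the comments above folded in, the option could end up
looking roughly like this (middle of the help text elided here; final v2
wording may still change):

	config KUSER_HELPERS
		bool "Enable kuser helpers page for 32-bit applications"
		depends on COMPAT
		default y
		help
		  Warning: disabling this option may break 32-bit user programs.

		  Provide kuser helpers to compat tasks. [...]

		  Say N here only if you are absolutely certain that you do not
		  need these helpers; otherwise, the safe option is to say Y.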

>> +
>> +	  Say N here only if you are absolutely certain that you do not
>> +	  need these helpers; otherwise, the safe option is to say Y.
>> +
> 

-- 
Regards,
Vincenzo

^ permalink raw reply	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2019-04-02 10:12 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-01 11:20 [PATCH 0/4] arm64: compat: Add kuser helpers config option Vincenzo Frascino
2019-04-01 11:20 ` Vincenzo Frascino
2019-04-01 11:20 ` Vincenzo Frascino
2019-04-01 11:20 ` [PATCH 1/4] arm64: compat: Alloc separate pages for vectors and sigpage Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 14:27   ` Catalin Marinas
2019-04-01 14:27     ` Catalin Marinas
2019-04-01 14:27     ` Catalin Marinas
2019-04-02 10:01     ` Vincenzo Frascino
2019-04-02 10:01       ` Vincenzo Frascino
2019-04-02 10:01       ` Vincenzo Frascino
2019-04-02 10:06       ` Catalin Marinas
2019-04-02 10:06         ` Catalin Marinas
2019-04-02 10:06         ` Catalin Marinas
2019-04-02 10:08         ` Vincenzo Frascino
2019-04-02 10:08           ` Vincenzo Frascino
2019-04-02 10:08           ` Vincenzo Frascino
2019-04-01 11:20 ` [PATCH 2/4] arm64: compat: Split kuser32 Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 14:30   ` Catalin Marinas
2019-04-01 14:30     ` Catalin Marinas
2019-04-01 14:30     ` Catalin Marinas
2019-04-02  9:47     ` Vincenzo Frascino
2019-04-02  9:47       ` Vincenzo Frascino
2019-04-02  9:47       ` Vincenzo Frascino
2019-04-01 11:20 ` [PATCH 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages() Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 14:43   ` Catalin Marinas
2019-04-01 14:43     ` Catalin Marinas
2019-04-01 14:43     ` Catalin Marinas
2019-04-02 10:06     ` Vincenzo Frascino
2019-04-02 10:06       ` Vincenzo Frascino
2019-04-02 10:06       ` Vincenzo Frascino
2019-04-01 11:20 ` [PATCH 4/4] arm64: compat: Add KUSER_HELPERS config option Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 11:20   ` Vincenzo Frascino
2019-04-01 14:48   ` Catalin Marinas
2019-04-01 14:48     ` Catalin Marinas
2019-04-01 14:48     ` Catalin Marinas
2019-04-02 10:12     ` Vincenzo Frascino
2019-04-02 10:12       ` Vincenzo Frascino
2019-04-02 10:12       ` Vincenzo Frascino
