* [PATCH] arm64: Allow vmalloc regions to be set with set_memory_*
@ 2016-01-12 21:46 ` Laura Abbott
  0 siblings, 0 replies; 37+ messages in thread
From: Laura Abbott @ 2016-01-12 21:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland
  Cc: Laura Abbott, Kees Cook, linux-arm-kernel, linux-kernel


The range of set_memory_* is currently restricted to the module address
range because of difficulties in breaking down larger block sizes.
vmalloc maps PAGE_SIZE pages so it is safe to use as well. Update the
function ranges and add a comment explaining why the range is restricted
the way it is.

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
This should let the eBPF protections work as expected; I don't know if
there is some sort of self test for that.
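
A rough sketch of what such a self test could look like (hypothetical,
not part of this patch; the module name and structure are made up):

#include <linux/module.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/cacheflush.h>	/* arm64 set_memory_ro()/set_memory_rw() */

/* Hypothetical self test sketch, not part of this patch. */
static int __init set_memory_vmalloc_test_init(void)
{
	void *buf = vmalloc(PAGE_SIZE);

	if (!buf)
		return -ENOMEM;

	/* With this patch applied, a vmalloc range should be accepted. */
	WARN_ON(set_memory_ro((unsigned long)buf, 1) != 0);
	WARN_ON(set_memory_rw((unsigned long)buf, 1) != 0);

	/* The linear map should still be rejected. */
	WARN_ON(set_memory_ro(PAGE_OFFSET, 1) != -EINVAL);

	vfree(buf);
	return 0;
}
module_init(set_memory_vmalloc_test_init);
MODULE_LICENSE("GPL");
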
---
 arch/arm64/mm/pageattr.c | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 3571c73..274208e 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -36,6 +36,26 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
 	return 0;
 }
 
+static bool validate_addr(unsigned long start, unsigned long end)
+{
+	/*
+	 * This check explicitly excludes most kernel memory. Most kernel
+	 * memory is mapped with a larger page size and breaking down the
+	 * larger page size without causing TLB conflicts is very difficult.
+	 *
+	 * If you need to call set_memory_* on a range, the recommendation is
+	 * to use vmalloc since that range is mapped with pages.
+	 */
+	if (start >= MODULES_VADDR && start < MODULES_END &&
+	    end >= MODULES_VADDR && end < MODULES_END)
+		return true;
+
+	if (is_vmalloc_addr((void *)start) && is_vmalloc_addr((void *)end))
+		return true;
+
+	return false;
+}
+
 static int change_memory_common(unsigned long addr, int numpages,
 				pgprot_t set_mask, pgprot_t clear_mask)
 {
@@ -51,10 +71,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 		WARN_ON_ONCE(1);
 	}
 
-	if (start < MODULES_VADDR || start >= MODULES_END)
-		return -EINVAL;
-
-	if (end < MODULES_VADDR || end >= MODULES_END)
+	if (!validate_addr(start, end))
 		return -EINVAL;
 
 	data.set_mask = set_mask;
-- 
2.5.0

* [PATCH] arm64: allow vmalloc regions to be set with set_memory_*
@ 2016-01-18 14:01 Ard Biesheuvel
  2016-01-18 15:05 ` Mark Rutland
  2016-01-28 15:08 ` Will Deacon
  0 siblings, 2 replies; 37+ messages in thread
From: Ard Biesheuvel @ 2016-01-18 14:01 UTC (permalink / raw)
  To: linux-arm-kernel

The range of set_memory_* is currently restricted to the module address
range because of difficulties in breaking down larger block sizes.
vmalloc maps PAGE_SIZE pages so it is safe to use as well. Update the
function ranges and add a comment explaining why the range is restricted
the way it is.

Suggested-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/pageattr.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 3571c7309c5e..1360a02d88b7 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -13,6 +13,7 @@
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/vmalloc.h>
 #include <linux/sched.h>
 
 #include <asm/pgtable.h>
@@ -44,6 +45,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long end = start + size;
 	int ret;
 	struct page_change_data data;
+	struct vm_struct *area;
 
 	if (!PAGE_ALIGNED(addr)) {
 		start &= PAGE_MASK;
@@ -51,10 +53,23 @@ static int change_memory_common(unsigned long addr, int numpages,
 		WARN_ON_ONCE(1);
 	}
 
-	if (start < MODULES_VADDR || start >= MODULES_END)
-		return -EINVAL;
-
-	if (end < MODULES_VADDR || end >= MODULES_END)
+	/*
+	 * Kernel VA mappings are always live, and splitting live section
+	 * mappings into page mappings may cause TLB conflicts. This means
+	 * we have to ensure that changing the permission bits of the range
+	 * we are operating on does not result in such splitting.
+	 *
+	 * Let's restrict ourselves to mappings created by vmalloc (or vmap).
+	 * Those are guaranteed to consist entirely of page mappings, and
+	 * splitting is never needed.
+	 *
+	 * So check whether the [addr, addr + size) interval is entirely
+	 * covered by precisely one VM area that has the VM_ALLOC flag set.
+	 */
+	area = find_vm_area((void *)addr);
+	if (!area ||
+	    end > (unsigned long)area->addr + area->size ||
+	    !(area->flags & VM_ALLOC))
 		return -EINVAL;
 
 	data.set_mask = set_mask;
-- 
2.5.0
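
For clarity, the range check added above can also be read as a standalone
helper; a minimal sketch only (the helper name is made up, the patch itself
keeps the check inline in change_memory_common()):

#include <linux/vmalloc.h>

/*
 * True if [start, end) lies entirely within a single area created by
 * vmalloc (VM_ALLOC), i.e. a region known to be mapped with page
 * mappings, so changing permissions never requires splitting a block
 * mapping.
 */
static bool range_in_single_vmalloc_area(unsigned long start, unsigned long end)
{
	struct vm_struct *area = find_vm_area((void *)start);

	if (!area || !(area->flags & VM_ALLOC))
		return false;

	return end <= (unsigned long)area->addr + area->size;
}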


Thread overview:
2016-01-12 21:46 [PATCH] arm64: Allow vmalloc regions to be set with set_memory_* Laura Abbott
2016-01-13  0:01 ` Alexei Starovoitov
2016-01-13  0:31   ` Daniel Borkmann
2016-01-13 14:03 ` Ard Biesheuvel
2016-01-13 16:10   ` Ard Biesheuvel
2016-01-14 23:01     ` Laura Abbott
2016-01-18 11:56     ` Mark Rutland
2016-01-28  1:47       ` Xishi Qiu
2016-01-28 10:51         ` Mark Rutland
2016-01-28 11:47           ` Xishi Qiu
2016-01-28 14:27             ` Mark Rutland
2016-01-29  1:21               ` Xishi Qiu
2016-01-29 11:02                 ` Mark Rutland
2016-01-30  2:48                   ` Xishi Qiu
2016-02-03 13:43                     ` Mark Rutland
2016-01-18 14:01 [PATCH] arm64: allow vmalloc regions to be set with set_memory_* Ard Biesheuvel
2016-01-18 15:05 ` Mark Rutland
2016-01-28 15:08 ` Will Deacon
2016-01-28 16:40   ` Ard Biesheuvel
2016-01-28 16:43   ` Ard Biesheuvel
2016-01-28 18:10     ` Will Deacon
2016-01-28 19:07       ` Ard Biesheuvel
