All of lore.kernel.org
* [PATCH 0/7] arm: support CONFIG_RODATA
@ 2014-08-06 19:32 ` Kees Cook
  0 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Liu hua, Mark Salter, Rabin Vincent, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

This is a series of patches to support CONFIG_RODATA on ARM, so that
the kernel text is RO, and non-text sections default to NX. To support
on-the-fly kernel text patching (via ftrace, kprobes, etc.), fixmap
support has been finalized, based on the several versions of fixmap patches
that have been floating around on the mailing list. This series attempts to
include the least intrusive version, so that others can build on it for
future fixmap work.

The series has been heavily tested, and appears to be working correctly:

With CONFIG_ARM_PTDUMP, expected page table permissions are seen in
/sys/kernel/debug/kernel_page_tables.

Using CONFIG_LKDTM, the kernel now correctly detects bad accesses for
the following lkdtm tests via /sys/kernel/debug/provoke-crash/DIRECT:
        EXEC_DATA
        WRITE_RO
        WRITE_KERN
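
	Each test can be triggered by writing its name to the DIRECT file;
	for example (assuming debugfs is mounted at /sys/kernel/debug):
	        echo WRITE_RO > /sys/kernel/debug/provoke-crash/DIRECT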

ftrace works:
	CONFIG_FTRACE_STARTUP_TEST passes
	Enabling tracing works:
	        echo function > /sys/kernel/debug/tracing/current_tracer

kprobes works:
	CONFIG_ARM_KPROBES_TEST passes

kexec works:
	kexec will load and start a new kernel

Thanks to everyone who has been testing this series and working on its
various pieces!

-Kees


* [PATCH 1/7] arm: use generic fixmap.h
  2014-08-06 19:32 ` Kees Cook
@ 2014-08-06 19:32   ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Mark Salter, Liu hua, Rabin Vincent, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

From: Mark Salter <msalter@redhat.com>

ARM is different from other architectures in that fixmap pages are indexed
with a positive offset from FIXADDR_START.  Other architectures index with
a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
definitions, this patch redefines FIXADDR_TOP to be inclusive of the
usable range.  That is, FIXADDR_TOP is the virtual address of the topmost
fixed page.  The newly defined FIXADDR_END is the first virtual address
past the fixed mappings.
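
For reference, with this change the helpers provided by asm-generic/fixmap.h
resolve along these lines (a sketch of the generic definitions, shown only to
illustrate the top-down indexing; see include/asm-generic/fixmap.h for the
authoritative version):

	#define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))
	#define __virt_to_fix(x)	((FIXADDR_TOP - ((x) & PAGE_MASK)) >> PAGE_SHIFT)

so index 0 maps to the topmost fixed page and higher indices grow downward,
which is why FIXADDR_TOP must now name the last usable page rather than the
end of the region.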

Signed-off-by: Mark Salter <msalter@redhat.com>
Reviewed-by: Doug Anderson <dianders@chromium.org>
[update for "ARM: 8031/2: change fixmap mapping region to support 32 CPUs"]
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/include/asm/fixmap.h | 27 +++++++++------------------
 arch/arm/mm/init.c            |  2 +-
 2 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index 74124b0d0d79..190142d174ee 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -2,27 +2,18 @@
 #define _ASM_FIXMAP_H
 
 #define FIXADDR_START		0xffc00000UL
-#define FIXADDR_TOP		0xffe00000UL
-#define FIXADDR_SIZE		(FIXADDR_TOP - FIXADDR_START)
+#define FIXADDR_END		0xffe00000UL
+#define FIXADDR_TOP		(FIXADDR_END - PAGE_SIZE)
+#define FIXADDR_SIZE		(FIXADDR_END - FIXADDR_START)
 
 #define FIX_KMAP_NR_PTES	(FIXADDR_SIZE >> PAGE_SHIFT)
 
-#define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
-#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)
+enum fixed_addresses {
+	FIX_KMAP_BEGIN,
+	FIX_KMAP_END = FIX_KMAP_NR_PTES - 1,
+	__end_of_fixed_addresses
+};
 
-extern void __this_fixmap_does_not_exist(void);
-
-static inline unsigned long fix_to_virt(const unsigned int idx)
-{
-	if (idx >= FIX_KMAP_NR_PTES)
-		__this_fixmap_does_not_exist();
-	return __fix_to_virt(idx);
-}
-
-static inline unsigned int virt_to_fix(const unsigned long vaddr)
-{
-	BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
-	return __virt_to_fix(vaddr);
-}
+#include <asm-generic/fixmap.h>
 
 #endif
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 659c75d808dc..ad82c05bfc3a 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -570,7 +570,7 @@ void __init mem_init(void)
 			MLK(DTCM_OFFSET, (unsigned long) dtcm_end),
 			MLK(ITCM_OFFSET, (unsigned long) itcm_end),
 #endif
-			MLK(FIXADDR_START, FIXADDR_TOP),
+			MLK(FIXADDR_START, FIXADDR_END),
 			MLM(VMALLOC_START, VMALLOC_END),
 			MLM(PAGE_OFFSET, (unsigned long)high_memory),
 #ifdef CONFIG_HIGHMEM
-- 
1.9.1


* [PATCH 2/7] arm: fixmap: implement __set_fixmap()
  2014-08-06 19:32 ` Kees Cook
@ 2014-08-06 19:32   ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rabin Vincent, Liu hua, Mark Salter, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

This is used from set_fixmap() and clear_fixmap() via
asm-generic/fixmap.h.
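
For reference, asm-generic/fixmap.h wraps this roughly as follows (a sketch,
not the verbatim header):

	#define set_fixmap(idx, phys)	__set_fixmap(idx, phys, FIXMAP_PAGE_NORMAL)
	#define clear_fixmap(idx)	__set_fixmap(idx, 0, FIXMAP_PAGE_CLEAR)

so a zero pgprot is the "tear down" signal, which is why __set_fixmap() below
clears the PTE when pgprot_val(prot) is 0.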

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Rabin Vincent <rabin@rab.in>
---
 arch/arm/include/asm/fixmap.h |  2 ++
 arch/arm/mm/mmu.c             | 16 ++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index 190142d174ee..8ee7cb4f62ca 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -14,6 +14,8 @@ enum fixed_addresses {
 	__end_of_fixed_addresses
 };
 
+void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot);
+
 #include <asm-generic/fixmap.h>
 
 #endif
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 6e3ba8d112a2..b005a3337bc1 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -22,6 +22,7 @@
 #include <asm/cputype.h>
 #include <asm/sections.h>
 #include <asm/cachetype.h>
+#include <asm/fixmap.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
 #include <asm/smp_plat.h>
@@ -392,6 +393,21 @@ SET_MEMORY_FN(rw, pte_set_rw)
 SET_MEMORY_FN(x, pte_set_x)
 SET_MEMORY_FN(nx, pte_set_nx)
 
+void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
+{
+	unsigned long vaddr = __fix_to_virt(idx);
+	pte_t *pte = pte_offset_kernel(pmd_off_k(FIXADDR_START), vaddr);
+
+	BUG_ON(idx >= __end_of_fixed_addresses);
+
+	if (pgprot_val(prot))
+		set_pte_at(NULL, vaddr, pte,
+			pfn_pte(phys >> PAGE_SHIFT, prot));
+	else
+		pte_clear(NULL, vaddr, pte);
+	flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
+}
+
 /*
  * Adjust the PMD section entries according to the CPU in use.
  */
-- 
1.9.1


* [PATCH 3/7] arm: mm: reduce fixmap kmap from 32 to 16 CPUs
  2014-08-06 19:32 ` Kees Cook
@ 2014-08-06 19:32   ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Liu hua, Mark Salter, Rabin Vincent, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

More room is needed in the fixmap range for non-kmap fixmap entries. This
reduces the kmap range from 32 to 16 CPUs. Additionally, the PTE table for
the fixmap region is now set up regardless of CONFIG_HIGHMEM.
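
The arithmetic behind the new constants: the fixmap region is 2MB
(0xffc00000-0xffe00000), and kmap needs 16 PTEs (64kB) per CPU, so 16 CPUs
consume 16 * 64kB = 1MB, leaving the other 1MB of the region free for
additional fixmap entries such as the text-poking slots added later in this
series.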

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/include/asm/fixmap.h | 12 ++++++++++--
 arch/arm/mm/highmem.c         |  2 --
 arch/arm/mm/mm.h              |  3 +++
 arch/arm/mm/mmu.c             |  5 ++++-
 4 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index 8ee7cb4f62ca..3ed08232be55 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -1,16 +1,24 @@
 #ifndef _ASM_FIXMAP_H
 #define _ASM_FIXMAP_H
 
+/*
+ * The fixmap uses 2MB. The KMAP fixmap needs 64k per CPU, so make room for
+ * 16 CPUs (taking 1MB) and leave the rest for additional fixmap areas.
+ */
 #define FIXADDR_START		0xffc00000UL
 #define FIXADDR_END		0xffe00000UL
 #define FIXADDR_TOP		(FIXADDR_END - PAGE_SIZE)
 #define FIXADDR_SIZE		(FIXADDR_END - FIXADDR_START)
 
-#define FIX_KMAP_NR_PTES	(FIXADDR_SIZE >> PAGE_SHIFT)
+/* 16 PTEs per CPU (64k of 4k pages). */
+#define FIX_KMAP_NR_PTES	16
+#define FIX_KMAP_NR_CPUS	16
 
 enum fixed_addresses {
+	/* Support 16 CPUs for kmap as the first region of fixmap entries. */
 	FIX_KMAP_BEGIN,
-	FIX_KMAP_END = FIX_KMAP_NR_PTES - 1,
+	FIX_KMAP_END = (FIX_KMAP_NR_PTES * FIX_KMAP_NR_CPUS) - 1,
+
 	__end_of_fixed_addresses
 };
 
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index 45aeaaca9052..cbbef0b533d6 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -18,8 +18,6 @@
 #include <asm/tlbflush.h>
 #include "mm.h"
 
-pte_t *fixmap_page_table;
-
 static inline void set_fixmap_pte(int idx, pte_t pte)
 {
 	unsigned long vaddr = __fix_to_virt(idx);
diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
index ce727d47275c..c8b5b2d05b55 100644
--- a/arch/arm/mm/mm.h
+++ b/arch/arm/mm/mm.h
@@ -7,6 +7,9 @@
 /* the upper-most page table pointer */
 extern pmd_t *top_pmd;
 
+/* The fixmap PTE. */
+extern pte_t *fixmap_page_table;
+
 /*
  * 0xffff8000 to 0xffffffff is reserved for any ARM architecture
  * specific hacks for copying pages efficiently, while 0xffff4000
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index b005a3337bc1..8dbdadc42d75 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -53,6 +53,9 @@ EXPORT_SYMBOL(empty_zero_page);
  */
 pmd_t *top_pmd;
 
+/* The fixmap PTE. */
+pte_t *fixmap_page_table;
+
 #define CPOLICY_UNCACHED	0
 #define CPOLICY_BUFFERED	1
 #define CPOLICY_WRITETHROUGH	2
@@ -1342,10 +1345,10 @@ static void __init kmap_init(void)
 #ifdef CONFIG_HIGHMEM
 	pkmap_page_table = early_pte_alloc(pmd_off_k(PKMAP_BASE),
 		PKMAP_BASE, _PAGE_KERNEL_TABLE);
+#endif
 
 	fixmap_page_table = early_pte_alloc(pmd_off_k(FIXADDR_START),
 		FIXADDR_START, _PAGE_KERNEL_TABLE);
-#endif
 }
 
 static void __init map_lowmem(void)
-- 
1.9.1


* [PATCH 4/7] arm: use fixmap for text patching when text is RO
  2014-08-06 19:32 ` Kees Cook
@ 2014-08-06 19:32   ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rabin Vincent, Liu hua, Mark Salter, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

From: Rabin Vincent <rabin@rab.in>

Use fixmaps for text patching when the kernel text is read-only,
inspired by x86.  This makes jump labels and kprobes work with the
currently available CONFIG_DEBUG_SET_MODULE_RONX and the upcoming
CONFIG_DEBUG_RODATA options.
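
In rough outline, the flow this patch introduces looks like the following
(a sketch using the names from the diff below, not a verbatim excerpt):

	/* Writes to read-only text go through a temporary RW alias. */
	waddr = patch_map(addr, FIX_TEXT_POKE0, &flags); /* set_fixmap() alias */
	*(u32 *)waddr = __opcode_to_mem_arm(insn);	 /* write via the alias */
	flush_kernel_vmap_range(waddr, size);
	patch_unmap(FIX_TEXT_POKE0, &flags);		 /* clear_fixmap() */
	flush_icache_range((uintptr_t)addr, (uintptr_t)addr + size);

patch_map() simply returns the original address (no remapping) when neither
CONFIG_DEBUG_SET_MODULE_RONX nor CONFIG_DEBUG_RODATA applies to it.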

Signed-off-by: Rabin Vincent <rabin@rab.in>
[update for "ARM: 8031/2: change fixmap mapping region to support 32 CPUs"]
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/include/asm/fixmap.h |  4 +++
 arch/arm/kernel/jump_label.c  |  2 +-
 arch/arm/kernel/patch.c       | 70 ++++++++++++++++++++++++++++++++++++++-----
 arch/arm/kernel/patch.h       | 12 +++++++-
 4 files changed, 79 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index 3ed08232be55..056f2be273a3 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -19,6 +19,10 @@ enum fixed_addresses {
 	FIX_KMAP_BEGIN,
 	FIX_KMAP_END = (FIX_KMAP_NR_PTES * FIX_KMAP_NR_CPUS) - 1,
 
+	/* Support writing RO kernel text via kprobes, jump labels, etc. */
+	FIX_TEXT_POKE0,
+	FIX_TEXT_POKE1,
+
 	__end_of_fixed_addresses
 };
 
diff --git a/arch/arm/kernel/jump_label.c b/arch/arm/kernel/jump_label.c
index 4ce4f789446d..afeeb9ea6f43 100644
--- a/arch/arm/kernel/jump_label.c
+++ b/arch/arm/kernel/jump_label.c
@@ -19,7 +19,7 @@ static void __arch_jump_label_transform(struct jump_entry *entry,
 		insn = arm_gen_nop();
 
 	if (is_static)
-		__patch_text(addr, insn);
+		__patch_text_early(addr, insn);
 	else
 		patch_text(addr, insn);
 }
diff --git a/arch/arm/kernel/patch.c b/arch/arm/kernel/patch.c
index 07314af47733..03dd4e39c833 100644
--- a/arch/arm/kernel/patch.c
+++ b/arch/arm/kernel/patch.c
@@ -1,8 +1,11 @@
 #include <linux/kernel.h>
+#include <linux/spinlock.h>
 #include <linux/kprobes.h>
+#include <linux/mm.h>
 #include <linux/stop_machine.h>
 
 #include <asm/cacheflush.h>
+#include <asm/fixmap.h>
 #include <asm/smp_plat.h>
 #include <asm/opcodes.h>
 
@@ -13,21 +16,69 @@ struct patch {
 	unsigned int insn;
 };
 
-void __kprobes __patch_text(void *addr, unsigned int insn)
+static DEFINE_SPINLOCK(patch_lock);
+
+static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
+{
+	unsigned int uintaddr = (uintptr_t) addr;
+	bool module = !core_kernel_text(uintaddr);
+	struct page *page;
+
+	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
+		page = vmalloc_to_page(addr);
+	else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
+		page = virt_to_page(addr);
+	else
+		return addr;
+
+	if (flags)
+		spin_lock_irqsave(&patch_lock, *flags);
+
+	set_fixmap(fixmap, page_to_phys(page));
+
+	return (void *) (__fix_to_virt(fixmap) + (uintaddr & ~PAGE_MASK));
+}
+
+static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
+{
+	clear_fixmap(fixmap);
+
+	if (flags)
+		spin_unlock_irqrestore(&patch_lock, *flags);
+}
+
+void __kprobes __patch_text_real(void *addr, unsigned int insn, bool remap)
 {
 	bool thumb2 = IS_ENABLED(CONFIG_THUMB2_KERNEL);
+	unsigned int uintaddr = (uintptr_t) addr;
+	bool twopage = false;
+	unsigned long flags;
+	void *waddr = addr;
 	int size;
 
+	if (remap)
+		waddr = patch_map(addr, FIX_TEXT_POKE0, &flags);
+
 	if (thumb2 && __opcode_is_thumb16(insn)) {
-		*(u16 *)addr = __opcode_to_mem_thumb16(insn);
+		*(u16 *)waddr = __opcode_to_mem_thumb16(insn);
 		size = sizeof(u16);
-	} else if (thumb2 && ((uintptr_t)addr & 2)) {
+	} else if (thumb2 && (uintaddr & 2)) {
 		u16 first = __opcode_thumb32_first(insn);
 		u16 second = __opcode_thumb32_second(insn);
-		u16 *addrh = addr;
+		u16 *addrh0 = waddr;
+		u16 *addrh1 = waddr + 2;
 
-		addrh[0] = __opcode_to_mem_thumb16(first);
-		addrh[1] = __opcode_to_mem_thumb16(second);
+		twopage = (uintaddr & ~PAGE_MASK) == PAGE_SIZE - 2;
+		if (twopage && remap)
+			addrh1 = patch_map(addr + 2, FIX_TEXT_POKE1, NULL);
+
+		*addrh0 = __opcode_to_mem_thumb16(first);
+		*addrh1 = __opcode_to_mem_thumb16(second);
+
+		if (twopage && addrh1 != addr + 2) {
+			flush_kernel_vmap_range(addrh1, 2);
+			patch_unmap(FIX_TEXT_POKE1, NULL);
+		}
 
 		size = sizeof(u32);
 	} else {
@@ -36,10 +87,15 @@ void __kprobes __patch_text(void *addr, unsigned int insn)
 		else
 			insn = __opcode_to_mem_arm(insn);
 
-		*(u32 *)addr = insn;
+		*(u32 *)waddr = insn;
 		size = sizeof(u32);
 	}
 
+	if (waddr != addr) {
+		flush_kernel_vmap_range(waddr, twopage ? size / 2 : size);
+		patch_unmap(FIX_TEXT_POKE0, &flags);
+	}
+
 	flush_icache_range((uintptr_t)(addr),
 			   (uintptr_t)(addr) + size);
 }
diff --git a/arch/arm/kernel/patch.h b/arch/arm/kernel/patch.h
index b4731f2dac38..77e054c2f6cd 100644
--- a/arch/arm/kernel/patch.h
+++ b/arch/arm/kernel/patch.h
@@ -2,6 +2,16 @@
 #define _ARM_KERNEL_PATCH_H
 
 void patch_text(void *addr, unsigned int insn);
-void __patch_text(void *addr, unsigned int insn);
+void __patch_text_real(void *addr, unsigned int insn, bool remap);
+
+static inline void __patch_text(void *addr, unsigned int insn)
+{
+	__patch_text_real(addr, insn, true);
+}
+
+static inline void __patch_text_early(void *addr, unsigned int insn)
+{
+	__patch_text_real(addr, insn, false);
+}
 
 #endif
-- 
1.9.1


* [PATCH 5/7] ARM: kexec: Make .text R/W in machine_kexec
  2014-08-06 19:32 ` Kees Cook
@ 2014-08-06 19:32   ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Nikolay Borisov, Liu hua, Mark Salter, Rabin Vincent,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

From: Nikolay Borisov <Nikolay.Borisov@arm.com>

With the introduction of Kees Cook's patch to make the kernel .text
read-only, the existing kexec mechanism breaks, since it directly pokes
values into the relocation template code, which lives in the .text section.

This patch changes the way those values are inserted so that the .text
section is only poked from machine_kexec() (i.e. when we are about to nuke
the old kernel and are past the point of no return). This allows
set_kernel_text_rw() to be used to patch the values directly in .text.

I had already sent a patch which achieved this, but it was significantly
more complicated, so this is a cleaner, more straightforward approach.
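
A note on the assignment in machine_kexec() below: it uses the GNU "x ?: y"
extension, which evaluates to x when x is non-zero and to y otherwise.
Spelled out without the shorthand (an equivalent sketch, not code from the
diff):

	if (dt_mem)
		kexec_boot_atags = dt_mem;
	else
		kexec_boot_atags = image->start - KEXEC_ARM_ZIMAGE_OFFSET
				   + KEXEC_ARM_ATAGS_OFFSET;

i.e. use the device-tree segment address recorded at prepare time if one was
found, and fall back to the legacy ATAGS location otherwise.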

Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
[collapsed kexec_boot_atags (will.deacon)]
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/kernel/machine_kexec.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
index 8cf0996aa1a8..fe6f3860ea57 100644
--- a/arch/arm/kernel/machine_kexec.c
+++ b/arch/arm/kernel/machine_kexec.c
@@ -29,6 +29,7 @@ extern unsigned long kexec_boot_atags;
 
 static atomic_t waiting_for_crash_ipi;
 
+static unsigned long dt_mem;
 /*
  * Provide a dummy crash_notes definition while crash dump arrives to arm.
  * This prevents breakage of crash_notes attribute in kernel/ksysfs.c.
@@ -64,7 +65,7 @@ int machine_kexec_prepare(struct kimage *image)
 			return err;
 
 		if (be32_to_cpu(header) == OF_DT_HEADER)
-			kexec_boot_atags = current_segment->mem;
+			dt_mem = current_segment->mem;
 	}
 	return 0;
 }
@@ -163,12 +164,13 @@ void machine_kexec(struct kimage *image)
 	reboot_code_buffer = page_address(image->control_code_page);
 
 	/* Prepare parameters for reboot_code_buffer*/
+	set_kernel_text_rw();
 	kexec_start_address = image->start;
 	kexec_indirection_page = page_list;
 	kexec_mach_type = machine_arch_type;
-	if (!kexec_boot_atags)
-		kexec_boot_atags = image->start - KEXEC_ARM_ZIMAGE_OFFSET + KEXEC_ARM_ATAGS_OFFSET;
-
+	kexec_boot_atags = dt_mem ?: image->start
+				     - KEXEC_ARM_ZIMAGE_OFFSET
+				     + KEXEC_ARM_ATAGS_OFFSET;
 
 	/* copy our kernel relocation code to the control code page */
 	reboot_entry = fncpy(reboot_code_buffer,
-- 
1.9.1


* [PATCH 6/7] ARM: mm: allow non-text sections to be non-executable
  2014-08-06 19:32 ` Kees Cook
@ 2014-08-06 19:32   ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Liu hua, Mark Salter, Rabin Vincent, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions
into section-sized areas that can have different permissions. The NX
permission changes are performed during free_initmem(), so that init
memory can still be reclaimed.

This uses section size instead of PMD size to reduce memory lost to
padding on non-LPAE systems.
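
On non-LPAE, a PMD covers 2MiB (a pair of 1MiB sections), so aligning the
region boundaries to SECTION_SIZE (1MiB) instead of PMD size roughly halves
the worst-case padding: each boundary the linker script aligns below can
waste at most one 1MiB section.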

Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/kernel/vmlinux.lds.S |  17 +++++++
 arch/arm/mm/Kconfig           |   9 ++++
 arch/arm/mm/init.c            | 106 ++++++++++++++++++++++++++++++++++++++++++
 arch/arm/mm/mmu.c             |  13 +++++-
 4 files changed, 144 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 7bcee5c9b604..08fa667ef2f1 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -8,6 +8,9 @@
 #include <asm/thread_info.h>
 #include <asm/memory.h>
 #include <asm/page.h>
+#ifdef CONFIG_ARM_KERNMEM_PERMS
+#include <asm/pgtable.h>
+#endif
 	
 #define PROC_INFO							\
 	. = ALIGN(4);							\
@@ -90,6 +93,11 @@ SECTIONS
 		_text = .;
 		HEAD_TEXT
 	}
+
+#ifdef CONFIG_ARM_KERNMEM_PERMS
+	. = ALIGN(1<<SECTION_SHIFT);
+#endif
+
 	.text : {			/* Real text segment		*/
 		_stext = .;		/* Text and read-only data	*/
 			__exception_text_start = .;
@@ -145,7 +153,11 @@ SECTIONS
 	_etext = .;			/* End of text and rodata section */
 
 #ifndef CONFIG_XIP_KERNEL
+# ifdef CONFIG_ARM_KERNMEM_PERMS
+	. = ALIGN(1<<SECTION_SHIFT);
+# else
 	. = ALIGN(PAGE_SIZE);
+# endif
 	__init_begin = .;
 #endif
 	/*
@@ -220,7 +232,12 @@ SECTIONS
 	. = PAGE_OFFSET + TEXT_OFFSET;
 #else
 	__init_end = .;
+
+#ifdef CONFIG_ARM_KERNMEM_PERMS
+	. = ALIGN(1<<SECTION_SHIFT);
+#else
 	. = ALIGN(THREAD_SIZE);
+#endif
 	__data_loc = .;
 #endif
 
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index c348eaee7ee2..0ea121dbf940 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -1007,3 +1007,12 @@ config ARCH_SUPPORTS_BIG_ENDIAN
 	help
 	  This option specifies the architecture can support big endian
 	  operation.
+
+config ARM_KERNMEM_PERMS
+	bool "Restrict kernel memory permissions"
+	help
+	  If this is set, kernel memory other than kernel text (and rodata)
+	  will be made non-executable. The tradeoff is that each region is
+	  padded to section-size (1MiB) boundaries (because their permissions
+	  are different and splitting the 1M pages into 4K ones causes TLB
+	  performance problems), wasting memory.
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad82c05bfc3a..ccf392ef40d4 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -32,6 +32,11 @@
 #include <asm/tlb.h>
 #include <asm/fixmap.h>
 
+#ifdef CONFIG_ARM_KERNMEM_PERMS
+#include <asm/system_info.h>
+#include <asm/cp15.h>
+#endif
+
 #include <asm/mach/arch.h>
 #include <asm/mach/map.h>
 
@@ -615,11 +620,112 @@ void __init mem_init(void)
 	}
 }
 
+#ifdef CONFIG_ARM_KERNMEM_PERMS
+struct section_perm {
+	unsigned long start;
+	unsigned long end;
+	pmdval_t mask;
+	pmdval_t prot;
+};
+
+struct section_perm nx_perms[] = {
+	/* Make page tables, etc. before _stext RW (set NX). */
+	{
+		.start	= PAGE_OFFSET,
+		.end	= (unsigned long)_stext,
+		.mask	= ~PMD_SECT_XN,
+		.prot	= PMD_SECT_XN,
+	},
+	/* Make init RW (set NX). */
+	{
+		.start	= (unsigned long)__init_begin,
+		.end	= (unsigned long)_sdata,
+		.mask	= ~PMD_SECT_XN,
+		.prot	= PMD_SECT_XN,
+	},
+};
+
+/*
+ * Updates section permissions only for the current mm (sections are
+ * copied into each mm). During startup, this is the init_mm.
+ */
+static inline void section_update(unsigned long addr, pmdval_t mask,
+				  pmdval_t prot)
+{
+	struct mm_struct *mm;
+	pmd_t *pmd;
+
+	mm = current->active_mm;
+	pmd = pmd_offset(pud_offset(pgd_offset(mm, addr), addr), addr);
+
+#ifdef CONFIG_ARM_LPAE
+	pmd[0] = __pmd((pmd_val(pmd[0]) & mask) | prot);
+#else
+	if (addr & SECTION_SIZE)
+		pmd[1] = __pmd((pmd_val(pmd[1]) & mask) | prot);
+	else
+		pmd[0] = __pmd((pmd_val(pmd[0]) & mask) | prot);
+#endif
+	flush_pmd_entry(pmd);
+	local_flush_tlb_kernel_range(addr, addr + SECTION_SIZE);
+}
+
+/* Make sure extended page tables are in use. */
+static inline bool arch_has_strict_perms(void)
+{
+	unsigned int cr;
+
+	if (cpu_architecture() < CPU_ARCH_ARMv6)
+		return false;
+
+	cr = get_cr();
+	if (!(cr & CR_XP))
+		return false;
+
+	return true;
+}
+
+#define set_section_perms(perms, field)	{				\
+	size_t i;							\
+	unsigned long addr;						\
+									\
+	if (!arch_has_strict_perms())					\
+		return;							\
+									\
+	for (i = 0; i < ARRAY_SIZE(perms); i++) {			\
+		if (!IS_ALIGNED(perms[i].start, SECTION_SIZE) ||	\
+		    !IS_ALIGNED(perms[i].end, SECTION_SIZE)) {		\
+			pr_err("BUG: section %lx-%lx not aligned to %lx\n", \
+				perms[i].start, perms[i].end,		\
+				SECTION_SIZE);				\
+			continue;					\
+		}							\
+									\
+		for (addr = perms[i].start;				\
+		     addr < perms[i].end;				\
+		     addr += SECTION_SIZE)				\
+			section_update(addr, perms[i].mask,		\
+				       perms[i].field);			\
+	}								\
+}
+
+static inline void fix_kernmem_perms(void)
+{
+	set_section_perms(nx_perms, prot);
+}
+#else
+static inline void fix_kernmem_perms(void) { }
+#endif /* CONFIG_ARM_KERNMEM_PERMS */
+
 void free_initmem(void)
 {
 #ifdef CONFIG_HAVE_TCM
 	extern char __tcm_start, __tcm_end;
+#endif
+
+	fix_kernmem_perms();
 
+#ifdef CONFIG_HAVE_TCM
 	poison_init_mem(&__tcm_start, &__tcm_end - &__tcm_start);
 	free_reserved_area(&__tcm_start, &__tcm_end, -1, "TCM link");
 #endif
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 8dbdadc42d75..3b81f111ef27 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1368,13 +1368,24 @@ static void __init map_lowmem(void)
 		if (start >= end)
 			break;
 
-		if (end < kernel_x_start || start >= kernel_x_end) {
+		if (end < kernel_x_start) {
 			map.pfn = __phys_to_pfn(start);
 			map.virtual = __phys_to_virt(start);
 			map.length = end - start;
 			map.type = MT_MEMORY_RWX;
 
 			create_mapping(&map);
+		} else if (start >= kernel_x_end) {
+			map.pfn = __phys_to_pfn(start);
+			map.virtual = __phys_to_virt(start);
+			map.length = end - start;
+#ifdef CONFIG_ARM_KERNMEM_PERMS
+			map.type = MT_MEMORY_RW;
+#else
+			map.type = MT_MEMORY_RWX;
+#endif
+
+			create_mapping(&map);
 		} else {
 			/* This better cover the entire kernel */
 			if (start < kernel_x_start) {
-- 
1.9.1


* [PATCH 7/7] ARM: mm: allow text and rodata sections to be read-only
  2014-08-06 19:32 ` Kees Cook
@ 2014-08-06 19:32   ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-06 19:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Liu hua, Mark Salter, Rabin Vincent, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

This introduces CONFIG_DEBUG_RODATA, making kernel text and rodata
read-only. Additionally, this splits rodata from text so that rodata can
also be NX, which may lead to wasted memory when aligning to SECTION_SIZE.
The read-only areas are made temporarily writable during ftrace updates.
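
For context (not part of this diff): generic code in init/main.c is expected
to invoke mark_rodata_ro() once init memory has been freed when
CONFIG_DEBUG_RODATA is enabled, roughly along these lines (a from-memory
sketch of kernel_init(); the exact guard may differ):

	free_initmem();
	mark_rodata_ro();

The ftrace hook below then temporarily flips the text back to RW around code
modification under stop_machine().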

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/include/asm/cacheflush.h | 10 +++++++++
 arch/arm/kernel/ftrace.c          | 19 ++++++++++++++++
 arch/arm/kernel/vmlinux.lds.S     |  3 +++
 arch/arm/mm/Kconfig               | 12 ++++++++++
 arch/arm/mm/init.c                | 46 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 90 insertions(+)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index fd43f7f55b70..0cdf1e31df86 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -487,6 +487,16 @@ int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
 
+#ifdef CONFIG_DEBUG_RODATA
+void mark_rodata_ro(void);
+void set_kernel_text_rw(void);
+void set_kernel_text_ro(void);
+#else
+static inline void set_kernel_text_rw(void) { }
+static inline void set_kernel_text_ro(void) { }
+#endif
+
 void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
 			     void *kaddr, unsigned long len);
+
 #endif
diff --git a/arch/arm/kernel/ftrace.c b/arch/arm/kernel/ftrace.c
index af9a8a927a4e..b8c75e45a950 100644
--- a/arch/arm/kernel/ftrace.c
+++ b/arch/arm/kernel/ftrace.c
@@ -15,6 +15,7 @@
 #include <linux/ftrace.h>
 #include <linux/uaccess.h>
 #include <linux/module.h>
+#include <linux/stop_machine.h>
 
 #include <asm/cacheflush.h>
 #include <asm/opcodes.h>
@@ -35,6 +36,22 @@
 
 #define	OLD_NOP		0xe1a00000	/* mov r0, r0 */
 
+static int __ftrace_modify_code(void *data)
+{
+	int *command = data;
+
+	set_kernel_text_rw();
+	ftrace_modify_all_code(*command);
+	set_kernel_text_ro();
+
+	return 0;
+}
+
+void arch_ftrace_update_code(int command)
+{
+	stop_machine(__ftrace_modify_code, &command, NULL);
+}
+
 static unsigned long ftrace_nop_replace(struct dyn_ftrace *rec)
 {
 	return rec->arch.old_mcount ? OLD_NOP : NOP;
@@ -73,6 +90,8 @@ int ftrace_arch_code_modify_prepare(void)
 int ftrace_arch_code_modify_post_process(void)
 {
 	set_all_modules_text_ro();
+	/* Make sure any TLB misses during machine stop are cleared. */
+	flush_tlb_all();
 	return 0;
 }
 
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 08fa667ef2f1..ec79e7268e09 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -120,6 +120,9 @@ SECTIONS
 			ARM_CPU_KEEP(PROC_INFO)
 	}
 
+#ifdef CONFIG_DEBUG_RODATA
+	. = ALIGN(1<<SECTION_SHIFT);
+#endif
 	RO_DATA(PAGE_SIZE)
 
 	. = ALIGN(4);
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 0ea121dbf940..3a98cf340344 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -1016,3 +1016,15 @@ config ARM_KERNMEM_PERMS
 	  padded to section-size (1MiB) boundaries (because their permissions
 	  are different and splitting the 1M pages into 4K ones causes TLB
 	  performance problems), wasting memory.
+
+config DEBUG_RODATA
+	bool "Make kernel text and rodata read-only"
+	depends on ARM_KERNMEM_PERMS
+	default y
+	help
+	  If this is set, kernel text and rodata will be made read-only. This
+	  is to help catch accidental or malicious attempts to change the
+	  kernel's executable code. Additionally splits rodata from kernel
+	  text so it can be made explicitly non-executable. This creates
+	  another section-size padded region, so it can waste more memory
+	  space while gaining the read-only protections.
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ccf392ef40d4..1a0248a9cfdb 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -626,6 +626,7 @@ struct section_perm {
 	unsigned long end;
 	pmdval_t mask;
 	pmdval_t prot;
+	pmdval_t clear;
 };
 
 struct section_perm nx_perms[] = {
@@ -643,8 +644,35 @@ struct section_perm nx_perms[] = {
 		.mask	= ~PMD_SECT_XN,
 		.prot	= PMD_SECT_XN,
 	},
+#ifdef CONFIG_DEBUG_RODATA
+	/* Make rodata NX (set RO in ro_perms below). */
+	{
+		.start  = (unsigned long)__start_rodata,
+		.end    = (unsigned long)__init_begin,
+		.mask   = ~PMD_SECT_XN,
+		.prot   = PMD_SECT_XN,
+	},
+#endif
 };
 
+#ifdef CONFIG_DEBUG_RODATA
+struct section_perm ro_perms[] = {
+	/* Make kernel code and rodata RX (set RO). */
+	{
+		.start  = (unsigned long)_stext,
+		.end    = (unsigned long)__init_begin,
+#ifdef CONFIG_ARM_LPAE
+		.mask   = ~PMD_SECT_RDONLY,
+		.prot   = PMD_SECT_RDONLY,
+#else
+		.mask   = ~(PMD_SECT_APX | PMD_SECT_AP_WRITE),
+		.prot   = PMD_SECT_APX | PMD_SECT_AP_WRITE,
+		.clear  = PMD_SECT_AP_WRITE,
+#endif
+	},
+};
+#endif
+
 /*
  * Updates section permissions only for the current mm (sections are
  * copied into each mm). During startup, this is the init_mm.
@@ -713,6 +741,24 @@ static inline void fix_kernmem_perms(void)
 {
 	set_section_perms(nx_perms, prot);
 }
+
+#ifdef CONFIG_DEBUG_RODATA
+void mark_rodata_ro(void)
+{
+	set_section_perms(ro_perms, prot);
+}
+
+void set_kernel_text_rw(void)
+{
+	set_section_perms(ro_perms, clear);
+}
+
+void set_kernel_text_ro(void)
+{
+	set_section_perms(ro_perms, prot);
+}
+#endif /* CONFIG_DEBUG_RODATA */
+
 #else
 static inline void fix_kernmem_perms(void) { }
 #endif /* CONFIG_ARM_KERNMEM_PERMS */
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/7] arm: use generic fixmap.h
  2014-08-06 19:32   ` Kees Cook
@ 2014-08-07  2:24     ` Laura Abbott
  -1 siblings, 0 replies; 30+ messages in thread
From: Laura Abbott @ 2014-08-07  2:24 UTC (permalink / raw)
  To: Kees Cook, linux-kernel
  Cc: Nicolas Pitre, Rob Herring, Liu hua, Catalin Marinas,
	Tomasz Figa, Will Deacon, Leif Lindholm, Doug Anderson,
	Rabin Vincent, Nikolay Borisov, Mark Salter,
	Russell King - ARM Linux, linux-arm-kernel

On 8/6/2014 12:32 PM, Kees Cook wrote:
> From: Mark Salter <msalter@redhat.com>
> 
> ARM is different from other architectures in that fixmap pages are indexed
> with a positive offset from FIXADDR_START.  Other architectures index with
> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
> definitions, this patch redefines FIXADDR_TOP to be inclusive of the
> useable range.  That is, FIXADDR_TOP is the virtual address of the topmost
> fixed page.  The newly defined FIXADDR_END is the first virtual address
> past the fixed mappings.
> 
> Signed-off-by: Mark Salter <msalter@redhat.com>
> Reviewed-by: Doug Anderson <dianders@chromium.org>
> [update for "ARM: 8031/2: change fixmap mapping region to support 32 CPUs"]
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  arch/arm/include/asm/fixmap.h | 27 +++++++++------------------
>  arch/arm/mm/init.c            |  2 +-
>  2 files changed, 10 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
> index 74124b0d0d79..190142d174ee 100644
> --- a/arch/arm/include/asm/fixmap.h
> +++ b/arch/arm/include/asm/fixmap.h
> @@ -2,27 +2,18 @@
>  #define _ASM_FIXMAP_H
>  
>  #define FIXADDR_START		0xffc00000UL
> -#define FIXADDR_TOP		0xffe00000UL
> -#define FIXADDR_SIZE		(FIXADDR_TOP - FIXADDR_START)
> +#define FIXADDR_END		0xffe00000UL
> +#define FIXADDR_TOP		(FIXADDR_END - PAGE_SIZE)
> +#define FIXADDR_SIZE		(FIXADDR_END - FIXADDR_START)
>  
>  #define FIX_KMAP_NR_PTES	(FIXADDR_SIZE >> PAGE_SHIFT)
>  
> -#define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
> -#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)
> +enum fixed_addresses {
> +	FIX_KMAP_BEGIN,
> +	FIX_KMAP_END = FIX_KMAP_NR_PTES - 1,
> +	__end_of_fixed_addresses
> +};
>  
> -extern void __this_fixmap_does_not_exist(void);
> -
> -static inline unsigned long fix_to_virt(const unsigned int idx)
> -{
> -	if (idx >= FIX_KMAP_NR_PTES)
> -		__this_fixmap_does_not_exist();
> -	return __fix_to_virt(idx);
> -}
> -
> -static inline unsigned int virt_to_fix(const unsigned long vaddr)
> -{
> -	BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
> -	return __virt_to_fix(vaddr);
> -}
> +#include <asm-generic/fixmap.h>
>  
>  #endif
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index 659c75d808dc..ad82c05bfc3a 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -570,7 +570,7 @@ void __init mem_init(void)
>  			MLK(DTCM_OFFSET, (unsigned long) dtcm_end),
>  			MLK(ITCM_OFFSET, (unsigned long) itcm_end),
>  #endif
> -			MLK(FIXADDR_START, FIXADDR_TOP),
> +			MLK(FIXADDR_START, FIXADDR_END),
>  			MLM(VMALLOC_START, VMALLOC_END),
>  			MLM(PAGE_OFFSET, (unsigned long)high_memory),
>  #ifdef CONFIG_HIGHMEM
> 

I'm working off of a 3.14 kernel, and with this backported,
kmap_atomic does not actually map properly for me. Below is
my quick fix (not sure if we should be using __set_fixmap?). Or did
I fail at backportery?

-----8<-----
From ea11b54704aa0a311ab3d05fd70072679bfe1a0b Mon Sep 17 00:00:00 2001
From: Laura Abbott <lauraa@codeaurora.org>
Date: Wed, 6 Aug 2014 19:20:46 -0700
Subject: [PATCH] arm: Get proper pte for fixmaps

The generic fixmap.h gets indexes from high to low instead
of low to high so the fixmap idx does not correspond to
the array entry in fixmap_page_table. Get the proper pte
to update.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm/mm/highmem.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index bedca3a..7f08e64 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -22,14 +22,18 @@
 static inline void set_fixmap_pte(int idx, pte_t pte)
 {
 	unsigned long vaddr = __fix_to_virt(idx);
-	set_pte_ext(fixmap_page_table + idx, pte, 0);
+	pte_t *ppte = pte_offset_kernel(pmd_off_k(FIXADDR_START), vaddr);
+
+	set_pte_ext(ppte, pte, 0);
 	local_flush_tlb_kernel_page(vaddr);
 }
 
 static inline pte_t get_fixmap_pte(unsigned long vaddr)
 {
 	unsigned long idx = __virt_to_fix(vaddr);
-	return *(fixmap_page_table + idx);
+	pte_t *ppte = pte_offset_kernel(pmd_off_k(FIXADDR_START), vaddr);
+
+	return *ppte;
 }
 
 void *kmap(struct page *page)
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation

^ permalink raw reply related	[flat|nested] 30+ messages in thread
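
The mismatch Laura hit comes from the indexing direction: the old ARM code handed out fixmap slots bottom-up from FIXADDR_START, while asm-generic/fixmap.h hands them out top-down from FIXADDR_TOP, so an index no longer lines up with the bottom-up fixmap_page_table array. A rough side-by-side of the two conventions, using the constants from this series (the _bottom_up/_top_down suffixes exist only for this comparison; treat it as a sketch):

/* Old ARM scheme: slot 0 is the lowest page in the fixmap region. */
#define __fix_to_virt_bottom_up(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))

/* Generic scheme (asm-generic/fixmap.h): slot 0 is the highest page. */
#define __fix_to_virt_top_down(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))

/*
 * With FIXADDR_START = 0xffc00000 and FIXADDR_TOP = 0xffe00000 - PAGE_SIZE,
 * index 0 maps to 0xffc00000 in the old scheme but to 0xffdff000 in the
 * generic one, so "fixmap_page_table + idx" no longer names the right PTE;
 * looking the PTE up from the virtual address, as in the fix above, does.
 */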

* Re: [PATCH 1/7] arm: use generic fixmap.h
  2014-08-07  2:24     ` Laura Abbott
@ 2014-08-07 14:35       ` Kees Cook
  -1 siblings, 0 replies; 30+ messages in thread
From: Kees Cook @ 2014-08-07 14:35 UTC (permalink / raw)
  To: Laura Abbott
  Cc: LKML, Nicolas Pitre, Rob Herring, Liu hua, Catalin Marinas,
	Tomasz Figa, Will Deacon, Leif Lindholm, Doug Anderson,
	Rabin Vincent, Nikolay Borisov, Mark Salter,
	Russell King - ARM Linux, linux-arm-kernel

On Wed, Aug 6, 2014 at 7:24 PM, Laura Abbott <lauraa@codeaurora.org> wrote:
> On 8/6/2014 12:32 PM, Kees Cook wrote:
>> From: Mark Salter <msalter@redhat.com>
>>
>> ARM is different from other architectures in that fixmap pages are indexed
>> with a positive offset from FIXADDR_START.  Other architectures index with
>> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
>> definitions, this patch redefines FIXADDR_TOP to be inclusive of the
>> useable range.  That is, FIXADDR_TOP is the virtual address of the topmost
>> fixed page.  The newly defined FIXADDR_END is the first virtual address
>> past the fixed mappings.
>>
>> Signed-off-by: Mark Salter <msalter@redhat.com>
>> Reviewed-by: Doug Anderson <dianders@chromium.org>
>> [update for "ARM: 8031/2: change fixmap mapping region to support 32 CPUs"]
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  arch/arm/include/asm/fixmap.h | 27 +++++++++------------------
>>  arch/arm/mm/init.c            |  2 +-
>>  2 files changed, 10 insertions(+), 19 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
>> index 74124b0d0d79..190142d174ee 100644
>> --- a/arch/arm/include/asm/fixmap.h
>> +++ b/arch/arm/include/asm/fixmap.h
>> @@ -2,27 +2,18 @@
>>  #define _ASM_FIXMAP_H
>>
>>  #define FIXADDR_START                0xffc00000UL
>> -#define FIXADDR_TOP          0xffe00000UL
>> -#define FIXADDR_SIZE         (FIXADDR_TOP - FIXADDR_START)
>> +#define FIXADDR_END          0xffe00000UL
>> +#define FIXADDR_TOP          (FIXADDR_END - PAGE_SIZE)
>> +#define FIXADDR_SIZE         (FIXADDR_END - FIXADDR_START)
>>
>>  #define FIX_KMAP_NR_PTES     (FIXADDR_SIZE >> PAGE_SHIFT)
>>
>> -#define __fix_to_virt(x)     (FIXADDR_START + ((x) << PAGE_SHIFT))
>> -#define __virt_to_fix(x)     (((x) - FIXADDR_START) >> PAGE_SHIFT)
>> +enum fixed_addresses {
>> +     FIX_KMAP_BEGIN,
>> +     FIX_KMAP_END = FIX_KMAP_NR_PTES - 1,
>> +     __end_of_fixed_addresses
>> +};
>>
>> -extern void __this_fixmap_does_not_exist(void);
>> -
>> -static inline unsigned long fix_to_virt(const unsigned int idx)
>> -{
>> -     if (idx >= FIX_KMAP_NR_PTES)
>> -             __this_fixmap_does_not_exist();
>> -     return __fix_to_virt(idx);
>> -}
>> -
>> -static inline unsigned int virt_to_fix(const unsigned long vaddr)
>> -{
>> -     BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
>> -     return __virt_to_fix(vaddr);
>> -}
>> +#include <asm-generic/fixmap.h>
>>
>>  #endif
>> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
>> index 659c75d808dc..ad82c05bfc3a 100644
>> --- a/arch/arm/mm/init.c
>> +++ b/arch/arm/mm/init.c
>> @@ -570,7 +570,7 @@ void __init mem_init(void)
>>                       MLK(DTCM_OFFSET, (unsigned long) dtcm_end),
>>                       MLK(ITCM_OFFSET, (unsigned long) itcm_end),
>>  #endif
>> -                     MLK(FIXADDR_START, FIXADDR_TOP),
>> +                     MLK(FIXADDR_START, FIXADDR_END),
>>                       MLM(VMALLOC_START, VMALLOC_END),
>>                       MLM(PAGE_OFFSET, (unsigned long)high_memory),
>>  #ifdef CONFIG_HIGHMEM
>>
>
> I'm working off of a 3.14 kernel and with this backported
> kmap_atomic does not actually map properly for me. This was
>
> my quick fix (not sure if we should be using __set_fixmap?). Or did
> I fail at backportery?
>
> -----8<-----
> From ea11b54704aa0a311ab3d05fd70072679bfe1a0b Mon Sep 17 00:00:00 2001
> From: Laura Abbott <lauraa@codeaurora.org>
> Date: Wed, 6 Aug 2014 19:20:46 -0700
> Subject: [PATCH] arm: Get proper pte for fixmaps
>
> The generic fixmap.h gets indexes from high to low instead
> of low to high so the fixmap idx does not correspond to
> the array entry in fixmap_page_table. Get the proper pte
> to update.
>
> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> ---
>  arch/arm/mm/highmem.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
> index bedca3a..7f08e64 100644
> --- a/arch/arm/mm/highmem.c
> +++ b/arch/arm/mm/highmem.c
> @@ -22,14 +22,18 @@
>  static inline void set_fixmap_pte(int idx, pte_t pte)
>  {
>         unsigned long vaddr = __fix_to_virt(idx);
> -       set_pte_ext(fixmap_page_table + idx, pte, 0);
> +       pte_t *ppte = pte_offset_kernel(pmd_off_k(FIXADDR_START), vaddr);
> +
> +       set_pte_ext(ppte, pte, 0);
>         local_flush_tlb_kernel_page(vaddr);
>  }
>
>  static inline pte_t get_fixmap_pte(unsigned long vaddr)
>  {
>         unsigned long idx = __virt_to_fix(vaddr);
> -       return *(fixmap_page_table + idx);
> +       pte_t *ppte = pte_offset_kernel(pmd_off_k(FIXADDR_START), vaddr);
> +
> +       return *ppte;
>  }

IIUC, what you have is correct: it matches what I had to do for
__set_fixmap and flips the high/low indexing. Thanks! I'll merge this
with the "use generic fixmap" patch, since it's what flips around the
ordering.

-Kees

-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 30+ messages in thread
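
For readers without patch 2/7 in front of them: the __set_fixmap() Kees refers to installs (or clears) the PTE behind a fixmap slot and flushes the TLB for that page. A minimal sketch of that shape is below; it is an illustration following the same pte_offset_kernel()/pmd_off_k() approach Laura used, not the actual implementation from patch 2/7.

#include <linux/mm.h>
#include <asm/fixmap.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>
#include "mm.h"		/* for pmd_off_k(), as in arch/arm/mm */

/* Illustrative sketch only -- not the code from patch 2/7. */
void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
{
	unsigned long vaddr = __fix_to_virt(idx);
	pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr);

	if (pgprot_val(prot))
		set_pte_ext(pte, pfn_pte(phys >> PAGE_SHIFT, prot), 0);
	else
		pte_clear(&init_mm, vaddr, pte);
	local_flush_tlb_kernel_page(vaddr);
}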

* Re: [PATCH 1/7] arm: use generic fixmap.h
  2014-08-06 19:32   ` Kees Cook
@ 2014-08-07 15:15     ` Max Filippov
  -1 siblings, 0 replies; 30+ messages in thread
From: Max Filippov @ 2014-08-07 15:15 UTC (permalink / raw)
  To: Kees Cook
  Cc: LKML, Mark Salter, Liu hua, Rabin Vincent, Nikolay Borisov,
	Nicolas Pitre, Leif Lindholm, Tomasz Figa, Rob Herring,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

Hi,

On Wed, Aug 6, 2014 at 11:32 PM, Kees Cook <keescook@chromium.org> wrote:
> ARM is different from other architectures in that fixmap pages are indexed
> with a positive offset from FIXADDR_START.  Other architectures index with
> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h

Does anybody know if there's any reason why the generic fixmap.h uses negative
offsets? It complicates things with no obvious benefit if you, for example, try to
align a virtual address in the fixmap region with the physical page color (that's
why I switched xtensa to positive fixmap addressing in v3.17).

-- 
Thanks.
-- Max

^ permalink raw reply	[flat|nested] 30+ messages in thread
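
For readers unfamiliar with the coloring issue Max mentions: on a VIPT cache, the cache "color" of a page is determined by low virtual-address bits, and aliasing is avoided by giving a mapping the same color as the physical page. A small sketch of why bottom-up indexing makes that easier (the NR_COLORS constant and helper name are invented for this illustration; this is not xtensa's code):

#define NR_COLORS	4	/* e.g. a 16 KiB VIPT cache with 4 KiB pages */

/*
 * Bottom-up: virt = FIXADDR_START + (idx << PAGE_SHIFT), and FIXADDR_START
 * is color-aligned, so the color of a slot is just the low bits of its index.
 */
static inline unsigned int fixmap_slot_color(unsigned int idx)
{
	return idx % NR_COLORS;
}

/*
 * Top-down: virt = FIXADDR_TOP - (idx << PAGE_SHIFT), so the color of a slot
 * also depends on the color of FIXADDR_TOP and decreases as idx grows, which
 * is the extra bookkeeping being referred to.
 */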

* Re: [PATCH 1/7] arm: use generic fixmap.h
  2014-08-07 15:15     ` Max Filippov
@ 2014-08-07 15:22       ` Rob Herring
  -1 siblings, 0 replies; 30+ messages in thread
From: Rob Herring @ 2014-08-07 15:22 UTC (permalink / raw)
  To: Max Filippov
  Cc: Kees Cook, LKML, Mark Salter, Liu hua, Rabin Vincent,
	Nikolay Borisov, Nicolas Pitre, Leif Lindholm, Tomasz Figa,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

On Thu, Aug 7, 2014 at 10:15 AM, Max Filippov <jcmvbkbc@gmail.com> wrote:
> Hi,
>
> On Wed, Aug 6, 2014 at 11:32 PM, Kees Cook <keescook@chromium.org> wrote:
>> ARM is different from other architectures in that fixmap pages are indexed
>> with a positive offset from FIXADDR_START.  Other architectures index with
>> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
>
> Does anybody know if there's any reason why generic fixmap.h uses negative
> offsets? It complicates things with no obvious benefit if you e.g. try to align
> virtual address in the fixmap region with physical page color (that's why I've
> switched xtensa to positive fixmap addressing in v3.17).

No, but each arch doing it differently is even more annoying.

Rob

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/7] arm: use generic fixmap.h
  2014-08-07 15:22       ` Rob Herring
@ 2014-08-07 15:32         ` Nicolas Pitre
  -1 siblings, 0 replies; 30+ messages in thread
From: Nicolas Pitre @ 2014-08-07 15:32 UTC (permalink / raw)
  To: Rob Herring
  Cc: Max Filippov, Kees Cook, LKML, Mark Salter, Liu hua,
	Rabin Vincent, Nikolay Borisov, Leif Lindholm, Tomasz Figa,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

On Thu, 7 Aug 2014, Rob Herring wrote:

> On Thu, Aug 7, 2014 at 10:15 AM, Max Filippov <jcmvbkbc@gmail.com> wrote:
> > Hi,
> >
> > On Wed, Aug 6, 2014 at 11:32 PM, Kees Cook <keescook@chromium.org> wrote:
> >> ARM is different from other architectures in that fixmap pages are indexed
> >> with a positive offset from FIXADDR_START.  Other architectures index with
> >> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
> >
> > Does anybody know if there's any reason why generic fixmap.h uses negative
> > offsets? It complicates things with no obvious benefit if you e.g. try to align
> > virtual address in the fixmap region with physical page color (that's why I've
> > switched xtensa to positive fixmap addressing in v3.17).
> 
> No, but each arch doing it differently is even more annoying.

Why not switch everybody to positive offsets then?


Nicolas

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/7] arm: use generic fixmap.h
  2014-08-07 15:32         ` Nicolas Pitre
@ 2014-08-07 15:42           ` Max Filippov
  -1 siblings, 0 replies; 30+ messages in thread
From: Max Filippov @ 2014-08-07 15:42 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Rob Herring, Kees Cook, LKML, Mark Salter, Liu hua,
	Rabin Vincent, Nikolay Borisov, Leif Lindholm, Tomasz Figa,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

On Thu, Aug 7, 2014 at 7:32 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> On Thu, 7 Aug 2014, Rob Herring wrote:
>
>> On Thu, Aug 7, 2014 at 10:15 AM, Max Filippov <jcmvbkbc@gmail.com> wrote:
>> > Hi,
>> >
>> > On Wed, Aug 6, 2014 at 11:32 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> ARM is different from other architectures in that fixmap pages are indexed
>> >> with a positive offset from FIXADDR_START.  Other architectures index with
>> >> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
>> >
>> > Does anybody know if there's any reason why generic fixmap.h uses negative
>> > offsets? It complicates things with no obvious benefit if you e.g. try to align
>> > virtual address in the fixmap region with physical page color (that's why I've
>> > switched xtensa to positive fixmap addressing in v3.17).
>>
>> No, but each arch doing it differently is even more annoying.
>
> Why not switching everybody to positive offsets then?

I can cook a patch if people agree that that'd be good.

-- 
Thanks.
-- Max

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 1/7] arm: use generic fixmap.h
@ 2014-08-07 15:42           ` Max Filippov
  0 siblings, 0 replies; 30+ messages in thread
From: Max Filippov @ 2014-08-07 15:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Aug 7, 2014 at 7:32 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> On Thu, 7 Aug 2014, Rob Herring wrote:
>
>> On Thu, Aug 7, 2014 at 10:15 AM, Max Filippov <jcmvbkbc@gmail.com> wrote:
>> > Hi,
>> >
>> > On Wed, Aug 6, 2014 at 11:32 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> ARM is different from other architectures in that fixmap pages are indexed
>> >> with a positive offset from FIXADDR_START.  Other architectures index with
>> >> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
>> >
>> > Does anybody know if there's any reason why generic fixmap.h uses negative
>> > offsets? It complicates things with no obvious benefit if you e.g. try to align
>> > virtual address in the fixmap region with physical page color (that's why I've
>> > switched xtensa to positive fixmap addressing in v3.17).
>>
>> No, but each arch doing it differently is even more annoying.
>
> Why not switching everybody to positive offsets then?

I can cook a patch if people agree that that'd be good.

-- 
Thanks.
-- Max

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/7] arm: use generic fixmap.h
  2014-08-07 15:42           ` Max Filippov
@ 2014-08-07 17:23             ` Mark Salter
  -1 siblings, 0 replies; 30+ messages in thread
From: Mark Salter @ 2014-08-07 17:23 UTC (permalink / raw)
  To: Max Filippov
  Cc: Nicolas Pitre, Rob Herring, Kees Cook, LKML, Liu hua,
	Rabin Vincent, Nikolay Borisov, Leif Lindholm, Tomasz Figa,
	Doug Anderson, Will Deacon, Laura Abbott, Catalin Marinas,
	Russell King - ARM Linux, linux-arm-kernel

On Thu, 2014-08-07 at 19:42 +0400, Max Filippov wrote:
> On Thu, Aug 7, 2014 at 7:32 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> > On Thu, 7 Aug 2014, Rob Herring wrote:
> >
> >> On Thu, Aug 7, 2014 at 10:15 AM, Max Filippov <jcmvbkbc@gmail.com> wrote:
> >> > Hi,
> >> >
> >> > On Wed, Aug 6, 2014 at 11:32 PM, Kees Cook <keescook@chromium.org> wrote:
> >> >> ARM is different from other architectures in that fixmap pages are indexed
> >> >> with a positive offset from FIXADDR_START.  Other architectures index with
> >> >> a negative offset from FIXADDR_TOP.  In order to use the generic fixmap.h
> >> >
> >> > Does anybody know if there's any reason why generic fixmap.h uses negative
> >> > offsets? It complicates things with no obvious benefit if you e.g. try to align
> >> > virtual address in the fixmap region with physical page color (that's why I've
> >> > switched xtensa to positive fixmap addressing in v3.17).
> >>
> >> No, but each arch doing it differently is even more annoying.
> >
> > Why not switching everybody to positive offsets then?
> 
> I can cook a patch if people agree that that'd be good.
> 

I think that would be fine. I think x86 was first and used a negative
offset. Others that followed just copied that. When I did the
generic fixmap patch, using a negative offset was the natural thing to
do. ARM was the only arch doing it differently.



^ permalink raw reply	[flat|nested] 30+ messages in thread

