* [PATCH v4 0/8] introduce post-init read-only memory
@ 2016-01-19 18:08 ` Kees Cook
  0 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

One of the easiest ways to protect the kernel from attack is to reduce
the internal attack surface exposed when a "write" flaw is available. The
more of the kernel we can make read-only, the smaller that surface
becomes.

Many things are written to only during __init and never changed again.
These cannot be made "const", since we do actually need to write to them
once and the compiler would place them in memory that is read-only from
the start. Instead, move these items into a memory region that is made
read-only by mark_rodata_ro(), which runs after all kernel __init code
has finished.

This series introduces __ro_after_init as a way to mark such memory, and
uses it on the x86 vDSO to kill an extant kernel exploitation method. It
also adds a "rodata=" kernel parameter to help debug future uses, and an
lkdtm test to check the results.

-Kees

v4:
- rebased
v3:
- consolidated mark_rodata_ro()
- make CONFIG_DEBUG_RODATA always enabled on x86, mingo
- enhanced strtobool and potential callers to use "on"/"off"
- use strtobool for rodata= param, gregkh
v2:
- renamed __read_only to __ro_after_init


* [PATCH v4 1/8] asm-generic: consolidate mark_rodata_ro()
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Russell King, Catalin Marinas, Will Deacon,
	James E.J. Bottomley, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, kernel-hardening,
	linux-kernel, linux-arch

Instead of defining mark_rodata_ro() in each architecture, consolidate it.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
---
 arch/arm/include/asm/cacheflush.h    | 1 -
 arch/arm64/include/asm/cacheflush.h  | 4 ----
 arch/parisc/include/asm/cacheflush.h | 4 ----
 arch/x86/include/asm/cacheflush.h    | 1 -
 include/linux/init.h                 | 4 ++++
 5 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index d5525bfc7e3e..9156fc303afd 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -491,7 +491,6 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
 #ifdef CONFIG_DEBUG_RODATA
-void mark_rodata_ro(void);
 void set_kernel_text_rw(void);
 void set_kernel_text_ro(void);
 #else
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 7fc294c3bc5b..22dda613f9c9 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -156,8 +156,4 @@ int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
 
-#ifdef CONFIG_DEBUG_RODATA
-void mark_rodata_ro(void);
-#endif
-
 #endif
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 845272ce9cc5..7bd69bd43a01 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -121,10 +121,6 @@ flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vma
 	}
 }
 
-#ifdef CONFIG_DEBUG_RODATA
-void mark_rodata_ro(void);
-#endif
-
 #include <asm/kmap_types.h>
 
 #define ARCH_HAS_KMAP
diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index e63aa38e85fb..c8cff75c5b21 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -92,7 +92,6 @@ void clflush_cache_range(void *addr, unsigned int size);
 #define mmio_flush_range(addr, size) clflush_cache_range(addr, size)
 
 #ifdef CONFIG_DEBUG_RODATA
-void mark_rodata_ro(void);
 extern const int rodata_test_data;
 extern int kernel_set_to_readonly;
 void set_kernel_text_rw(void);
diff --git a/include/linux/init.h b/include/linux/init.h
index b449f378f995..aedb254abc37 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -142,6 +142,10 @@ void prepare_namespace(void);
 void __init load_default_modules(void);
 int __init init_rootfs(void);
 
+#ifdef CONFIG_DEBUG_RODATA
+void mark_rodata_ro(void);
+#endif
+
 extern void (*late_time_init)(void);
 
 extern bool initcall_debug;
-- 
2.6.3


* [PATCH v4 2/8] lib: add "on" and "off" to strtobool
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Rasmus Villemoes, Daniel Borkmann, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy,
	kernel-hardening, linux-kernel, linux-arch

Several places in the kernel expect to use "on" and "off" for their
boolean signifiers, so add them to strtobool.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Daniel Borkmann <daniel@iogearbox.net>
---
 lib/string.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/lib/string.c b/lib/string.c
index 0323c0d5629a..091570708db7 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -635,12 +635,15 @@ EXPORT_SYMBOL(sysfs_streq);
  * @s: input string
  * @res: result
  *
- * This routine returns 0 iff the first character is one of 'Yy1Nn0'.
- * Otherwise it will return -EINVAL.  Value pointed to by res is
- * updated upon finding a match.
+ * This routine returns 0 iff the first character is one of 'Yy1Nn0', or
+ * [oO][NnFf] for "on" and "off". Otherwise it will return -EINVAL.  Value
+ * pointed to by res is updated upon finding a match.
  */
 int strtobool(const char *s, bool *res)
 {
+	if (!s)
+		return -EINVAL;
+
 	switch (s[0]) {
 	case 'y':
 	case 'Y':
@@ -652,6 +655,21 @@ int strtobool(const char *s, bool *res)
 	case '0':
 		*res = false;
 		break;
+	case 'o':
+	case 'O':
+		switch (s[1]) {
+		case 'n':
+		case 'N':
+			*res = true;
+			break;
+		case 'f':
+		case 'F':
+			*res = false;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
 	default:
 		return -EINVAL;
 	}
-- 
2.6.3


* [PATCH v4 3/8] param: convert some "on"/"off" users to strtobool
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

This changes several users of manual "on"/"off" parsing to use strtobool.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/kernel/rtasd.c                  | 10 +++-------
 arch/powerpc/platforms/pseries/hotplug-cpu.c | 11 +++--------
 arch/s390/kernel/time.c                      |  8 ++------
 arch/s390/kernel/topology.c                  |  8 +++-----
 arch/x86/kernel/aperture_64.c                | 13 +++----------
 kernel/time/hrtimer.c                        | 11 +++--------
 kernel/time/tick-sched.c                     | 11 +++--------
 7 files changed, 20 insertions(+), 52 deletions(-)

diff --git a/arch/powerpc/kernel/rtasd.c b/arch/powerpc/kernel/rtasd.c
index 5a2c049c1c61..984e67e91ba3 100644
--- a/arch/powerpc/kernel/rtasd.c
+++ b/arch/powerpc/kernel/rtasd.c
@@ -21,6 +21,7 @@
 #include <linux/cpu.h>
 #include <linux/workqueue.h>
 #include <linux/slab.h>
+#include <linux/string.h>
 
 #include <asm/uaccess.h>
 #include <asm/io.h>
@@ -49,7 +50,7 @@ static unsigned int rtas_error_log_buffer_max;
 static unsigned int event_scan;
 static unsigned int rtas_event_scan_rate;
 
-static int full_rtas_msgs = 0;
+static bool full_rtas_msgs;
 
 /* Stop logging to nvram after first fatal error */
 static int logging_enabled; /* Until we initialize everything,
@@ -592,11 +593,6 @@ __setup("surveillance=", surveillance_setup);
 
 static int __init rtasmsgs_setup(char *str)
 {
-	if (strcmp(str, "on") == 0)
-		full_rtas_msgs = 1;
-	else if (strcmp(str, "off") == 0)
-		full_rtas_msgs = 0;
-
-	return 1;
+	return strtobool(str, &full_rtas_msgs);
 }
 __setup("rtasmsgs=", rtasmsgs_setup);
diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
index 32274f72fe3f..bb333e9fd77a 100644
--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
@@ -27,6 +27,7 @@
 #include <linux/cpu.h>
 #include <linux/of.h>
 #include <linux/slab.h>
+#include <linux/string.h>
 #include <asm/prom.h>
 #include <asm/rtas.h>
 #include <asm/firmware.h>
@@ -47,20 +48,14 @@ static DEFINE_PER_CPU(enum cpu_state_vals, current_state) = CPU_STATE_OFFLINE;
 
 static enum cpu_state_vals default_offline_state = CPU_STATE_OFFLINE;
 
-static int cede_offline_enabled __read_mostly = 1;
+static bool cede_offline_enabled __read_mostly = true;
 
 /*
  * Enable/disable cede_offline when available.
  */
 static int __init setup_cede_offline(char *str)
 {
-	if (!strcmp(str, "off"))
-		cede_offline_enabled = 0;
-	else if (!strcmp(str, "on"))
-		cede_offline_enabled = 1;
-	else
-		return 0;
-	return 1;
+	return strtobool(str, &cede_offline_enabled);
 }
 
 __setup("cede_offline=", setup_cede_offline);
diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
index 99f84ac31307..afc7fc9684ba 100644
--- a/arch/s390/kernel/time.c
+++ b/arch/s390/kernel/time.c
@@ -1433,7 +1433,7 @@ device_initcall(etr_init_sysfs);
 /*
  * Server Time Protocol (STP) code.
  */
-static int stp_online;
+static bool stp_online;
 static struct stp_sstpi stp_info;
 static void *stp_page;
 
@@ -1444,11 +1444,7 @@ static struct timer_list stp_timer;
 
 static int __init early_parse_stp(char *p)
 {
-	if (strncmp(p, "off", 3) == 0)
-		stp_online = 0;
-	else if (strncmp(p, "on", 2) == 0)
-		stp_online = 1;
-	return 0;
+	return strtobool(p, &stp_online);
 }
 early_param("stp", early_parse_stp);
 
diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
index 40b8102fdadb..10e388216307 100644
--- a/arch/s390/kernel/topology.c
+++ b/arch/s390/kernel/topology.c
@@ -15,6 +15,7 @@
 #include <linux/delay.h>
 #include <linux/init.h>
 #include <linux/slab.h>
+#include <linux/string.h>
 #include <linux/cpu.h>
 #include <linux/smp.h>
 #include <linux/mm.h>
@@ -37,7 +38,7 @@ static void set_topology_timer(void);
 static void topology_work_fn(struct work_struct *work);
 static struct sysinfo_15_1_x *tl_info;
 
-static int topology_enabled = 1;
+static bool topology_enabled = true;
 static DECLARE_WORK(topology_work, topology_work_fn);
 
 /*
@@ -444,10 +445,7 @@ static const struct cpumask *cpu_book_mask(int cpu)
 
 static int __init early_parse_topology(char *p)
 {
-	if (strncmp(p, "off", 3))
-		return 0;
-	topology_enabled = 0;
-	return 0;
+	return strtobool(p, &topology_enabled);
 }
 early_param("topology", early_parse_topology);
 
diff --git a/arch/x86/kernel/aperture_64.c b/arch/x86/kernel/aperture_64.c
index 6e85f713641d..6608b00a516a 100644
--- a/arch/x86/kernel/aperture_64.c
+++ b/arch/x86/kernel/aperture_64.c
@@ -20,6 +20,7 @@
 #include <linux/pci_ids.h>
 #include <linux/pci.h>
 #include <linux/bitops.h>
+#include <linux/string.h>
 #include <linux/suspend.h>
 #include <asm/e820.h>
 #include <asm/io.h>
@@ -227,19 +228,11 @@ static u32 __init search_agp_bridge(u32 *order, int *valid_agp)
 	return 0;
 }
 
-static int gart_fix_e820 __initdata = 1;
+static bool gart_fix_e820 __initdata = true;
 
 static int __init parse_gart_mem(char *p)
 {
-	if (!p)
-		return -EINVAL;
-
-	if (!strncmp(p, "off", 3))
-		gart_fix_e820 = 0;
-	else if (!strncmp(p, "on", 2))
-		gart_fix_e820 = 1;
-
-	return 0;
+	return strtobool(p, &gart_fix_e820);
 }
 early_param("gart_fix_e820", parse_gart_mem);
 
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 435b8850dd80..40d82fe4d2a5 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -39,6 +39,7 @@
 #include <linux/syscalls.h>
 #include <linux/kallsyms.h>
 #include <linux/interrupt.h>
+#include <linux/string.h>
 #include <linux/tick.h>
 #include <linux/seq_file.h>
 #include <linux/err.h>
@@ -515,7 +516,7 @@ static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)
 /*
  * High resolution timer enabled ?
  */
-static int hrtimer_hres_enabled __read_mostly  = 1;
+static bool hrtimer_hres_enabled __read_mostly  = true;
 unsigned int hrtimer_resolution __read_mostly = LOW_RES_NSEC;
 EXPORT_SYMBOL_GPL(hrtimer_resolution);
 
@@ -524,13 +525,7 @@ EXPORT_SYMBOL_GPL(hrtimer_resolution);
  */
 static int __init setup_hrtimer_hres(char *str)
 {
-	if (!strcmp(str, "off"))
-		hrtimer_hres_enabled = 0;
-	else if (!strcmp(str, "on"))
-		hrtimer_hres_enabled = 1;
-	else
-		return 0;
-	return 1;
+	return strtobool(str, &hrtimer_hres_enabled);
 }
 
 __setup("highres=", setup_hrtimer_hres);
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 9cc20af58c76..f5ea98490ffa 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -19,6 +19,7 @@
 #include <linux/percpu.h>
 #include <linux/profile.h>
 #include <linux/sched.h>
+#include <linux/string.h>
 #include <linux/module.h>
 #include <linux/irq_work.h>
 #include <linux/posix-timers.h>
@@ -387,20 +388,14 @@ void __init tick_nohz_init(void)
 /*
  * NO HZ enabled ?
  */
-static int tick_nohz_enabled __read_mostly  = 1;
+static bool tick_nohz_enabled __read_mostly  = true;
 unsigned long tick_nohz_active  __read_mostly;
 /*
  * Enable / Disable tickless mode
  */
 static int __init setup_tick_nohz(char *str)
 {
-	if (!strcmp(str, "off"))
-		tick_nohz_enabled = 0;
-	else if (!strcmp(str, "on"))
-		tick_nohz_enabled = 1;
-	else
-		return 0;
-	return 1;
+	return strtobool(str, &tick_nohz_enabled);
 }
 
 __setup("nohz=", setup_tick_nohz);
-- 
2.6.3


* [PATCH v4 4/8] init: create cmdline param to disable readonly
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

It may be useful to debug writes to the readonly sections of memory,
so provide a cmdline "rodata=off" to allow for this. This can be
expanded in the future to support "log" and "write" modes, but that
will need to be architecture-specific.

Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 Documentation/kernel-parameters.txt |  4 ++++
 init/main.c                         | 27 +++++++++++++++++++++++----
 kernel/debug/kdb/kdb_bp.c           |  4 +---
 3 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 3ea869d7a31c..bf4820c53992 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3450,6 +3450,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 
 	ro		[KNL] Mount root device read-only on boot
 
+	rodata=		[KNL]
+		on	Mark read-only kernel memory as read-only (default).
+		off	Leave read-only kernel memory writable for debugging.
+
 	root=		[KNL] Root filesystem
 			See name_to_dev_t comment in init/do_mounts.c.
 
diff --git a/init/main.c b/init/main.c
index c6ebefafa496..93dfb300cdfd 100644
--- a/init/main.c
+++ b/init/main.c
@@ -93,9 +93,6 @@ static int kernel_init(void *);
 extern void init_IRQ(void);
 extern void fork_init(void);
 extern void radix_tree_init(void);
-#ifndef CONFIG_DEBUG_RODATA
-static inline void mark_rodata_ro(void) { }
-#endif
 
 /*
  * Debug helper: via this flag we know that we are in 'early bootup code'
@@ -929,6 +926,28 @@ static int try_to_run_init_process(const char *init_filename)
 
 static noinline void __init kernel_init_freeable(void);
 
+#ifdef CONFIG_DEBUG_RODATA
+static bool rodata_enabled = true;
+static int __init set_debug_rodata(char *str)
+{
+	return strtobool(str, &rodata_enabled);
+}
+__setup("rodata=", set_debug_rodata);
+
+static void mark_readonly(void)
+{
+	if (rodata_enabled)
+		mark_rodata_ro();
+	else
+		pr_info("Kernel memory protection disabled.\n");
+}
+#else
+static inline void mark_readonly(void)
+{
+	pr_warn("This architecture does not have kernel memory protection.\n");
+}
+#endif
+
 static int __ref kernel_init(void *unused)
 {
 	int ret;
@@ -937,7 +956,7 @@ static int __ref kernel_init(void *unused)
 	/* need to finish all async __init code before freeing the memory */
 	async_synchronize_full();
 	free_initmem();
-	mark_rodata_ro();
+	mark_readonly();
 	system_state = SYSTEM_RUNNING;
 	numa_default_policy();
 
diff --git a/kernel/debug/kdb/kdb_bp.c b/kernel/debug/kdb/kdb_bp.c
index e1dbf4a2c69e..90ff129c88a2 100644
--- a/kernel/debug/kdb/kdb_bp.c
+++ b/kernel/debug/kdb/kdb_bp.c
@@ -153,13 +153,11 @@ static int _kdb_bp_install(struct pt_regs *regs, kdb_bp_t *bp)
 	} else {
 		kdb_printf("%s: failed to set breakpoint at 0x%lx\n",
 			   __func__, bp->bp_addr);
-#ifdef CONFIG_DEBUG_RODATA
 		if (!bp->bp_type) {
 			kdb_printf("Software breakpoints are unavailable.\n"
-				   "  Change the kernel CONFIG_DEBUG_RODATA=n\n"
+				   "  Boot the kernel with rodata=off\n"
 				   "  OR use hw breaks: help bph\n");
 		}
-#endif
 		return 1;
 	}
 	return 0;
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* [PATCH v4 5/8] x86: make CONFIG_DEBUG_RODATA non-optional
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

This removes the CONFIG_DEBUG_RODATA option and makes it always enabled.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                  |  3 +++
 arch/x86/Kconfig.debug            | 18 +++---------------
 arch/x86/include/asm/cacheflush.h |  5 -----
 arch/x86/include/asm/kvm_para.h   |  7 -------
 arch/x86/include/asm/sections.h   |  2 +-
 arch/x86/kernel/ftrace.c          |  6 +++---
 arch/x86/kernel/kgdb.c            |  8 ++------
 arch/x86/kernel/test_nx.c         |  2 --
 arch/x86/kernel/test_rodata.c     |  2 +-
 arch/x86/kernel/vmlinux.lds.S     | 25 +++++++++++--------------
 arch/x86/mm/init_32.c             |  3 ---
 arch/x86/mm/init_64.c             |  3 ---
 arch/x86/mm/pageattr.c            |  2 +-
 13 files changed, 25 insertions(+), 61 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4a10ba9e95da..69164efd0333 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -303,6 +303,9 @@ config ARCH_SUPPORTS_UPROBES
 config FIX_EARLYCON_MEM
 	def_bool y
 
+config DEBUG_RODATA
+	def_bool y
+
 config PGTABLE_LEVELS
 	int
 	default 4 if X86_64
diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
index 9b18ed97a8a2..7816b7b276f4 100644
--- a/arch/x86/Kconfig.debug
+++ b/arch/x86/Kconfig.debug
@@ -74,28 +74,16 @@ config EFI_PGT_DUMP
 	  issues with the mapping of the EFI runtime regions into that
 	  table.
 
-config DEBUG_RODATA
-	bool "Write protect kernel read-only data structures"
-	default y
-	depends on DEBUG_KERNEL
-	---help---
-	  Mark the kernel read-only data as write-protected in the pagetables,
-	  in order to catch accidental (and incorrect) writes to such const
-	  data. This is recommended so that we can catch kernel bugs sooner.
-	  If in doubt, say "Y".
-
 config DEBUG_RODATA_TEST
-	bool "Testcase for the DEBUG_RODATA feature"
-	depends on DEBUG_RODATA
+	bool "Testcase for marking rodata read-only"
 	default y
 	---help---
-	  This option enables a testcase for the DEBUG_RODATA
-	  feature as well as for the change_page_attr() infrastructure.
+	  This option enables a testcase for setting rodata read-only
+	  as well as for the change_page_attr() infrastructure.
 	  If in doubt, say "N"
 
 config DEBUG_WX
 	bool "Warn on W+X mappings at boot"
-	depends on DEBUG_RODATA
 	select X86_PTDUMP_CORE
 	---help---
 	  Generate a warning if any W+X mappings are found at boot.
diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index c8cff75c5b21..61518cf79437 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -91,15 +91,10 @@ void clflush_cache_range(void *addr, unsigned int size);
 
 #define mmio_flush_range(addr, size) clflush_cache_range(addr, size)
 
-#ifdef CONFIG_DEBUG_RODATA
 extern const int rodata_test_data;
 extern int kernel_set_to_readonly;
 void set_kernel_text_rw(void);
 void set_kernel_text_ro(void);
-#else
-static inline void set_kernel_text_rw(void) { }
-static inline void set_kernel_text_ro(void) { }
-#endif
 
 #ifdef CONFIG_DEBUG_RODATA_TEST
 int rodata_test(void);
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index c1adf33fdd0d..bc62e7cbf1b1 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -17,15 +17,8 @@ static inline bool kvm_check_and_clear_guest_paused(void)
 }
 #endif /* CONFIG_KVM_GUEST */
 
-#ifdef CONFIG_DEBUG_RODATA
 #define KVM_HYPERCALL \
         ALTERNATIVE(".byte 0x0f,0x01,0xc1", ".byte 0x0f,0x01,0xd9", X86_FEATURE_VMMCALL)
-#else
-/* On AMD processors, vmcall will generate a trap that we will
- * then rewrite to the appropriate instruction.
- */
-#define KVM_HYPERCALL ".byte 0x0f,0x01,0xc1"
-#endif
 
 /* For KVM hypercalls, a three-byte sequence of either the vmcall or the vmmcall
  * instruction.  The hypervisor may replace it with something else but only the
diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h
index 0a5242428659..13b6cdd0af57 100644
--- a/arch/x86/include/asm/sections.h
+++ b/arch/x86/include/asm/sections.h
@@ -7,7 +7,7 @@
 extern char __brk_base[], __brk_limit[];
 extern struct exception_table_entry __stop___ex_table[];
 
-#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
+#if defined(CONFIG_X86_64)
 extern char __end_rodata_hpage_align[];
 #endif
 
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 29408d6d6626..05c9e3f5b6d7 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -81,9 +81,9 @@ within(unsigned long addr, unsigned long start, unsigned long end)
 static unsigned long text_ip_addr(unsigned long ip)
 {
 	/*
-	 * On x86_64, kernel text mappings are mapped read-only with
-	 * CONFIG_DEBUG_RODATA. So we use the kernel identity mapping instead
-	 * of the kernel text mapping to modify the kernel text.
+	 * On x86_64, kernel text mappings are mapped read-only, so we use
+	 * the kernel identity mapping instead of the kernel text mapping
+	 * to modify the kernel text.
 	 *
 	 * For 32bit kernels, these mappings are same and we can use
 	 * kernel identity mapping to modify code.
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 44256a62702b..ed15cd486d06 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -750,9 +750,7 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 {
 	int err;
-#ifdef CONFIG_DEBUG_RODATA
 	char opc[BREAK_INSTR_SIZE];
-#endif /* CONFIG_DEBUG_RODATA */
 
 	bpt->type = BP_BREAKPOINT;
 	err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
@@ -761,7 +759,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 		return err;
 	err = probe_kernel_write((char *)bpt->bpt_addr,
 				 arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE);
-#ifdef CONFIG_DEBUG_RODATA
 	if (!err)
 		return err;
 	/*
@@ -778,13 +775,12 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 	if (memcmp(opc, arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE))
 		return -EINVAL;
 	bpt->type = BP_POKE_BREAKPOINT;
-#endif /* CONFIG_DEBUG_RODATA */
+
 	return err;
 }
 
 int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 {
-#ifdef CONFIG_DEBUG_RODATA
 	int err;
 	char opc[BREAK_INSTR_SIZE];
 
@@ -801,8 +797,8 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
 		goto knl_write;
 	return err;
+
 knl_write:
-#endif /* CONFIG_DEBUG_RODATA */
 	return probe_kernel_write((char *)bpt->bpt_addr,
 				  (char *)bpt->saved_instr, BREAK_INSTR_SIZE);
 }
diff --git a/arch/x86/kernel/test_nx.c b/arch/x86/kernel/test_nx.c
index 3f92ce07e525..27538f183c3b 100644
--- a/arch/x86/kernel/test_nx.c
+++ b/arch/x86/kernel/test_nx.c
@@ -142,7 +142,6 @@ static int test_NX(void)
 	 * by the error message
 	 */
 
-#ifdef CONFIG_DEBUG_RODATA
 	/* Test 3: Check if the .rodata section is executable */
 	if (rodata_test_data != 0xC3) {
 		printk(KERN_ERR "test_nx: .rodata marker has invalid value\n");
@@ -151,7 +150,6 @@ static int test_NX(void)
 		printk(KERN_ERR "test_nx: .rodata section is executable\n");
 		ret = -ENODEV;
 	}
-#endif
 
 #if 0
 	/* Test 4: Check if the .data section of a module is executable */
diff --git a/arch/x86/kernel/test_rodata.c b/arch/x86/kernel/test_rodata.c
index 5ecbfe5099da..cb4a01b41e27 100644
--- a/arch/x86/kernel/test_rodata.c
+++ b/arch/x86/kernel/test_rodata.c
@@ -76,5 +76,5 @@ int rodata_test(void)
 }
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("Testcase for the DEBUG_RODATA infrastructure");
+MODULE_DESCRIPTION("Testcase for marking rodata as read-only");
 MODULE_AUTHOR("Arjan van de Ven <arjan@linux.intel.com>");
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 74e4bf11f562..fe133b710bef 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -41,29 +41,28 @@ ENTRY(phys_startup_64)
 jiffies_64 = jiffies;
 #endif
 
-#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
+#if defined(CONFIG_X86_64)
 /*
- * On 64-bit, align RODATA to 2MB so that even with CONFIG_DEBUG_RODATA
- * we retain large page mappings for boundaries spanning kernel text, rodata
- * and data sections.
+ * On 64-bit, align RODATA to 2MB so we retain large page mappings for
+ * boundaries spanning kernel text, rodata and data sections.
  *
  * However, kernel identity mappings will have different RWX permissions
  * to the pages mapping to text and to the pages padding (which are freed) the
  * text section. Hence kernel identity mappings will be broken to smaller
  * pages. For 64-bit, kernel text and kernel identity mappings are different,
- * so we can enable protection checks that come with CONFIG_DEBUG_RODATA,
- * as well as retain 2MB large page mappings for kernel text.
+ * so we can enable protection checks as well as retain 2MB large page
+ * mappings for kernel text.
  */
-#define X64_ALIGN_DEBUG_RODATA_BEGIN	. = ALIGN(HPAGE_SIZE);
+#define X64_ALIGN_RODATA_BEGIN	. = ALIGN(HPAGE_SIZE);
 
-#define X64_ALIGN_DEBUG_RODATA_END				\
+#define X64_ALIGN_RODATA_END					\
 		. = ALIGN(HPAGE_SIZE);				\
 		__end_rodata_hpage_align = .;
 
 #else
 
-#define X64_ALIGN_DEBUG_RODATA_BEGIN
-#define X64_ALIGN_DEBUG_RODATA_END
+#define X64_ALIGN_RODATA_BEGIN
+#define X64_ALIGN_RODATA_END
 
 #endif
 
@@ -112,13 +111,11 @@ SECTIONS
 
 	EXCEPTION_TABLE(16) :text = 0x9090
 
-#if defined(CONFIG_DEBUG_RODATA)
 	/* .text should occupy whole number of pages */
 	. = ALIGN(PAGE_SIZE);
-#endif
-	X64_ALIGN_DEBUG_RODATA_BEGIN
+	X64_ALIGN_RODATA_BEGIN
 	RO_DATA(PAGE_SIZE)
-	X64_ALIGN_DEBUG_RODATA_END
+	X64_ALIGN_RODATA_END
 
 	/* Data */
 	.data : AT(ADDR(.data) - LOAD_OFFSET) {
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index cb4ef3de61f9..2ebfbaf61142 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -871,7 +871,6 @@ static noinline int do_test_wp_bit(void)
 	return flag;
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 const int rodata_test_data = 0xC3;
 EXPORT_SYMBOL_GPL(rodata_test_data);
 
@@ -960,5 +959,3 @@ void mark_rodata_ro(void)
 	if (__supported_pte_mask & _PAGE_NX)
 		debug_checkwx();
 }
-#endif
-
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5488d21123bd..a40b755c67e3 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1074,7 +1074,6 @@ void __init mem_init(void)
 	mem_init_print_info(NULL);
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 const int rodata_test_data = 0xC3;
 EXPORT_SYMBOL_GPL(rodata_test_data);
 
@@ -1166,8 +1165,6 @@ void mark_rodata_ro(void)
 	debug_checkwx();
 }
 
-#endif
-
 int kern_addr_valid(unsigned long addr)
 {
 	unsigned long above = ((long)addr) >> __VIRTUAL_MASK_SHIFT;
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index fc6a4c8f6e2a..4df560040314 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -283,7 +283,7 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address,
 		   __pa_symbol(__end_rodata) >> PAGE_SHIFT))
 		pgprot_val(forbidden) |= _PAGE_RW;
 
-#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
+#if defined(CONFIG_X86_64)
 	/*
 	 * Once the kernel maps the text as RO (kernel_set_to_readonly is set),
 	 * kernel text mappings for the large page aligned text, rodata sections
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* [PATCH v4 6/8] introduce post-init read-only memory
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

One of the easiest ways to protect the kernel from attack is to reduce
the internal attack surface exposed when a "write" flaw is available. By
making as much of the kernel read-only as possible, we reduce the
attack surface.

Many things are written to only during __init, and never changed
again. These cannot be made "const" since the compiler will do the wrong
thing (we do actually need to write to them). Instead, move these items
into a memory region that will be made read-only during mark_rodata_ro()
which happens after all kernel __init code has finished.

This introduces __ro_after_init as a way to mark such memory, and adds
some documentation about the existing __read_mostly marking.

Based on work by PaX Team and Brad Spengler.
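
As a usage illustration (a userspace approximation: the section attribute matches the one the linker-script hunk below collects into .rodata, but nothing outside the kernel ever write-protects it, and the __init stand-in is a no-op here):

```c
#include <assert.h>

/* Userspace stand-ins for the kernel annotations; only the section name
 * mirrors what include/asm-generic/vmlinux.lds.h gathers into .rodata. */
#define __ro_after_init __attribute__((__section__(".data..ro_after_init")))
#define __init /* freed-after-boot marker in the kernel; empty here */

/* Written exactly once while the system boots, effectively const afterwards. */
static unsigned int max_entries __ro_after_init;

static void __init subsys_init(void)
{
	/* Legal: mark_rodata_ro() has not yet run when __init code executes. */
	max_entries = 128;
}
```

After mark_rodata_ro(), a later store to max_entries would fault instead of silently succeeding, which is the whole point of the annotation.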

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/parisc/include/asm/cache.h   |  3 +++
 include/asm-generic/vmlinux.lds.h |  1 +
 include/linux/cache.h             | 14 ++++++++++++++
 3 files changed, 18 insertions(+)

diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h
index 3d0e17bcc8e9..df0f52bd18b4 100644
--- a/arch/parisc/include/asm/cache.h
+++ b/arch/parisc/include/asm/cache.h
@@ -22,6 +22,9 @@
 
 #define __read_mostly __attribute__((__section__(".data..read_mostly")))
 
+/* Read-only memory is marked before mark_rodata_ro() is called. */
+#define __ro_after_init	__read_mostly
+
 void parisc_cache_init(void);	/* initializes cache-flushing */
 void disable_sr_hashing_asm(int); /* low level support for above */
 void disable_sr_hashing(void);   /* turns off space register hashing */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index c4bd0e2c173c..772c784ba763 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -256,6 +256,7 @@
 	.rodata           : AT(ADDR(.rodata) - LOAD_OFFSET) {		\
 		VMLINUX_SYMBOL(__start_rodata) = .;			\
 		*(.rodata) *(.rodata.*)					\
+		*(.data..ro_after_init)	/* Read only after init */	\
 		*(__vermagic)		/* Kernel version magic */	\
 		. = ALIGN(8);						\
 		VMLINUX_SYMBOL(__start___tracepoints_ptrs) = .;		\
diff --git a/include/linux/cache.h b/include/linux/cache.h
index 17e7e82d2aa7..1be04f8c563a 100644
--- a/include/linux/cache.h
+++ b/include/linux/cache.h
@@ -12,10 +12,24 @@
 #define SMP_CACHE_BYTES L1_CACHE_BYTES
 #endif
 
+/*
+ * __read_mostly is used to keep rarely changing variables out of frequently
+ * updated cachelines. If an architecture doesn't support it, ignore the
+ * hint.
+ */
 #ifndef __read_mostly
 #define __read_mostly
 #endif
 
+/*
+ * __ro_after_init is used to mark things that are read-only after init (i.e.
+ * after mark_rodata_ro() has been called). These are effectively read-only,
+ * but may get written to during init, so can't live in .rodata (via "const").
+ */
+#ifndef __ro_after_init
+#define __ro_after_init __attribute__((__section__(".data..ro_after_init")))
+#endif
+
 #ifndef ____cacheline_aligned
 #define ____cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
 #endif
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* [kernel-hardening] [PATCH v4 6/8] introduce post-init read-only memory
@ 2016-01-19 18:08   ` Kees Cook
  0 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

One of the easiest ways to protect the kernel from attack is to reduce
the internal attack surface exposed when a "write" flaw is available. By
making as much of the kernel read-only as possible, we reduce the
attack surface.

Many things are written to only during __init, and never changed
again. These cannot be made "const" since the compiler will do the wrong
thing (we do actually need to write to them). Instead, move these items
into a memory region that will be made read-only during mark_rodata_ro()
which happens after all kernel __init code has finished.

This introduces __ro_after_init as a way to mark such memory, and adds
some documentation about the existing __read_mostly marking.

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/parisc/include/asm/cache.h   |  3 +++
 include/asm-generic/vmlinux.lds.h |  1 +
 include/linux/cache.h             | 14 ++++++++++++++
 3 files changed, 18 insertions(+)

diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h
index 3d0e17bcc8e9..df0f52bd18b4 100644
--- a/arch/parisc/include/asm/cache.h
+++ b/arch/parisc/include/asm/cache.h
@@ -22,6 +22,9 @@
 
 #define __read_mostly __attribute__((__section__(".data..read_mostly")))
 
+/* Read-only memory is marked before mark_rodata_ro() is called. */
+#define __ro_after_init	__read_mostly
+
 void parisc_cache_init(void);	/* initializes cache-flushing */
 void disable_sr_hashing_asm(int); /* low level support for above */
 void disable_sr_hashing(void);   /* turns off space register hashing */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index c4bd0e2c173c..772c784ba763 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -256,6 +256,7 @@
 	.rodata           : AT(ADDR(.rodata) - LOAD_OFFSET) {		\
 		VMLINUX_SYMBOL(__start_rodata) = .;			\
 		*(.rodata) *(.rodata.*)					\
+		*(.data..ro_after_init)	/* Read only after init */	\
 		*(__vermagic)		/* Kernel version magic */	\
 		. = ALIGN(8);						\
 		VMLINUX_SYMBOL(__start___tracepoints_ptrs) = .;		\
diff --git a/include/linux/cache.h b/include/linux/cache.h
index 17e7e82d2aa7..1be04f8c563a 100644
--- a/include/linux/cache.h
+++ b/include/linux/cache.h
@@ -12,10 +12,24 @@
 #define SMP_CACHE_BYTES L1_CACHE_BYTES
 #endif
 
+/*
+ * __read_mostly is used to keep rarely changing variables out of frequently
+ * updated cachelines. If an architecture doesn't support it, ignore the
+ * hint.
+ */
 #ifndef __read_mostly
 #define __read_mostly
 #endif
 
+/*
+ * __ro_after_init is used to mark things that are read-only after init (i.e.
+ * after mark_rodata_ro() has been called). These are effectively read-only,
+ * but may get written to during init, so can't live in .rodata (via "const").
+ */
+#ifndef __ro_after_init
+#define __ro_after_init __attribute__((__section__(".data..ro_after_init")))
+#endif
+
 #ifndef ____cacheline_aligned
 #define ____cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
 #endif
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* [PATCH v4 7/8] lkdtm: verify that __ro_after_init works correctly
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

The new __ro_after_init section should be writable during init, but not
afterward. Validate that it gets updated during init and can't be written
to once mark_rodata_ro() has run.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/misc/lkdtm.c | 29 ++++++++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/misc/lkdtm.c b/drivers/misc/lkdtm.c
index 11fdadc68e53..2a6eaf1122b4 100644
--- a/drivers/misc/lkdtm.c
+++ b/drivers/misc/lkdtm.c
@@ -103,6 +103,7 @@ enum ctype {
 	CT_EXEC_USERSPACE,
 	CT_ACCESS_USERSPACE,
 	CT_WRITE_RO,
+	CT_WRITE_RO_AFTER_INIT,
 	CT_WRITE_KERN,
 };
 
@@ -140,6 +141,7 @@ static char* cp_type[] = {
 	"EXEC_USERSPACE",
 	"ACCESS_USERSPACE",
 	"WRITE_RO",
+	"WRITE_RO_AFTER_INIT",
 	"WRITE_KERN",
 };
 
@@ -162,6 +164,7 @@ static DEFINE_SPINLOCK(lock_me_up);
 static u8 data_area[EXEC_SIZE];
 
 static const unsigned long rodata = 0xAA55AA55;
+static unsigned long ro_after_init __ro_after_init = 0x55AA5500;
 
 module_param(recur_count, int, 0644);
 MODULE_PARM_DESC(recur_count, " Recursion level for the stack overflow test");
@@ -503,11 +506,28 @@ static void lkdtm_do_action(enum ctype which)
 		break;
 	}
 	case CT_WRITE_RO: {
-		unsigned long *ptr;
+		/* Explicitly cast away "const" for the test. */
+		unsigned long *ptr = (unsigned long *)&rodata;
 
-		ptr = (unsigned long *)&rodata;
+		pr_info("attempting bad rodata write at %p\n", ptr);
+		*ptr ^= 0xabcd1234;
 
-		pr_info("attempting bad write at %p\n", ptr);
+		break;
+	}
+	case CT_WRITE_RO_AFTER_INIT: {
+		unsigned long *ptr = &ro_after_init;
+
+		/*
+		 * Verify we were written to during init. Since an Oops
+		 * is considered a "success", a failure is to just skip the
+		 * real test.
+		 */
+		if ((*ptr & 0xAA) != 0xAA) {
+			pr_info("%p was NOT written during init!?\n", ptr);
+			break;
+		}
+
+		pr_info("attempting bad ro_after_init write at %p\n", ptr);
 		*ptr ^= 0xabcd1234;
 
 		break;
@@ -817,6 +837,9 @@ static int __init lkdtm_module_init(void)
 	int n_debugfs_entries = 1; /* Assume only the direct entry */
 	int i;
 
+	/* Make sure we can write to __ro_after_init values during __init */
+	ro_after_init |= 0xAA;
+
 	/* Register debugfs interface */
 	lkdtm_debugfs_root = debugfs_create_dir("provoke-crash", NULL);
 	if (!lkdtm_debugfs_root) {
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* [PATCH v4 8/8] x86, vdso: mark vDSO read-only after init
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-19 18:08   ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-19 18:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Kees Cook, Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, kernel-hardening, linux-kernel, linux-arch

The vDSO does not need to be writable after __init, so mark it as
__ro_after_init. This kills an extant exploit method: writing to the vDSO
from kernel space so that userspace executes the modified code, as
demonstrated here to bypass SMEP restrictions: http://itszn.com/blog/?p=21

The memory map (with added vDSO address reporting) shows the vDSO moving
into read-only memory:

Before:
[    0.143067] vDSO @ ffffffff82004000
[    0.143551] vDSO @ ffffffff82006000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e05000      20K   ro             GLB NX pte
0xffffffff81e05000-0xffffffff82000000    2028K   ro                 NX pte
0xffffffff82000000-0xffffffff8214f000    1340K   RW             GLB NX pte
0xffffffff8214f000-0xffffffff82281000    1224K   RW                 NX pte
0xffffffff82281000-0xffffffff82400000    1532K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

After:
[    0.145062] vDSO @ ffffffff81da1000
[    0.146057] vDSO @ ffffffff81da4000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e0b000      44K   ro             GLB NX pte
0xffffffff81e0b000-0xffffffff82000000    2004K   ro                 NX pte
0xffffffff82000000-0xffffffff8214c000    1328K   RW             GLB NX pte
0xffffffff8214c000-0xffffffff8227e000    1224K   RW                 NX pte
0xffffffff8227e000-0xffffffff82400000    1544K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/entry/vdso/vdso2c.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/vdso/vdso2c.h b/arch/x86/entry/vdso/vdso2c.h
index 0224987556ce..eb93a3137ed2 100644
--- a/arch/x86/entry/vdso/vdso2c.h
+++ b/arch/x86/entry/vdso/vdso2c.h
@@ -140,7 +140,7 @@ static void BITSFUNC(go)(void *raw_addr, size_t raw_len,
 	fprintf(outfile, "#include <asm/vdso.h>\n");
 	fprintf(outfile, "\n");
 	fprintf(outfile,
-		"static unsigned char raw_data[%lu] __page_aligned_data = {",
+		"static unsigned char raw_data[%lu] __ro_after_init __aligned(PAGE_SIZE) = {",
 		mapping_size);
 	for (j = 0; j < stripped_len; j++) {
 		if (j % 10 == 0)
@@ -150,7 +150,7 @@ static void BITSFUNC(go)(void *raw_addr, size_t raw_len,
 	}
 	fprintf(outfile, "\n};\n\n");
 
-	fprintf(outfile, "static struct page *pages[%lu];\n\n",
+	fprintf(outfile, "static struct page *pages[%lu] __ro_after_init;\n\n",
 		mapping_size / 4096);
 
 	fprintf(outfile, "const struct vdso_image %s = {\n", name);
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* Re: [PATCH v4 8/8] x86, vdso: mark vDSO read-only after init
  2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
  (?)
@ 2016-01-19 19:09     ` Andy Lutomirski
  -1 siblings, 0 replies; 104+ messages in thread
From: Andy Lutomirski @ 2016-01-19 19:09 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ingo Molnar, H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, X86 ML, Arnd Bergmann, PaX Team, Emese Revfy,
	kernel-hardening, linux-kernel, linux-arch

On Tue, Jan 19, 2016 at 10:08 AM, Kees Cook <keescook@chromium.org> wrote:
> The vDSO does not need to be writable after __init, so mark it as
> __ro_after_init. The result kills the exploit method of writing to the
> vDSO from kernel space resulting in userspace executing the modified code,
> as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21
>
> The memory map (with added vDSO address reporting) shows the vDSO moving
> into read-only memory:

Acked-by: Andy Lutomirski <luto@kernel.org>

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH v4 2/8] lib: add "on" and "off" to strtobool
  2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
@ 2016-01-20  2:09     ` Joe Perches
  -1 siblings, 0 replies; 104+ messages in thread
From: Joe Perches @ 2016-01-20  2:09 UTC (permalink / raw)
  To: Kees Cook, Ingo Molnar
  Cc: Rasmus Villemoes, Daniel Borkmann, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy,
	kernel-hardening, linux-kernel, linux-arch

On Tue, 2016-01-19 at 10:08 -0800, Kees Cook wrote:
> Several places in the kernel expect to use "on" and "off" for their
> boolean signifiers, so add them to strtobool.

Several places in the kernel pass the address of a single char,
like fs/cifs/cifs_debug.c:

	char c;
	...
	if (strtobool(&c, ...))

Using s[1] might cause problems for those uses.
> diff --git a/lib/string.c b/lib/string.c
[]
> @@ -635,12 +635,15 @@ EXPORT_SYMBOL(sysfs_streq);
>   * @s: input string
>   * @res: result
>   *
> - * This routine returns 0 iff the first character is one of 'Yy1Nn0'.
> - * Otherwise it will return -EINVAL.  Value pointed to by res is
> - * updated upon finding a match.
> + * This routine returns 0 iff the first character is one of 'Yy1Nn0', or
> + * [oO][NnFf] for "on" and "off". Otherwise it will return -EINVAL.  Value
> + * pointed to by res is updated upon finding a match.
>   */
>  int strtobool(const char *s, bool *res)
>  {
> +	if (!s)
> +		return -EINVAL;
> +
>  	switch (s[0]) {
>  	case 'y':
>  	case 'Y':
> @@ -652,6 +655,21 @@ int strtobool(const char *s, bool *res)
>  	case '0':
>  		*res = false;
>  		break;
> +	case 'o':
> +	case 'O':
> +		switch (s[1]) {
> +		case 'n':
> +		case 'N':
> +			*res = true;
> +			break;
> +		case 'f':
> +		case 'F':

Perhaps
		switch (tolower(s[1])) {
is more readable

> +			*res = false;
> +			break;
> +		default:
> +			return -EINVAL;
> +		}
> +		break;

or maybe /* fallthrough */

>  	default:
>  		return -EINVAL;
>  	}

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH v4 8/8] x86, vdso: mark vDSO read-only after init
  2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
@ 2016-01-20  2:51     ` H. Peter Anvin
  -1 siblings, 0 replies; 104+ messages in thread
From: H. Peter Anvin @ 2016-01-20  2:51 UTC (permalink / raw)
  To: Kees Cook, Ingo Molnar
  Cc: Andy Lutomirski, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy,
	kernel-hardening, linux-kernel, linux-arch

On 01/19/16 10:08, Kees Cook wrote:
> The vDSO does not need to be writable after __init, so mark it as
> __ro_after_init. The result kills the exploit method of writing to the
> vDSO from kernel space resulting in userspace executing the modified code,
> as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21
> 
> The memory map (with added vDSO address reporting) shows the vDSO moving
> into read-only memory:
> 
> Before:
> [    0.143067] vDSO @ ffffffff82004000
> [    0.143551] vDSO @ ffffffff82006000
> ---[ High Kernel Mapping ]---
> 0xffffffff80000000-0xffffffff81000000      16M                         pmd
> 0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
> 0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
> 0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
> 0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
> 0xffffffff81e00000-0xffffffff81e05000      20K   ro             GLB NX pte
> 0xffffffff81e05000-0xffffffff82000000    2028K   ro                 NX pte
> 0xffffffff82000000-0xffffffff8214f000    1340K   RW             GLB NX pte
> 0xffffffff8214f000-0xffffffff82281000    1224K   RW                 NX pte
> 0xffffffff82281000-0xffffffff82400000    1532K   RW             GLB NX pte
> 0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
> 0xffffffff83200000-0xffffffffc0000000     974M                         pmd
> 
> After:
> [    0.145062] vDSO @ ffffffff81da1000
> [    0.146057] vDSO @ ffffffff81da4000
> ---[ High Kernel Mapping ]---
> 0xffffffff80000000-0xffffffff81000000      16M                         pmd
> 0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
> 0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
> 0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
> 0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
> 0xffffffff81e00000-0xffffffff81e0b000      44K   ro             GLB NX pte
> 0xffffffff81e0b000-0xffffffff82000000    2004K   ro                 NX pte
> 0xffffffff82000000-0xffffffff8214c000    1328K   RW             GLB NX pte
> 0xffffffff8214c000-0xffffffff8227e000    1224K   RW                 NX pte
> 0xffffffff8227e000-0xffffffff82400000    1544K   RW             GLB NX pte
> 0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
> 0xffffffff83200000-0xffffffffc0000000     974M                         pmd
> 
> Based on work by PaX Team and Brad Spengler.
> 
> Signed-off-by: Kees Cook <keescook@chromium.org>
> Acked-by: Andy Lutomirski <luto@kernel.org>

Acked-by: H. Peter Anvin <hpa@linux.intel.com>

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH v4 8/8] x86, vdso: mark vDSO read-only after init
  2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
  (?)
@ 2016-01-20  2:56     ` Andy Lutomirski
  -1 siblings, 0 replies; 104+ messages in thread
From: Andy Lutomirski @ 2016-01-20  2:56 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ingo Molnar, H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, X86 ML, Arnd Bergmann, PaX Team, Emese Revfy,
	kernel-hardening, linux-kernel, linux-arch

On Tue, Jan 19, 2016 at 10:08 AM, Kees Cook <keescook@chromium.org> wrote:
> The vDSO does not need to be writable after __init, so mark it as
> __ro_after_init. The result kills the exploit method of writing to the
> vDSO from kernel space resulting in userspace executing the modified code,
> as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21
>
> The memory map (with added vDSO address reporting) shows the vDSO moving
> into read-only memory:
>
> Before:
> [    0.143067] vDSO @ ffffffff82004000
> [    0.143551] vDSO @ ffffffff82006000
> ---[ High Kernel Mapping ]---
> 0xffffffff80000000-0xffffffff81000000      16M                         pmd
> 0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
> 0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
> 0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
> 0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
> 0xffffffff81e00000-0xffffffff81e05000      20K   ro             GLB NX pte
> 0xffffffff81e05000-0xffffffff82000000    2028K   ro                 NX pte
> 0xffffffff82000000-0xffffffff8214f000    1340K   RW             GLB NX pte
> 0xffffffff8214f000-0xffffffff82281000    1224K   RW                 NX pte
> 0xffffffff82281000-0xffffffff82400000    1532K   RW             GLB NX pte
> 0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
> 0xffffffff83200000-0xffffffffc0000000     974M                         pmd
>
> After:
> [    0.145062] vDSO @ ffffffff81da1000
> [    0.146057] vDSO @ ffffffff81da4000
> ---[ High Kernel Mapping ]---
> 0xffffffff80000000-0xffffffff81000000      16M                         pmd
> 0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
> 0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
> 0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
> 0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
> 0xffffffff81e00000-0xffffffff81e0b000      44K   ro             GLB NX pte
> 0xffffffff81e0b000-0xffffffff82000000    2004K   ro                 NX pte
> 0xffffffff82000000-0xffffffff8214c000    1328K   RW             GLB NX pte
> 0xffffffff8214c000-0xffffffff8227e000    1224K   RW                 NX pte
> 0xffffffff8227e000-0xffffffff82400000    1544K   RW             GLB NX pte
> 0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
> 0xffffffff83200000-0xffffffffc0000000     974M                         pmd
>
> Based on work by PaX Team and Brad Spengler.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> Acked-by: Andy Lutomirski <luto@kernel.org>
> ---
>  arch/x86/entry/vdso/vdso2c.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/entry/vdso/vdso2c.h b/arch/x86/entry/vdso/vdso2c.h
> index 0224987556ce..eb93a3137ed2 100644
> --- a/arch/x86/entry/vdso/vdso2c.h
> +++ b/arch/x86/entry/vdso/vdso2c.h
> @@ -140,7 +140,7 @@ static void BITSFUNC(go)(void *raw_addr, size_t raw_len,
>         fprintf(outfile, "#include <asm/vdso.h>\n");
>         fprintf(outfile, "\n");
>         fprintf(outfile,
> -               "static unsigned char raw_data[%lu] __page_aligned_data = {",
> +               "static unsigned char raw_data[%lu] __ro_after_init __aligned(PAGE_SIZE) = {",
>                 mapping_size);
>         for (j = 0; j < stripped_len; j++) {
>                 if (j % 10 == 0)
> @@ -150,7 +150,7 @@ static void BITSFUNC(go)(void *raw_addr, size_t raw_len,
>         }
>         fprintf(outfile, "\n};\n\n");
>
> -       fprintf(outfile, "static struct page *pages[%lu];\n\n",
> +       fprintf(outfile, "static struct page *pages[%lu] __ro_after_init;\n\n",

I spoke a bit too soon.  This line of code is gone in -tip:

https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/commit/?h=x86/asm&id=05ef76b20fc4297b0d3f8a956f1c809a8a1b3f1d

My ack stands for the other change in here, though.

--Andy

^ permalink raw reply	[flat|nested] 104+ messages in thread
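The mechanism discussed above — memory that is writable during init and then sealed by mark_rodata_ro() — can be illustrated with a small userspace sketch using mmap/mprotect. This is only an analogy to the kernel behavior, not kernel code; the function names (`make_ro_after_init`, `write_faults`) and the SIGSEGV/sigsetjmp recovery trick are inventions for this demo:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <setjmp.h>
#include <signal.h>
#include <stdbool.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Userspace analogue of __ro_after_init: a page is written during
 * "init", then remapped read-only (the mark_rodata_ro() moment);
 * later writes fault. Hypothetical demo, not kernel code. */

static sigjmp_buf fault_env;

static void segv_handler(int sig)
{
	(void)sig;
	siglongjmp(fault_env, 1);	/* unwind out of the faulting write */
}

/* Returns true if writing through p faults (page is read-only). */
static bool write_faults(volatile char *p)
{
	struct sigaction sa = { .sa_handler = segv_handler };
	sigaction(SIGSEGV, &sa, NULL);
	if (sigsetjmp(fault_env, 1))
		return true;		/* we got here via the handler */
	*p = 'x';			/* faults if the page is RO */
	return false;
}

static char *make_ro_after_init(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	char *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	strcpy(page, "vdso");			/* "__init-time" write: OK */
	mprotect(page, pagesz, PROT_READ);	/* seal it read-only */
	return page;
}
```

Reads keep working after the mprotect(), but any later write takes a fault — which is exactly the property that closes the write-to-vDSO exploit path described in the commit message.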

* Re: [kernel-hardening] [PATCH v4 0/8] introduce post-init read-only memory
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
@ 2016-01-22 17:19   ` David Brown
  -1 siblings, 0 replies; 104+ messages in thread
From: David Brown @ 2016-01-22 17:19 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Ingo Molnar, Kees Cook, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, linux-kernel, linux-arch,
	Laura Abbott

On Tue, Jan 19, 2016 at 10:08:34AM -0800, Kees Cook wrote:

>This introduces __ro_after_init as a way to mark such memory, and uses
>it on the x86 vDSO to kill an extant kernel exploitation method. Also
>adds a new kernel parameter to help debug future use and adds an lkdtm
>test to check the results.

I've tested these patches on 32-bit ARM using the provoke-crashes
test.  However, they do require CONFIG_ARM_KERNMEM_PERMS to be enabled
as well, which does incur additional memory usage.

Do we want to consider making CONFIG_ARM_KERNMEM_PERMS default y for
security reasons, and just document that memory-constrained systems
may want to turn it off?

I'll test arm64 next.

David

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [kernel-hardening] [PATCH v4 0/8] introduce post-init read-only memory
  2016-01-22 17:19   ` David Brown
  (?)
@ 2016-01-22 19:16   ` Laura Abbott
  2016-01-22 19:57       ` Kees Cook
  -1 siblings, 1 reply; 104+ messages in thread
From: Laura Abbott @ 2016-01-22 19:16 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Ingo Molnar, Kees Cook, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, linux-kernel, linux-arch

On 1/22/16 9:19 AM, David Brown wrote:
> On Tue, Jan 19, 2016 at 10:08:34AM -0800, Kees Cook wrote:
>
>> This introduces __ro_after_init as a way to mark such memory, and uses
>> it on the x86 vDSO to kill an extant kernel exploitation method. Also
>> adds a new kernel parameter to help debug future use and adds an lkdtm
>> test to check the results.
>
> I've tested these patches on 32-bit ARM using the provoke-crashes
> test.  However, they do require CONFIG_ARM_KERNMEM_PERMS to be enabled
> as well, which does incur additional memory usage.
>
> Do we want to consider making CONFIG_ARM_KERNMEM_PERMS default y for
> security reasons, and just document that memory-constrained systems
> may want to turn it off?
>
> I'll test the arm64 next.
>
> David

Kees had previously pushed a patch to do so but it exposed a couple of
underlying issues, mostly with low power paths
(c.f. http://article.gmane.org/gmane.linux.ports.arm.kernel/471199,
http://article.gmane.org/gmane.linux.kernel.mm/143489)
Those will need to be all fixed up before this could be made default.

Thanks,
Laura

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [kernel-hardening] [PATCH v4 0/8] introduce post-init read-only memory
  2016-01-22 19:16   ` [kernel-hardening] " Laura Abbott
@ 2016-01-22 19:57       ` Kees Cook
  0 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-22 19:57 UTC (permalink / raw)
  To: Laura Abbott
  Cc: kernel-hardening, Ingo Molnar, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, LKML, linux-arch

On Fri, Jan 22, 2016 at 11:16 AM, Laura Abbott <laura@labbott.name> wrote:
> On 1/22/16 9:19 AM, David Brown wrote:
>>
>> On Tue, Jan 19, 2016 at 10:08:34AM -0800, Kees Cook wrote:
>>
>>> This introduces __ro_after_init as a way to mark such memory, and uses
>>> it on the x86 vDSO to kill an extant kernel exploitation method. Also
>>> adds a new kernel parameter to help debug future use and adds an lkdtm
>>> test to check the results.
>>
>>
>> I've tested these patches on 32-bit ARM using the provoke-crashes
>> test.  However, they do require CONFIG_ARM_KERNMEM_PERMS to be enabled
>> as well, which does incur additional memory usage.

Thanks for testing!

>> Do we want to consider making CONFIG_ARM_KERNMEM_PERMS default y for
>> security reasons, and just document that memory-constrained systems
>> may want to turn it off?
>>
>> I'll test the arm64 next.
>>
>> David
>
>
> Kees had previously pushed a patch to do so but it exposed a couple of
> underlying issues, mostly with low power paths
> (c.f. http://article.gmane.org/gmane.linux.ports.arm.kernel/471199,
> http://article.gmane.org/gmane.linux.kernel.mm/143489)
> Those will need to be all fixed up before this could be made default.

Yeah, I've got a patch waiting to reorganize CONFIG_ARM_KERNMEM_PERMS
to look more like arm64 (and x86) and get the feature correctly under
CONFIG_DEBUG_RODATA. I made it default=y on v7+. rmk asked me to wait
until -rc1 before resubmitting it.

http://git.kernel.org/cgit/linux/kernel/git/kees/linux.git/commit/?h=kspp/arm-rodata&id=08bebfd2e7fb8a9f364ced74c356642d64e1f43e

and a small improvement too:

http://git.kernel.org/cgit/linux/kernel/git/kees/linux.git/commit/?h=kspp/arm-rodata&id=8e16f005ce0d4069aee5502379cff845b4c6f950

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH v4 2/8] lib: add "on" and "off" to strtobool
  2016-01-20  2:09     ` [kernel-hardening] " Joe Perches
  (?)
@ 2016-01-22 23:29       ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-22 23:29 UTC (permalink / raw)
  To: Joe Perches
  Cc: Ingo Molnar, Rasmus Villemoes, Daniel Borkmann, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy,
	kernel-hardening, LKML, linux-arch

On Tue, Jan 19, 2016 at 6:09 PM, Joe Perches <joe@perches.com> wrote:
> On Tue, 2016-01-19 at 10:08 -0800, Kees Cook wrote:
>> Several places in the kernel expect to use "on" and "off" for their
>> boolean signifiers, so add them to strtobool.
>
> Several places in the kernel use a char address like
> fs/cifs/cifs_debug.c
>
>
>         char c;
>         ...
>
>
>         if (strtobool(&c, ...))
>
> Using s[1] might cause problems for those uses.

Oh ew. Thanks for noticing that.

>> diff --git a/lib/string.c b/lib/string.c
> []
>> @@ -635,12 +635,15 @@ EXPORT_SYMBOL(sysfs_streq);
>>   * @s: input string
>>   * @res: result
>>   *
>> - * This routine returns 0 iff the first character is one of 'Yy1Nn0'.
>> - * Otherwise it will return -EINVAL.  Value pointed to by res is
>> - * updated upon finding a match.
>> + * This routine returns 0 iff the first character is one of 'Yy1Nn0', or
>> + * [oO][NnFf] for "on" and "off". Otherwise it will return -EINVAL.  Value
>> + * pointed to by res is updated upon finding a match.
>>   */
>>  int strtobool(const char *s, bool *res)
>>  {
>> +     if (!s)
>> +             return -EINVAL;
>> +
>>       switch (s[0]) {
>>       case 'y':
>>       case 'Y':
>> @@ -652,6 +655,21 @@ int strtobool(const char *s, bool *res)
>>       case '0':
>>               *res = false;
>>               break;
>> +     case 'o':
>> +     case 'O':
>> +             switch (s[1]) {
>> +             case 'n':
>> +             case 'N':
>> +                     *res = true;
>> +                     break;
>> +             case 'f':
>> +             case 'F':
>
> Perhaps
>                 switch (tolower(s[1])) {
> is more readable

I opted to let the compiler deal with optimizing this, and I left the
switch statement as close to original as possible.

-Kees

>
>> +                     *res = false;
>> +                     break;
>> +             default:
>> +                     return -EINVAL;
>> +             }
>> +             break;
>
> or maybe /* fallthrough */
>
>>       default:
>>               return -EINVAL;
>>       }
>



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread
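The behavior of the patched strtobool() discussed above, including Joe's point about single-character callers, can be checked with a standalone userspace re-creation. This is an illustration of the quoted diff, not the kernel source: kernel types are dropped, `-EINVAL` is spelled out as a plain negative errno constant, and the function is renamed `my_strtobool` to make the hypothetical status clear:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define EINVAL 22	/* stand-in for the kernel's -EINVAL return */

/* Userspace sketch of the patched kernel strtobool(): accepts the
 * first character of y/Y/1 and n/N/0, plus "on"/"off" in any case.
 * Note: the 'o'/'O' branch reads s[1], which is why callers passing
 * the address of a lone char (as fs/cifs does) are a concern. */
int my_strtobool(const char *s, bool *res)
{
	if (!s)
		return -EINVAL;

	switch (s[0]) {
	case 'y':
	case 'Y':
	case '1':
		*res = true;
		break;
	case 'n':
	case 'N':
	case '0':
		*res = false;
		break;
	case 'o':
	case 'O':
		switch (s[1]) {	/* second character decides on/off */
		case 'n':
		case 'N':
			*res = true;
			break;
		case 'f':
		case 'F':
			*res = false;
			break;
		default:
			return -EINVAL;
		}
		break;
	default:
		return -EINVAL;
	}
	return 0;
}
```

With a NUL-terminated string a lone "o" safely hits the default branch (s[1] is the terminator), but a bare `char c` passed as `&c` has no terminator, so the s[1] read is out of bounds — the case Joe flagged.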

* Re: [kernel-hardening] [PATCH v4 0/8] introduce post-init read-only memory
  2016-01-22 19:57       ` Kees Cook
@ 2016-01-23  9:49         ` Geert Uytterhoeven
  -1 siblings, 0 replies; 104+ messages in thread
From: Geert Uytterhoeven @ 2016-01-23  9:49 UTC (permalink / raw)
  To: Kees Cook
  Cc: Laura Abbott, kernel-hardening, Ingo Molnar, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy, LKML,
	linux-arch

Hi Kees,

On Fri, Jan 22, 2016 at 8:57 PM, Kees Cook <keescook@chromium.org> wrote:
> On Fri, Jan 22, 2016 at 11:16 AM, Laura Abbott <laura@labbott.name> wrote:
>> Kees had previously pushed a patch to do so but it exposed a couple of
>> underlying issues, mostly with low power paths
>> (c.f. http://article.gmane.org/gmane.linux.ports.arm.kernel/471199,
>> http://article.gmane.org/gmane.linux.kernel.mm/143489)
>> Those will need to be all fixed up before this could be made default.

I'm working on fixing that...

BTW, making the sections read-only is done quite late in the kernel startup
process, which means it doesn't trigger for the writes to the text segment in
secondary CPU bringup, but only for suspend/resume.

> Yeah, I've got a patch waiting to reorganize CONFIG_ARM_KERNMEM_PERMS
> to look more like arm64 (and x86) and get the feature correctly under
> CONFIG_DEBUG_RODATA. I made it default=y on v7+. rmk asked me to wait
> until -rc1 before resubmitting it.
>
> http://git.kernel.org/cgit/linux/kernel/git/kees/linux.git/commit/?h=kspp/arm-rodata&id=08bebfd2e7fb8a9f364ced74c356642d64e1f43e

One other concern is indeed memory usage ("ALIGN(1<<SECTION_SHIFT)"?).
Enabling CONFIG_ARM_KERNMEM_PERMS and CONFIG_DEBUG_RODATA in my test kernel
configs make the kernel too big to boot (overwritten DTB?) for 3 out of 4 arm
shmobile targets...

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [kernel-hardening] [PATCH v4 0/8] introduce post-init read-only memory
@ 2016-01-23  9:49         ` Geert Uytterhoeven
  0 siblings, 0 replies; 104+ messages in thread
From: Geert Uytterhoeven @ 2016-01-23  9:49 UTC (permalink / raw)
  To: Kees Cook
  Cc: Laura Abbott, kernel-hardening, Ingo Molnar, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy, LKML,
	linux-arch

Hi Kees,

On Fri, Jan 22, 2016 at 8:57 PM, Kees Cook <keescook@chromium.org> wrote:
> On Fri, Jan 22, 2016 at 11:16 AM, Laura Abbott <laura@labbott.name> wrote:
>> Kees had previously pushed a patch to do so but it exposed a couple of
>> underlying issues, mostly with low power paths
>> (c.f. http://article.gmane.org/gmane.linux.ports.arm.kernel/471199,
>> http://article.gmane.org/gmane.linux.kernel.mm/143489)
>> Those will need to be all fixed up before this could be made default.

I'm working on fixing that...

BTW, making the sections read-only is done quite late in the kernel startup
process, which means it doesn't trigger for the writes to the text segment in
secondary CPU bringup, but only for suspend/resume.

> Yeah, I've got a patch waiting to reorganize CONFIG_ARM_KERNMEM_PERMS
> to look more like arm64 (and x86) and get the feature correctly under
> CONFIG_DEBUG_RODATA. I made it default=y on v7+. rmk asked me to wait
> until -rc1 before resubmitting it.
>
> http://git.kernel.org/cgit/linux/kernel/git/kees/linux.git/commit/?h=kspp/arm-rodata&id=08bebfd2e7fb8a9f364ced74c356642d64e1f43e

One other concern is indeed memory usage ("ALIGN(1<<SECTION_SHIFT)"?).
Enabling CONFIG_ARM_KERNMEM_PERMS and CONFIG_DEBUG_RODATA in my test kernel
configs makes the kernel too big to boot (overwritten DTB?) for 3 out of 4 arm
shmobile targets...

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [kernel-hardening] [PATCH v4 3/8] param: convert some "on"/"off" users to strtobool
  2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
@ 2016-01-27 21:11     ` David Brown
  -1 siblings, 0 replies; 104+ messages in thread
From: David Brown @ 2016-01-27 21:11 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Ingo Molnar, Kees Cook, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, linux-kernel, linux-arch

On Tue, Jan 19, 2016 at 10:08:37AM -0800, Kees Cook wrote:

>diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
>index 9cc20af58c76..f5ea98490ffa 100644
>--- a/kernel/time/tick-sched.c
>+++ b/kernel/time/tick-sched.c
>@@ -387,20 +388,14 @@ void __init tick_nohz_init(void)
> /*
>  * NO HZ enabled ?
>  */
>-static int tick_nohz_enabled __read_mostly  = 1;
>+static bool tick_nohz_enabled __read_mostly  = true;
> unsigned long tick_nohz_active  __read_mostly;
> /*
>  * Enable / Disable tickless mode
>  */

Just discovered this conflicts with a recent patch when
CONFIG_NO_HZ_COMMON is enabled:

	commit 46373a15f65fe862f31c19a484acdf551f2b442f
	Author: Jean Delvare <jdelvare@suse.de>
	Date:   Mon Jan 11 17:40:31 2016 +0100

	    time: nohz: Expose tick_nohz_enabled

kernel/time/tick-sched.c:390:6: error: conflicting types for ‘tick_nohz_enabled’
	bool tick_nohz_enabled __read_mostly = true;
	     ^
In file included from kernel/time/tick-internal.h:5:0,
                 from kernel/time/tick-sched.c:30:
include/linux/tick.h:101:12: note: previous declaration of ‘tick_nohz_enabled’ was here
	extern int tick_nohz_enabled;
	           ^

After fixing the compilation error, the kernel compiles and boots on
arm64; however, it isn't detecting the write (with the lkdtm test).
I'll continue looking into what's preventing this.

David

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [kernel-hardening] [PATCH v4 3/8] param: convert some "on"/"off" users to strtobool
  2016-01-27 21:11     ` David Brown
@ 2016-01-27 21:19       ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-27 21:19 UTC (permalink / raw)
  To: David Brown
  Cc: kernel-hardening, Ingo Molnar, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, LKML, linux-arch

On Wed, Jan 27, 2016 at 1:11 PM, David Brown <david.brown@linaro.org> wrote:
> On Tue, Jan 19, 2016 at 10:08:37AM -0800, Kees Cook wrote:
>
>> diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
>> index 9cc20af58c76..f5ea98490ffa 100644
>> --- a/kernel/time/tick-sched.c
>> +++ b/kernel/time/tick-sched.c
>> @@ -387,20 +388,14 @@ void __init tick_nohz_init(void)
>> /*
>>  * NO HZ enabled ?
>>  */
>> -static int tick_nohz_enabled __read_mostly  = 1;
>> +static bool tick_nohz_enabled __read_mostly  = true;
>> unsigned long tick_nohz_active  __read_mostly;
>> /*
>>  * Enable / Disable tickless mode
>>  */
>
>
> Just discovered this conflicts with a recent patch with
> CONFIG_NO_HZ_COMMON:
>
>         commit 46373a15f65fe862f31c19a484acdf551f2b442f
>         Author: Jean Delvare <jdelvare@suse.de>
>         Date:   Mon Jan 11 17:40:31 2016 +0100
>
>             time: nohz: Expose tick_nohz_enabled
>
> kernel/time/tick-sched.c:390:6: error: conflicting types for
> ‘tick_nohz_enabled’
>         bool tick_nohz_enabled __read_mostly = true;
>              ^
> In file included from kernel/time/tick-internal.h:5:0,
>                 from kernel/time/tick-sched.c:30:
> include/linux/tick.h:101:12: note: previous declaration of
> ‘tick_nohz_enabled’ was here
>         extern int tick_nohz_enabled;
>                    ^

Thanks! Yeah, I noticed this too when rebasing recently.

> Fixing the compilation error, it compiles and boots on arm64, however
> it isn't detecting the write (with the lkdtm test).  I'll continue
> looking into what's preventing this.

I'm going to hope it's something easy like CONFIG_DEBUG_RODATA not being set. :)

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional
  2016-01-27 21:19       ` Kees Cook
                           ` (2 preceding siblings ...)
  (?)
@ 2016-01-28  0:09         ` David Brown
  -1 siblings, 0 replies; 104+ messages in thread
From: David Brown @ 2016-01-28  0:09 UTC (permalink / raw)
  To: Kees Cook
  Cc: kernel-hardening, Ingo Molnar, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, LKML, linux-arch,
	Catalin Marinas, Will Deacon, Marc Zyngier, yalin wang,
	Zi Shen Lim, Yang Shi, Mark Rutland, Ard Biesheuvel,
	Laura Abbott, Suzuki K. Poulose, Steve Capper, Jeremy Linton,
	Mark Salter, linux-arm-kernel

From 2efef8aa0f8f7f6277ffebe4ea6744fc93d54644 Mon Sep 17 00:00:00 2001
From: David Brown <david.brown@linaro.org>
Date: Wed, 27 Jan 2016 13:58:44 -0800

This removes the CONFIG_DEBUG_RODATA option and makes it always
enabled.

Signed-off-by: David Brown <david.brown@linaro.org>
---
v1: This is in the same spirit as the x86 patch, removing the ability
to deselect this option in the config.  The associated patch series
adds a runtime option for the same thing.  However, it does affect the
way some things are mapped, and could possibly result in either
increased memory usage or a performance hit (due to TLB misses from 4K pages).

I've tested this on a Hikey 96board (hi6220-hikey.dtb), both with and
without 'rodata=off' on the command line.

 arch/arm64/Kconfig              |  3 +++
 arch/arm64/Kconfig.debug        | 10 ----------
 arch/arm64/kernel/insn.c        |  2 +-
 arch/arm64/kernel/vmlinux.lds.S |  5 +----
 arch/arm64/mm/mmu.c             | 12 ------------
 5 files changed, 5 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 8cc6228..ffa617a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -201,6 +201,9 @@ config KERNEL_MODE_NEON
 config FIX_EARLYCON_MEM
 	def_bool y
 
+config DEBUG_RODATA
+	def_bool y
+
 config PGTABLE_LEVELS
 	int
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index e13c4bf..db994ec 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -48,16 +48,6 @@ config DEBUG_SET_MODULE_RONX
           against certain classes of kernel exploits.
           If in doubt, say "N".
 
-config DEBUG_RODATA
-	bool "Make kernel text and rodata read-only"
-	help
-	  If this is set, kernel text and rodata will be made read-only. This
-	  is to help catch accidental or malicious attempts to change the
-	  kernel's executable code. Additionally splits rodata from kernel
-	  text so it can be made explicitly non-executable.
-
-          If in doubt, say Y
-
 config DEBUG_ALIGN_RODATA
 	depends on DEBUG_RODATA && ARM64_4K_PAGES
 	bool "Align linker sections up to SECTION_SIZE"
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 7371455..a04bdef 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -95,7 +95,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
 
 	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
 		page = vmalloc_to_page(addr);
-	else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
+	else if (!module)
 		page = virt_to_page(addr);
 	else
 		return addr;
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index e3928f5..f80903c 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -65,12 +65,9 @@ PECOFF_FILE_ALIGNMENT = 0x200;
 #if defined(CONFIG_DEBUG_ALIGN_RODATA)
 #define ALIGN_DEBUG_RO			. = ALIGN(1<<SECTION_SHIFT);
 #define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
-#elif defined(CONFIG_DEBUG_RODATA)
+#else
 #define ALIGN_DEBUG_RO			. = ALIGN(1<<PAGE_SHIFT);
 #define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
-#else
-#define ALIGN_DEBUG_RO
-#define ALIGN_DEBUG_RO_MIN(min)		. = ALIGN(min);
 #endif
 
 SECTIONS
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 58faeaa..3b411b7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -313,7 +313,6 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 				phys, virt, size, prot, late_alloc);
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 {
 	/*
@@ -347,13 +346,6 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 	}
 
 }
-#else
-static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
-{
-	create_mapping(start, __phys_to_virt(start), end - start,
-			PAGE_KERNEL_EXEC);
-}
-#endif
 
 static void __init map_mem(void)
 {
@@ -410,7 +402,6 @@ static void __init map_mem(void)
 
 static void __init fixup_executable(void)
 {
-#ifdef CONFIG_DEBUG_RODATA
 	/* now that we are actually fully mapped, make the start/end more fine grained */
 	if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
 		unsigned long aligned_start = round_down(__pa(_stext),
@@ -428,10 +419,8 @@ static void __init fixup_executable(void)
 				aligned_end - __pa(__init_end),
 				PAGE_KERNEL);
 	}
-#endif
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void)
 {
 	create_mapping_late(__pa(_stext), (unsigned long)_stext,
@@ -439,7 +428,6 @@ void mark_rodata_ro(void)
 				PAGE_KERNEL_ROX);
 
 }
-#endif
 
 void fixup_init(void)
 {
-- 
2.7.0

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* Re: [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional
  2016-01-28  0:09         ` David Brown
                             ` (2 preceding siblings ...)
  (?)
@ 2016-01-28  0:14           ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-28  0:14 UTC (permalink / raw)
  To: David Brown
  Cc: kernel-hardening, Ingo Molnar, Andy Lutomirski, H. Peter Anvin,
	Michael Ellerman, Mathias Krause, Thomas Gleixner, x86,
	Arnd Bergmann, PaX Team, Emese Revfy, LKML, linux-arch,
	Catalin Marinas, Will Deacon, Marc Zyngier, yalin wang,
	Zi Shen Lim, Yang Shi, Mark Rutland, Ard Biesheuvel,
	Laura Abbott, Suzuki K. Poulose, Steve Capper, Jeremy Linton,
	Mark Salter, linux-arm-kernel

On Wed, Jan 27, 2016 at 4:09 PM, David Brown <david.brown@linaro.org> wrote:
> From 2efef8aa0f8f7f6277ffebe4ea6744fc93d54644 Mon Sep 17 00:00:00 2001
> From: David Brown <david.brown@linaro.org>
> Date: Wed, 27 Jan 2016 13:58:44 -0800
>
> This removes the CONFIG_DEBUG_RODATA option and makes it always
> enabled.
>
> Signed-off-by: David Brown <david.brown@linaro.org>

I'm all for this!

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
> v1: This is in the same spirit as the x86 patch, removing the ability
> to deselect this option in config.  The associated patch series adds a
> runtime option for the same thing.  However, it does affect the way
> some things are mapped, and could possibly result in either increased
> memory usage, or a performance hit (due to TLB misses from 4K pages).
>
> I've tested this on a Hikey 96board (hi6220-hikey.dtb), both with and
> without 'rodata=off' on the command line.
>
> arch/arm64/Kconfig              |  3 +++
> arch/arm64/Kconfig.debug        | 10 ----------
> arch/arm64/kernel/insn.c        |  2 +-
> arch/arm64/kernel/vmlinux.lds.S |  5 +----
> arch/arm64/mm/mmu.c             | 12 ------------
> 5 files changed, 5 insertions(+), 27 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 8cc6228..ffa617a 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -201,6 +201,9 @@ config KERNEL_MODE_NEON
> config FIX_EARLYCON_MEM
>         def_bool y
>
> +config DEBUG_RODATA
> +       def_bool y
> +
> config PGTABLE_LEVELS
>         int
>         default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
> index e13c4bf..db994ec 100644
> --- a/arch/arm64/Kconfig.debug
> +++ b/arch/arm64/Kconfig.debug
> @@ -48,16 +48,6 @@ config DEBUG_SET_MODULE_RONX
>           against certain classes of kernel exploits.
>           If in doubt, say "N".
>
> -config DEBUG_RODATA
> -       bool "Make kernel text and rodata read-only"
> -       help
> -         If this is set, kernel text and rodata will be made read-only. This
> -         is to help catch accidental or malicious attempts to change the
> -         kernel's executable code. Additionally splits rodata from kernel
> -         text so it can be made explicitly non-executable.
> -
> -          If in doubt, say Y
> -
> config DEBUG_ALIGN_RODATA
>         depends on DEBUG_RODATA && ARM64_4K_PAGES
>         bool "Align linker sections up to SECTION_SIZE"
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 7371455..a04bdef 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -95,7 +95,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
>
>         if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
>                 page = vmalloc_to_page(addr);
> -       else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
> +       else if (!module)
>                 page = virt_to_page(addr);
>         else
>                 return addr;
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index e3928f5..f80903c 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -65,12 +65,9 @@ PECOFF_FILE_ALIGNMENT = 0x200;
> #if defined(CONFIG_DEBUG_ALIGN_RODATA)
> #define ALIGN_DEBUG_RO                  . = ALIGN(1<<SECTION_SHIFT);
> #define ALIGN_DEBUG_RO_MIN(min)         ALIGN_DEBUG_RO
> -#elif defined(CONFIG_DEBUG_RODATA)
> +#else
> #define ALIGN_DEBUG_RO                  . = ALIGN(1<<PAGE_SHIFT);
> #define ALIGN_DEBUG_RO_MIN(min)         ALIGN_DEBUG_RO
> -#else
> -#define ALIGN_DEBUG_RO
> -#define ALIGN_DEBUG_RO_MIN(min)                . = ALIGN(min);
> #endif
>
> SECTIONS
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 58faeaa..3b411b7 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -313,7 +313,6 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
>                                 phys, virt, size, prot, late_alloc);
> }
>
> -#ifdef CONFIG_DEBUG_RODATA
> static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
> {
>         /*
> @@ -347,13 +346,6 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
>         }
>
> }
> -#else
> -static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
> -{
> -       create_mapping(start, __phys_to_virt(start), end - start,
> -                       PAGE_KERNEL_EXEC);
> -}
> -#endif
>
> static void __init map_mem(void)
> {
> @@ -410,7 +402,6 @@ static void __init map_mem(void)
>
> static void __init fixup_executable(void)
> {
> -#ifdef CONFIG_DEBUG_RODATA
>         /* now that we are actually fully mapped, make the start/end more fine grained */
>         if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
>                 unsigned long aligned_start = round_down(__pa(_stext),
> @@ -428,10 +419,8 @@ static void __init fixup_executable(void)
>                                 aligned_end - __pa(__init_end),
>                                 PAGE_KERNEL);
>         }
> -#endif
> }
>
> -#ifdef CONFIG_DEBUG_RODATA
> void mark_rodata_ro(void)
> {
>         create_mapping_late(__pa(_stext), (unsigned long)_stext,
> @@ -439,7 +428,6 @@ void mark_rodata_ro(void)
>                                 PAGE_KERNEL_ROX);
>
> }
> -#endif
>
> void fixup_init(void)
> {
> --
> 2.7.0
>



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional
  2016-01-28  0:14           ` Kees Cook
                               ` (2 preceding siblings ...)
  (?)
@ 2016-01-28  8:20             ` Ard Biesheuvel
  -1 siblings, 0 replies; 104+ messages in thread
From: Ard Biesheuvel @ 2016-01-28  8:20 UTC (permalink / raw)
  To: Kees Cook
  Cc: David Brown, kernel-hardening, Ingo Molnar, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy, LKML,
	linux-arch, Catalin Marinas, Will Deacon, Marc Zyngier,
	yalin wang, Zi Shen Lim, Yang Shi, Mark Rutland, Laura Abbott,
	Suzuki K. Poulose, Steve Capper, Jeremy Linton, Mark Salter,
	linux-arm-kernel

On 28 January 2016 at 01:14, Kees Cook <keescook@chromium.org> wrote:
> On Wed, Jan 27, 2016 at 4:09 PM, David Brown <david.brown@linaro.org> wrote:
>> From 2efef8aa0f8f7f6277ffebe4ea6744fc93d54644 Mon Sep 17 00:00:00 2001
>> From: David Brown <david.brown@linaro.org>
>> Date: Wed, 27 Jan 2016 13:58:44 -0800
>>
>> This removes the CONFIG_DEBUG_RODATA option and makes it always
>> enabled.
>>
>> Signed-off-by: David Brown <david.brown@linaro.org>
>
> I'm all for this!
>

I agree that this is probably a good idea, but please note that Mark
Rutland's pagetable rework patches targeted for v4.6 make significant
changes in this area, so you're probably better off building on top of
those.

-- 
Ard.


> Reviewed-by: Kees Cook <keescook@chromium.org>
>
> -Kees
>
>> ---
>> v1: This is in the same spirit as the x86 patch, removing the ability
>> to deselect this option in config.  The associated patch series adds a
>> runtime option for the same thing.  However, it does affect the way
>> some things are mapped, and could possibly result in either increased
>> memory usage, or a performance hit (due to TLB misses from 4K pages).
>>
>> I've tested this on a Hikey 96board (hi6220-hikey.dtb), both with and
>> without 'rodata=off' on the command line.
>>
>> arch/arm64/Kconfig              |  3 +++
>> arch/arm64/Kconfig.debug        | 10 ----------
>> arch/arm64/kernel/insn.c        |  2 +-
>> arch/arm64/kernel/vmlinux.lds.S |  5 +----
>> arch/arm64/mm/mmu.c             | 12 ------------
>> 5 files changed, 5 insertions(+), 27 deletions(-)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 8cc6228..ffa617a 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -201,6 +201,9 @@ config KERNEL_MODE_NEON
>> config FIX_EARLYCON_MEM
>>         def_bool y
>>
>> +config DEBUG_RODATA
>> +       def_bool y
>> +
>> config PGTABLE_LEVELS
>>         int
>>         default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
>> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
>> index e13c4bf..db994ec 100644
>> --- a/arch/arm64/Kconfig.debug
>> +++ b/arch/arm64/Kconfig.debug
>> @@ -48,16 +48,6 @@ config DEBUG_SET_MODULE_RONX
>>           against certain classes of kernel exploits.
>>           If in doubt, say "N".
>>
>> -config DEBUG_RODATA
>> -       bool "Make kernel text and rodata read-only"
>> -       help
>> -         If this is set, kernel text and rodata will be made read-only.
>> This
>> -         is to help catch accidental or malicious attempts to change the
>> -         kernel's executable code. Additionally splits rodata from kernel
>> -         text so it can be made explicitly non-executable.
>> -
>> -          If in doubt, say Y
>> -
>> config DEBUG_ALIGN_RODATA
>>         depends on DEBUG_RODATA && ARM64_4K_PAGES
>>         bool "Align linker sections up to SECTION_SIZE"
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index 7371455..a04bdef 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -95,7 +95,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
>>
>>         if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
>>                 page = vmalloc_to_page(addr);
>> -       else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
>> +       else if (!module)
>>                 page = virt_to_page(addr);
>>         else
>>                 return addr;
>> diff --git a/arch/arm64/kernel/vmlinux.lds.S
>> b/arch/arm64/kernel/vmlinux.lds.S
>> index e3928f5..f80903c 100644
>> --- a/arch/arm64/kernel/vmlinux.lds.S
>> +++ b/arch/arm64/kernel/vmlinux.lds.S
>> @@ -65,12 +65,9 @@ PECOFF_FILE_ALIGNMENT = 0x200;
>> #if defined(CONFIG_DEBUG_ALIGN_RODATA)
>> #define ALIGN_DEBUG_RO                  . = ALIGN(1<<SECTION_SHIFT);
>> #define ALIGN_DEBUG_RO_MIN(min)         ALIGN_DEBUG_RO
>> -#elif defined(CONFIG_DEBUG_RODATA)
>> +#else
>> #define ALIGN_DEBUG_RO                  . = ALIGN(1<<PAGE_SHIFT);
>> #define ALIGN_DEBUG_RO_MIN(min)         ALIGN_DEBUG_RO
>> -#else
>> -#define ALIGN_DEBUG_RO
>> -#define ALIGN_DEBUG_RO_MIN(min)                . = ALIGN(min);
>> #endif
>>
>> SECTIONS
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 58faeaa..3b411b7 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -313,7 +313,6 @@ static void create_mapping_late(phys_addr_t phys,
>> unsigned long virt,
>>                                 phys, virt, size, prot, late_alloc);
>> }
>>
>> -#ifdef CONFIG_DEBUG_RODATA
>> static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
>> {
>>         /*
>> @@ -347,13 +346,6 @@ static void __init __map_memblock(phys_addr_t start,
>> phys_addr_t end)
>>         }
>>
>> }
>> -#else
>> -static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
>> -{
>> -       create_mapping(start, __phys_to_virt(start), end - start,
>> -                       PAGE_KERNEL_EXEC);
>> -}
>> -#endif
>>
>> static void __init map_mem(void)
>> {
>> @@ -410,7 +402,6 @@ static void __init map_mem(void)
>>
>> static void __init fixup_executable(void)
>> {
>> -#ifdef CONFIG_DEBUG_RODATA
>>         /* now that we are actually fully mapped, make the start/end more
>> fine grained */
>>         if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
>>                 unsigned long aligned_start = round_down(__pa(_stext),
>> @@ -428,10 +419,8 @@ static void __init fixup_executable(void)
>>                                 aligned_end - __pa(__init_end),
>>                                 PAGE_KERNEL);
>>         }
>> -#endif
>> }
>>
>> -#ifdef CONFIG_DEBUG_RODATA
>> void mark_rodata_ro(void)
>> {
>>         create_mapping_late(__pa(_stext), (unsigned long)_stext,
>> @@ -439,7 +428,6 @@ void mark_rodata_ro(void)
>>                                 PAGE_KERNEL_ROX);
>>
>> }
>> -#endif
>>
>> void fixup_init(void)
>> {
>> --
>> 2.7.0
>>
>
>
>
> --
> Kees Cook
> Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional
  2016-01-28  0:09         ` David Brown
@ 2016-01-28 11:06           ` Mark Rutland
  -1 siblings, 0 replies; 104+ messages in thread
From: Mark Rutland @ 2016-01-28 11:06 UTC (permalink / raw)
  To: David Brown
  Cc: Kees Cook, kernel-hardening, Ingo Molnar, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy, LKML,
	linux-arch, Catalin Marinas, Will Deacon, Marc Zyngier,
	yalin wang, Zi Shen Lim, Yang Shi, Ard Biesheuvel, Laura Abbott,
	Suzuki K. Poulose, Steve Capper, Jeremy Linton, Mark Salter,
	linux-arm-kernel

Hi,

On Wed, Jan 27, 2016 at 05:09:06PM -0700, David Brown wrote:
> From 2efef8aa0f8f7f6277ffebe4ea6744fc93d54644 Mon Sep 17 00:00:00 2001
> From: David Brown <david.brown@linaro.org>
> Date: Wed, 27 Jan 2016 13:58:44 -0800
> 
> This removes the CONFIG_DEBUG_RODATA option and makes it always
> enabled.
> 
> Signed-off-by: David Brown <david.brown@linaro.org>

As Ard notes, my pagetable rework series [1] changes this code quite
significantly, and this will need to be rebased atop of that (and
possibly Ard's kASLR changes [2]).

I certainly want to always have the kernel text RO. I was waiting for
those two series to settle first.

> ---
> v1: This is in the same spirit as the x86 patch, removing the ability
> to deselect this option in Kconfig.  The associated patch series adds a
> runtime option for the same thing.  However, it does affect the way
> some things are mapped, and could possibly result in either increased
> memory usage, or a performance hit (due to TLB misses from 4K pages).

With my series [1] the text/rodata "chunk" of the kernel is always
mapped with page-granular boundaries (using sections internally).
Previously we always carved out the init area, so the kernel mapping was
always split.

Atop of my series this change should not increase memory usage or TLB
pressure given that it should only change the permissions.

One thing I would like to do is to avoid the need for fixup_executable
entirely, by mapping the kernel text RO from the outset. However, that
requires rework of the alternatives patching (to use a temporary RW
alias), and I haven't had the time to look into that yet.

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-January/397095.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-January/402066.html

> I've tested this on a Hikey 96board (hi6220-hikey.dtb), both with and
> without 'rodata=off' on the command line.
> 
> arch/arm64/Kconfig              |  3 +++
> arch/arm64/Kconfig.debug        | 10 ----------
> arch/arm64/kernel/insn.c        |  2 +-
> arch/arm64/kernel/vmlinux.lds.S |  5 +----
> arch/arm64/mm/mmu.c             | 12 ------------
> 5 files changed, 5 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 8cc6228..ffa617a 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -201,6 +201,9 @@ config KERNEL_MODE_NEON
> config FIX_EARLYCON_MEM
> 	def_bool y
> 
> +config DEBUG_RODATA
> +	def_bool y
> +
> config PGTABLE_LEVELS
> 	int
> 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
> index e13c4bf..db994ec 100644
> --- a/arch/arm64/Kconfig.debug
> +++ b/arch/arm64/Kconfig.debug
> @@ -48,16 +48,6 @@ config DEBUG_SET_MODULE_RONX
>           against certain classes of kernel exploits.
>           If in doubt, say "N".
> 
> -config DEBUG_RODATA
> -	bool "Make kernel text and rodata read-only"
> -	help
> -	  If this is set, kernel text and rodata will be made read-only. This
> -	  is to help catch accidental or malicious attempts to change the
> -	  kernel's executable code. Additionally splits rodata from kernel
> -	  text so it can be made explicitly non-executable.
> -
> -          If in doubt, say Y
> -
> config DEBUG_ALIGN_RODATA
> 	depends on DEBUG_RODATA && ARM64_4K_PAGES
> 	bool "Align linker sections up to SECTION_SIZE"
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 7371455..a04bdef 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -95,7 +95,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
> 
> 	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
> 		page = vmalloc_to_page(addr);
> -	else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
> +	else if (!module)
> 		page = virt_to_page(addr);
> 	else
> 		return addr;
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index e3928f5..f80903c 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -65,12 +65,9 @@ PECOFF_FILE_ALIGNMENT = 0x200;
> #if defined(CONFIG_DEBUG_ALIGN_RODATA)
> #define ALIGN_DEBUG_RO			. = ALIGN(1<<SECTION_SHIFT);
> #define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
> -#elif defined(CONFIG_DEBUG_RODATA)
> +#else
> #define ALIGN_DEBUG_RO			. = ALIGN(1<<PAGE_SHIFT);
> #define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
> -#else
> -#define ALIGN_DEBUG_RO
> -#define ALIGN_DEBUG_RO_MIN(min)		. = ALIGN(min);
> #endif
> 
> SECTIONS
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 58faeaa..3b411b7 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -313,7 +313,6 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
> 				phys, virt, size, prot, late_alloc);
> }
> 
> -#ifdef CONFIG_DEBUG_RODATA
> static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
> {
> 	/*
> @@ -347,13 +346,6 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
> 	}
> 
> }
> -#else
> -static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
> -{
> -	create_mapping(start, __phys_to_virt(start), end - start,
> -			PAGE_KERNEL_EXEC);
> -}
> -#endif
> 
> static void __init map_mem(void)
> {
> @@ -410,7 +402,6 @@ static void __init map_mem(void)
> 
> static void __init fixup_executable(void)
> {
> -#ifdef CONFIG_DEBUG_RODATA
> 	/* now that we are actually fully mapped, make the start/end more fine grained */
> 	if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
> 		unsigned long aligned_start = round_down(__pa(_stext),
> @@ -428,10 +419,8 @@ static void __init fixup_executable(void)
> 				aligned_end - __pa(__init_end),
> 				PAGE_KERNEL);
> 	}
> -#endif
> }
> 
> -#ifdef CONFIG_DEBUG_RODATA
> void mark_rodata_ro(void)
> {
> 	create_mapping_late(__pa(_stext), (unsigned long)_stext,
> @@ -439,7 +428,6 @@ void mark_rodata_ro(void)
> 				PAGE_KERNEL_ROX);
> 
> }
> -#endif
> 
> void fixup_init(void)
> {
> -- 
> 2.7.0
> 

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional
  2016-01-28 11:06           ` Mark Rutland
                               ` (2 preceding siblings ...)
  (?)
@ 2016-01-28 14:06             ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-28 14:06 UTC (permalink / raw)
  To: Mark Rutland
  Cc: David Brown, kernel-hardening, Ingo Molnar, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy, LKML,
	linux-arch, Catalin Marinas, Will Deacon, Marc Zyngier,
	yalin wang, Zi Shen Lim, Yang Shi, Ard Biesheuvel, Laura Abbott,
	Suzuki K. Poulose, Steve Capper, Jeremy Linton, Mark Salter,
	linux-arm-kernel

On Thu, Jan 28, 2016 at 3:06 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> One thing I would like to do is to avoid the need for fixup_executable
> entirely, by mapping the kernel text RO from the outset. However, that
> requires rework of the alternatives patching (to use a temporary RW
> alias), and I haven't had the time to look into that yet.

This makes perfect sense for the rodata section, but the (future)
postinit_rodata section we'll still want to mark RO after init
finishes. x86 and ARM cheat by marking both RO after init, and they
don't have to pad sections. parisc will need to solve this too.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional
  2016-01-28 14:06             ` Kees Cook
                                 ` (2 preceding siblings ...)
  (?)
@ 2016-01-28 14:59               ` Mark Rutland
  -1 siblings, 0 replies; 104+ messages in thread
From: Mark Rutland @ 2016-01-28 14:59 UTC (permalink / raw)
  To: Kees Cook
  Cc: David Brown, kernel-hardening, Ingo Molnar, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy, LKML,
	linux-arch, Catalin Marinas, Will Deacon, Marc Zyngier,
	yalin wang, Zi Shen Lim, Yang Shi, Ard Biesheuvel, Laura Abbott,
	Suzuki K. Poulose, Steve Capper, Jeremy Linton, Mark Salter,
	linux-arm-kernel

On Thu, Jan 28, 2016 at 06:06:53AM -0800, Kees Cook wrote:
> On Thu, Jan 28, 2016 at 3:06 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> > One thing I would like to do is to avoid the need for fixup_executable
> > entirely, by mapping the kernel text RO from the outset. However, that
> > requires rework of the alternatives patching (to use a temporary RW
> > alias), and I haven't had the time to look into that yet.
> 
> This makes perfect sense for the rodata section, but the (future)
> postinit_rodata section we'll still want to mark RO after init
> finishes. x86 and ARM cheat by marking both RO after init, and they
> don't have to pad sections. parisc will need to solve this too.

Depending on how many postinit_rodata variables there are, we might be
able to drop those in .rodata, have them RO always, and initialise them
via a temporary RW alias (e.g. something in the vmalloc area).

The only requirement for that is that we use a helper to initialise any
__postinit_ro variables via a temporary RW alias, e.g.

#define SET_POST_INIT_RO(var, v) ({ 		\
	typeof(&(var)) __ptr_rw;		\
	BUG_ON(initcalls_done);			\
	__ptr_rw = create_rw_alias(&(var));	\
	*__ptr_rw = (v);			\
	destroy_rw_alias(__ptr_rw);		\
})

...

__postinit_ro void *thing;

void __init some_init_func(void)
{
	void *__thing = some_randomized_allocator();
	SET_POST_INIT_RO(thing, __thing);
}


Mark.

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional
  2016-01-28 14:59               ` Mark Rutland
                                   ` (2 preceding siblings ...)
  (?)
@ 2016-01-28 15:17                 ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-01-28 15:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: David Brown, kernel-hardening, Ingo Molnar, Andy Lutomirski,
	H. Peter Anvin, Michael Ellerman, Mathias Krause,
	Thomas Gleixner, x86, Arnd Bergmann, PaX Team, Emese Revfy, LKML,
	linux-arch, Catalin Marinas, Will Deacon, Marc Zyngier,
	yalin wang, Zi Shen Lim, Yang Shi, Ard Biesheuvel, Laura Abbott,
	Suzuki K. Poulose, Steve Capper, Jeremy Linton, Mark Salter,
	linux-arm-kernel

On Thu, Jan 28, 2016 at 6:59 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Thu, Jan 28, 2016 at 06:06:53AM -0800, Kees Cook wrote:
>> On Thu, Jan 28, 2016 at 3:06 AM, Mark Rutland <mark.rutland@arm.com> wrote:
>> > One thing I would like to do is to avoid the need for fixup_executable
>> > entirely, by mapping the kernel text RO from the outset. However, that
>> > requires rework of the alternatives patching (to use a temporary RW
>> > alias), and I haven't had the time to look into that yet.
>>
>> This makes perfect sense for the rodata section, but the (future)
>> postinit_rodata section we'll still want to mark RO after init
>> finishes. x86 and ARM cheat by marking both RO after init, and they
>> don't have to pad sections. parisc will need to solve this too.
>
> Depending on how many postinit_rodata variables there are, we might be
> able to drop those in .rodata, have them RO always, and initialise them
> via a temporary RW alias (e.g. something in the vmalloc area).
>
> The only requirement for that is that we use a helper to initialise any
> __postinit_ro variables via a temporary RW alias, e.g.
>
> #define SET_POST_INIT_RO(var, v) ({            \
>         typeof(&(var)) __ptr_rw;                \
>         BUG_ON(initcalls_done);                 \
>         __ptr_rw = create_rw_alias(&(var));     \
>         *__ptr_rw = (v);                        \
>         destroy_rw_alias(__ptr_rw);             \
> })
>
> ...
>
> __postinit_ro void *thing;
>
> void __init some_init_func(void)
> {
>         void *__thing = some_randomized_allocator();
>         SET_POST_INIT_RO(thing, __thing);
> }

Well, that certainly would make their usage explicit, but I'd really
like to avoid that, especially in the face of trying to const-ify as
much of the kernel as possible to reduce attack surface. I don't want
to have to both mark the variable and its writes, since that would
make the constification gcc plugin a bit more complex.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* [PATCH] ARM: vdso: Mark vDSO code as read-only
  2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
  (?)
@ 2016-02-16 21:36   ` David Brown
  -1 siblings, 0 replies; 104+ messages in thread
From: David Brown @ 2016-02-16 21:36 UTC (permalink / raw)
  To: Russell King
  Cc: linux-arm-kernel, kernel-hardening, Ingo Molnar, Kees Cook,
	Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, linux-kernel, linux-arch

Although the arm vDSO is cleanly separated by code/data with the code
being read-only in userspace mappings, the code page is still writable
from the kernel.  There have been exploits (such as
http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
from a bad kernel write to full root.

Prevent this specific exploit on arm by putting the vDSO code page in
post-init read-only memory as well.

Before:
vdso: 1 text pages at base 80927000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

After:
vdso: 1 text pages at base 8072b000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.

Signed-off-by: David Brown <david.brown@linaro.org>
---
This patch depends on Kees Cook's series
https://lkml.org/lkml/2016/1/19/497 which adds the ro_after_init
section.

 arch/arm/vdso/vdso.S | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm/vdso/vdso.S b/arch/arm/vdso/vdso.S
index b2b97e3..a62a7b6 100644
--- a/arch/arm/vdso/vdso.S
+++ b/arch/arm/vdso/vdso.S
@@ -23,9 +23,8 @@
 #include <linux/const.h>
 #include <asm/page.h>
 
-	__PAGE_ALIGNED_DATA
-
 	.globl vdso_start, vdso_end
+	.section .data..ro_after_init
 	.balign PAGE_SIZE
 vdso_start:
 	.incbin "arch/arm/vdso/vdso.so"
-- 
2.7.1

^ permalink raw reply related	[flat|nested] 104+ messages in thread

* Re: [PATCH] ARM: vdso: Mark vDSO code as read-only
  2016-02-16 21:36   ` David Brown
@ 2016-02-16 21:52     ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-02-16 21:52 UTC (permalink / raw)
  To: David Brown
  Cc: Russell King, linux-arm-kernel, kernel-hardening, Ingo Molnar,
	Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, LKML, linux-arch

On Tue, Feb 16, 2016 at 1:36 PM, David Brown <david.brown@linaro.org> wrote:
> Although the arm vDSO is cleanly separated by code/data with the code
> being read-only in userspace mappings, the code page is still writable
> from the kernel.  There have been exploits (such as
> http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
> from a bad kernel write to full root.
>
> Prevent this specific exploit on arm by putting the vDSO code page in
> post-init read-only memory as well.

Is the vdso dynamically built at init time like on x86, or can this
just use .rodata directly?

-Kees

>
> Before:
> vdso: 1 text pages at base 80927000
> root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
> ---[ Modules ]---
> ---[ Kernel Mapping ]---
> 0x80000000-0x80100000           1M     RW NX SHD
> 0x80100000-0x80600000           5M     ro x  SHD
> 0x80600000-0x80800000           2M     ro NX SHD
> 0x80800000-0xbe000000         984M     RW NX SHD
>
> After:
> vdso: 1 text pages at base 8072b000
> root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
> ---[ Modules ]---
> ---[ Kernel Mapping ]---
> 0x80000000-0x80100000           1M     RW NX SHD
> 0x80100000-0x80600000           5M     ro x  SHD
> 0x80600000-0x80800000           2M     ro NX SHD
> 0x80800000-0xbe000000         984M     RW NX SHD
>
> Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
> PaX Team, Brad Spengler, and Kees Cook.
>
> Signed-off-by: David Brown <david.brown@linaro.org>
> ---
> This patch depends on Kees Cook's series
> https://lkml.org/lkml/2016/1/19/497 which adds the ro_after_init
> section.
>
> arch/arm/vdso/vdso.S | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/arm/vdso/vdso.S b/arch/arm/vdso/vdso.S
> index b2b97e3..a62a7b6 100644
> --- a/arch/arm/vdso/vdso.S
> +++ b/arch/arm/vdso/vdso.S
> @@ -23,9 +23,8 @@
> #include <linux/const.h>
> #include <asm/page.h>
>
> -       __PAGE_ALIGNED_DATA
> -
>         .globl vdso_start, vdso_end
> +       .section .data..ro_after_init
>         .balign PAGE_SIZE
> vdso_start:
>         .incbin "arch/arm/vdso/vdso.so"
> --
> 2.7.1
>



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] ARM: vdso: Mark vDSO code as read-only
  2016-02-16 21:52     ` Kees Cook
@ 2016-02-17  5:20       ` David Brown
  -1 siblings, 0 replies; 104+ messages in thread
From: David Brown @ 2016-02-17  5:20 UTC (permalink / raw)
  To: Kees Cook
  Cc: Russell King, linux-arm-kernel, kernel-hardening, Ingo Molnar,
	Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, LKML, linux-arch

On Tue, Feb 16, 2016 at 01:52:33PM -0800, Kees Cook wrote:
>On Tue, Feb 16, 2016 at 1:36 PM, David Brown <david.brown@linaro.org> wrote:
>> Although the arm vDSO is cleanly separated by code/data with the code
>> being read-only in userspace mappings, the code page is still writable
>> from the kernel.  There have been exploits (such as
>> http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
>> from a bad kernel write to full root.
>>
>> Prevent this specific exploit on arm by putting the vDSO code page in
>> post-init read-only memory as well.
>
>Is the vdso dynamically built at init time like on x86, or can this
>just use .rodata directly?

On ARM, it is patched during init.  Arm64's is just plain read-only.

David

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] ARM: vdso: Mark vDSO code as read-only
  2016-02-17  5:20       ` David Brown
@ 2016-02-17 23:00         ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-02-17 23:00 UTC (permalink / raw)
  To: David Brown
  Cc: Russell King, linux-arm-kernel, kernel-hardening, Ingo Molnar,
	Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, LKML, linux-arch

On Tue, Feb 16, 2016 at 9:20 PM, David Brown <david.brown@linaro.org> wrote:
> On Tue, Feb 16, 2016 at 01:52:33PM -0800, Kees Cook wrote:
>>
>> On Tue, Feb 16, 2016 at 1:36 PM, David Brown <david.brown@linaro.org>
>> wrote:
>>>
>>> Although the arm vDSO is cleanly separated by code/data with the code
>>> being read-only in userspace mappings, the code page is still writable
>>> from the kernel.  There have been exploits (such as
>>> http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
>>> from a bad kernel write to full root.
>>>
>>> Prevent this specific exploit on arm by putting the vDSO code page in
>>> post-init read-only memory as well.
>>
>>
>> Is the vdso dynamically built at init time like on x86, or can this
>> just use .rodata directly?
>
>
> On ARM, it is patched during init.  Arm64's is just plain read-only.

Okay, great. I've added this to my postinit-readonly series (which I
just refreshed and sent out again...)

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] ARM: vdso: Mark vDSO code as read-only
  2016-02-17 23:00         ` Kees Cook
@ 2016-02-17 23:43           ` David Brown
  -1 siblings, 0 replies; 104+ messages in thread
From: David Brown @ 2016-02-17 23:43 UTC (permalink / raw)
  To: Kees Cook
  Cc: Russell King, linux-arm-kernel, kernel-hardening, Ingo Molnar,
	Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, LKML, linux-arch

On Wed, Feb 17, 2016 at 03:00:52PM -0800, Kees Cook wrote:
>On Tue, Feb 16, 2016 at 9:20 PM, David Brown <david.brown@linaro.org> wrote:
>> On Tue, Feb 16, 2016 at 01:52:33PM -0800, Kees Cook wrote:
>>>
>>> On Tue, Feb 16, 2016 at 1:36 PM, David Brown <david.brown@linaro.org>
>>> wrote:
>>>>
>>>> Although the arm vDSO is cleanly separated by code/data with the code
>>>> being read-only in userspace mappings, the code page is still writable
>>>> from the kernel.  There have been exploits (such as
>>>> http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
>>>> from a bad kernel write to full root.
>>>>
>>>> Prevent this specific exploit on arm by putting the vDSO code page in
>>>> post-init read-only memory as well.
>>>
>>>
>>> Is the vdso dynamically built at init time like on x86, or can this
>>> just use .rodata directly?
>>
>>
>> On ARM, it is patched during init.  Arm64's is just plain read-only.
>
>Okay, great. I've added this to my postinit-readonly series (which I
>just refreshed and sent out again...)

However, this distinction between .rodata and .data..ro_after_init is
kind of fuzzy, anyway, since they both get made actually read-only at
the same time (post init).  The patch actually does work fine with the
vDSO page in .rodata, since the patching happens during init.

Is there a possible future consideration to perhaps make .rodata read
only much earlier?

David

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] ARM: vdso: Mark vDSO code as read-only
  2016-02-17 23:43           ` David Brown
@ 2016-02-17 23:48             ` Kees Cook
  -1 siblings, 0 replies; 104+ messages in thread
From: Kees Cook @ 2016-02-17 23:48 UTC (permalink / raw)
  To: David Brown
  Cc: Russell King, linux-arm-kernel, kernel-hardening, Ingo Molnar,
	Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, PaX Team,
	Emese Revfy, LKML, linux-arch

On Wed, Feb 17, 2016 at 3:43 PM, David Brown <david.brown@linaro.org> wrote:
> On Wed, Feb 17, 2016 at 03:00:52PM -0800, Kees Cook wrote:
>>
>> On Tue, Feb 16, 2016 at 9:20 PM, David Brown <david.brown@linaro.org>
>> wrote:
>>>
>>> On Tue, Feb 16, 2016 at 01:52:33PM -0800, Kees Cook wrote:
>>>>
>>>>
>>>> On Tue, Feb 16, 2016 at 1:36 PM, David Brown <david.brown@linaro.org>
>>>> wrote:
>>>>>
>>>>>
>>>>> Although the arm vDSO is cleanly separated by code/data with the code
>>>>> being read-only in userspace mappings, the code page is still writable
>>>>> from the kernel.  There have been exploits (such as
>>>>> http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
>>>>> from a bad kernel write to full root.
>>>>>
>>>>> Prevent this specific exploit on arm by putting the vDSO code page in
>>>>> post-init read-only memory as well.
>>>>
>>>>
>>>>
>>>> Is the vdso dynamically built at init time like on x86, or can this
>>>> just use .rodata directly?
>>>
>>>
>>>
>>> On ARM, it is patched during init.  Arm64's is just plain read-only.
>>
>>
>> Okay, great. I've added this to my postinit-readonly series (which I
>> just refreshed and sent out again...)
>
>
> However, this distinction between .rodata and .data..ro_after_init is
> kind of fuzzy, anyway, since they both get made actually read-only at
> the same time (post init).  The patch actually does work fine with the
> vDSO page in .rodata, since the patching happens during init.

Yeah, in the ARM case, that's true. I think we should probably keep it
marked "correctly" though.

> Is there a possible future consideration to perhaps make .rodata read
> only much earlier?

Yeah, this will likely be a future improvement. Some architectures
already mark .rodata before the mark_rodata_ro() call. Once we start
to have more use of postinit-readonly, I suspect we'll see more
clarification of when those things happen.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [PATCH] ARM: vdso: Mark vDSO code as read-only
  2016-02-17 23:48             ` Kees Cook
@ 2016-02-18 10:46               ` PaX Team
  -1 siblings, 0 replies; 104+ messages in thread
From: PaX Team @ 2016-02-18 10:46 UTC (permalink / raw)
  To: David Brown, Kees Cook
  Cc: Russell King, linux-arm-kernel, kernel-hardening, Ingo Molnar,
	Andy Lutomirski, H. Peter Anvin, Michael Ellerman,
	Mathias Krause, Thomas Gleixner, x86, Arnd Bergmann, Emese Revfy,
	LKML, linux-arch

On 17 Feb 2016 at 15:48, Kees Cook wrote:

> On Wed, Feb 17, 2016 at 3:43 PM, David Brown <david.brown@linaro.org> wrote:

> > Is there a possible future consideration to perhaps make .rodata read
> > only much earlier?
> 
> Yeah, this will likely be a future improvement. Some architectures
> already mark .rodata before the mark_rodata_ro() call. Once we start
> to have more use of postinit-readonly, I suspect we'll see more
> clarification of when those things happen.

FYI, PaX had enforced early rodata on i386 during the 2.4 series (i.e.,
decade+ ago) but i abandoned it for 2.6 due to the maintenance burden
coupled with its low benefit...

^ permalink raw reply	[flat|nested] 104+ messages in thread

end of thread, other threads:[~2016-02-18 10:48 UTC | newest]

Thread overview: 104+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-19 18:08 [PATCH v4 0/8] introduce post-init read-only memory Kees Cook
2016-01-19 18:08 ` [kernel-hardening] " Kees Cook
2016-01-19 18:08 ` [PATCH v4 1/8] asm-generic: consolidate mark_rodata_ro() Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-19 18:08 ` [PATCH v4 2/8] lib: add "on" and "off" to strtobool Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-20  2:09   ` Joe Perches
2016-01-20  2:09     ` [kernel-hardening] " Joe Perches
2016-01-22 23:29     ` Kees Cook
2016-01-22 23:29       ` [kernel-hardening] " Kees Cook
2016-01-22 23:29       ` Kees Cook
2016-01-19 18:08 ` [PATCH v4 3/8] param: convert some "on"/"off" users " Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-27 21:11   ` David Brown
2016-01-27 21:11     ` David Brown
2016-01-27 21:19     ` [kernel-hardening] " Kees Cook
2016-01-27 21:19       ` Kees Cook
2016-01-28  0:09       ` [PATCH] arm64: make CONFIG_DEBUG_RODATA non-optional David Brown
2016-01-28  0:09         ` [kernel-hardening] " David Brown
2016-01-28  0:09         ` David Brown
2016-01-28  0:09         ` David Brown
2016-01-28  0:09         ` David Brown
2016-01-28  0:14         ` Kees Cook
2016-01-28  0:14           ` [kernel-hardening] " Kees Cook
2016-01-28  0:14           ` Kees Cook
2016-01-28  0:14           ` Kees Cook
2016-01-28  0:14           ` Kees Cook
2016-01-28  8:20           ` Ard Biesheuvel
2016-01-28  8:20             ` [kernel-hardening] " Ard Biesheuvel
2016-01-28  8:20             ` Ard Biesheuvel
2016-01-28  8:20             ` Ard Biesheuvel
2016-01-28  8:20             ` Ard Biesheuvel
2016-01-28 11:06         ` Mark Rutland
2016-01-28 11:06           ` [kernel-hardening] " Mark Rutland
2016-01-28 11:06           ` Mark Rutland
2016-01-28 11:06           ` Mark Rutland
2016-01-28 11:06           ` Mark Rutland
2016-01-28 14:06           ` Kees Cook
2016-01-28 14:06             ` [kernel-hardening] " Kees Cook
2016-01-28 14:06             ` Kees Cook
2016-01-28 14:06             ` Kees Cook
2016-01-28 14:06             ` Kees Cook
2016-01-28 14:59             ` Mark Rutland
2016-01-28 14:59               ` [kernel-hardening] " Mark Rutland
2016-01-28 14:59               ` Mark Rutland
2016-01-28 14:59               ` Mark Rutland
2016-01-28 14:59               ` Mark Rutland
2016-01-28 15:17               ` Kees Cook
2016-01-28 15:17                 ` [kernel-hardening] " Kees Cook
2016-01-28 15:17                 ` Kees Cook
2016-01-28 15:17                 ` Kees Cook
2016-01-28 15:17                 ` Kees Cook
2016-01-19 18:08 ` [PATCH v4 4/8] init: create cmdline param to disable readonly Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-19 18:08 ` [PATCH v4 5/8] x86: make CONFIG_DEBUG_RODATA non-optional Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-19 18:08 ` [PATCH v4 6/8] introduce post-init read-only memory Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-19 18:08 ` [PATCH v4 7/8] lkdtm: verify that __ro_after_init works correctly Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-19 18:08 ` [PATCH v4 8/8] x86, vdso: mark vDSO read-only after init Kees Cook
2016-01-19 18:08   ` [kernel-hardening] " Kees Cook
2016-01-19 19:09   ` Andy Lutomirski
2016-01-19 19:09     ` [kernel-hardening] " Andy Lutomirski
2016-01-19 19:09     ` Andy Lutomirski
2016-01-20  2:51   ` H. Peter Anvin
2016-01-20  2:51     ` [kernel-hardening] " H. Peter Anvin
2016-01-20  2:56   ` Andy Lutomirski
2016-01-20  2:56     ` [kernel-hardening] " Andy Lutomirski
2016-01-20  2:56     ` Andy Lutomirski
2016-01-22 17:19 ` [kernel-hardening] [PATCH v4 0/8] introduce post-init read-only memory David Brown
2016-01-22 17:19   ` David Brown
2016-01-22 19:16   ` [kernel-hardening] " Laura Abbott
2016-01-22 19:57     ` Kees Cook
2016-01-22 19:57       ` Kees Cook
2016-01-23  9:49       ` Geert Uytterhoeven
2016-01-23  9:49         ` Geert Uytterhoeven
2016-02-16 21:36 ` [PATCH] ARM: vdso: Mark vDSO code as read-only David Brown
2016-02-16 21:36   ` [kernel-hardening] " David Brown
2016-02-16 21:36   ` David Brown
2016-02-16 21:52   ` Kees Cook
2016-02-16 21:52     ` [kernel-hardening] " Kees Cook
2016-02-16 21:52     ` Kees Cook
2016-02-16 21:52     ` Kees Cook
2016-02-17  5:20     ` David Brown
2016-02-17  5:20       ` [kernel-hardening] " David Brown
2016-02-17  5:20       ` David Brown
2016-02-17  5:20       ` David Brown
2016-02-17 23:00       ` Kees Cook
2016-02-17 23:00         ` [kernel-hardening] " Kees Cook
2016-02-17 23:00         ` Kees Cook
2016-02-17 23:00         ` Kees Cook
2016-02-17 23:43         ` David Brown
2016-02-17 23:43           ` [kernel-hardening] " David Brown
2016-02-17 23:43           ` David Brown
2016-02-17 23:43           ` David Brown
2016-02-17 23:48           ` Kees Cook
2016-02-17 23:48             ` [kernel-hardening] " Kees Cook
2016-02-17 23:48             ` Kees Cook
2016-02-17 23:48             ` Kees Cook
2016-02-18 10:46             ` PaX Team
2016-02-18 10:46               ` [kernel-hardening] " PaX Team
2016-02-18 10:46               ` PaX Team
2016-02-18 10:46               ` PaX Team
