* [kernel-hardening] [PATCH] add the option of fortified string.h functions
@ 2017-05-04 14:24 Daniel Micay
  2017-05-04 14:51 ` [kernel-hardening] " Daniel Micay
                   ` (5 more replies)
  0 siblings, 6 replies; 25+ messages in thread
From: Daniel Micay @ 2017-05-04 14:24 UTC (permalink / raw)
  To: Kees Cook, kernel-hardening; +Cc: Daniel Micay

This adds support for compiling with a rough equivalent to the glibc
_FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
overflow checks for string.h functions when the compiler determines the
size of the source or destination buffer at compile-time. Unlike glibc,
it covers buffer reads in addition to writes.

GNU C __builtin_*_chk intrinsics are avoided because they're only
designed to detect write overflows and are overly complex. A single
inline branch works for everything but strncat while those intrinsics
would force the creation of a bunch of extra non-inline wrappers that
aren't able to receive the detected source buffer size.
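
For contrast, a minimal sketch of the intrinsic-based shape being
avoided here, using GCC's __builtin___memcpy_chk (illustrative, not
part of the patch):

	/* The checked intrinsic only receives the destination size; on
	 * failure it calls an out-of-line __memcpy_chk() that never sees
	 * the detected source buffer size. */
	static inline void *memcpy_chk_style(void *p, const void *q, size_t size)
	{
		return __builtin___memcpy_chk(p, q, size,
					      __builtin_object_size(p, 0));
	}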

This detects various undefined uses of memcpy, etc. at compile-time
outside of non-core kernel code, and in the arm64 vdso code, so there
will need to be a series of fixes applied (mainly memcpy -> strncpy for
copying string constants to avoid copying past the end of the source).
It isn't particularly bad, but there are likely some issues that occur
during regular use at runtime (none found so far).
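
For illustration, the class of fix involved (a sketch, not from the
patch itself):

	static void copy_string_constant(void)
	{
		char buf[16];

		/* rejected at compile time: reads 12 bytes past the end of
		 * the 4-byte constant "foo" */
		memcpy(buf, "foo", sizeof(buf));
		/* the fix: reads at most the source, zero-fills the rest */
		strncpy(buf, "foo", sizeof(buf));
	}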

Future improvements left out of initial implementation for simplicity,
as it's all quite optional and can be done incrementally:

The fortified string functions should place a limit on reads from the
source. For strcat/strcpy, these could just be a variant of strlcat /
strlcpy using the size limit as a bound on both the source and
destination, with the return value only reporting whether truncation
occurred rather than providing the source length. It would also be an
easier API for developers to use and, though it hardly matters, more
efficient in the case where truncation is intended.
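
A rough sketch of that variant (the name and exact shape are
hypothetical, not part of this patch):

	/* Bound reads and writes by size, always NUL-terminate, and
	 * report only whether truncation occurred. */
	static bool strcpy_trunc(char *dst, const char *src, size_t size)
	{
		size_t i;

		if (size == 0)
			return true;
		for (i = 0; i < size - 1 && src[i]; i++)
			dst[i] = src[i];
		dst[i] = '\0';
		return src[i] != '\0';	/* true if the source was truncated */
	}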

It should be possible to optionally use __builtin_object_size(x, 1) for
some functions (C strings) to detect intra-object overflows (like
glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
approach to avoid likely compatibility issues.
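
For illustration, the difference between the two modes (relying on the
documented GCC semantics):

	#include <stddef.h>

	struct s { char name[8]; int id; };

	/* mode 0 bounds the whole object: through a bare pointer it is
	 * unknown ((size_t)-1), so a copy into name spilling into id
	 * goes undetected */
	size_t whole(struct s *x)  { return __builtin_object_size(x->name, 0); }
	/* mode 1 bounds the addressed member: 8, catching the spill */
	size_t member(struct s *x) { return __builtin_object_size(x->name, 1); }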

The error reporting could be made friendlier by splitting up the
compile-time error for reads and writes. The runtime error could also
directly report the buffer and copy sizes.

It may make sense to have the compile-time checks in a separate
configuration option since they wouldn't have a runtime performance cost
if there was an ifdef for the runtime check. However, the runtime cost
of these checks is already quite low (much lower than SSP) and as long
as some people are using it with allyesconfig, the issues will be found
for any in-tree code.

Signed-off-by: Daniel Micay <danielmicay@gmail.com>
---
 arch/x86/boot/compressed/misc.c |   5 ++
 include/linux/string.h          | 161 ++++++++++++++++++++++++++++++++++++++++
 lib/string.c                    |   8 ++
 security/Kconfig                |   6 ++
 4 files changed, 180 insertions(+)

diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index b3c5a5f030ce..43691238a21d 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -409,3 +409,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	debug_putstr("done.\nBooting the kernel.\n");
 	return output;
 }
+
+void fortify_panic(const char *name)
+{
+	error("detected buffer overflow");
+}
diff --git a/include/linux/string.h b/include/linux/string.h
index 26b6f6a66f83..3bd429c9593a 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -169,4 +169,165 @@ static inline const char *kbasename(const char *path)
 	return tail ? tail + 1 : path;
 }
 
+#define __FORTIFY_INLINE extern __always_inline __attribute__((gnu_inline))
+#define __RENAME(x) __asm__(#x)
+
+void fortify_panic(const char *name) __noreturn __cold;
+void __buffer_overflow(void) __compiletime_error("buffer overflow");
+
+#if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
+__FORTIFY_INLINE char *strcpy(char *p, const char *q)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (p_size == (size_t)-1)
+		return __builtin_strcpy(p, q);
+	if (strlcpy(p, q, p_size) >= p_size)
+		fortify_panic(__func__);
+	return p;
+}
+
+__FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (__builtin_constant_p(size) && p_size < size)
+		__buffer_overflow();
+	if (p_size < size)
+		fortify_panic(__func__);
+	return __builtin_strncpy(p, q, size);
+}
+
+__FORTIFY_INLINE char *strcat(char *p, const char *q)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (p_size == (size_t)-1)
+		return __builtin_strcat(p, q);
+	if (strlcat(p, q, p_size) >= p_size)
+		fortify_panic(__func__);
+	return p;
+}
+
+__FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
+{
+	size_t p_len, copy_len;
+	size_t p_size = __builtin_object_size(p, 0);
+	if (p_size == (size_t)-1)
+		return __builtin_strncat(p, q, count);
+	p_len = __builtin_strlen(p);
+	copy_len = strnlen(q, count);
+	if (p_size < p_len + copy_len + 1)
+		fortify_panic(__func__);
+	__builtin_memcpy(p + p_len, q, copy_len);
+	p[p_len + copy_len] = '\0';
+	return p;
+}
+
+__FORTIFY_INLINE __kernel_size_t strlen(const char *p)
+{
+	__kernel_size_t ret;
+	size_t p_size = __builtin_object_size(p, 0);
+	if (p_size == (size_t)-1)
+		return __builtin_strlen(p);
+	ret = strnlen(p, p_size);
+	if (p_size <= ret)
+		fortify_panic(__func__);
+	return ret;
+}
+
+extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen);
+__FORTIFY_INLINE __kernel_size_t strnlen(const char *p, __kernel_size_t maxlen)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	__kernel_size_t ret = __real_strnlen(p, maxlen < p_size ? maxlen : p_size);
+	if (p_size <= ret)
+		fortify_panic(__func__);
+	return ret;
+}
+
+__FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (__builtin_constant_p(size) && p_size < size)
+		__buffer_overflow();
+	if (p_size < size)
+		fortify_panic(__func__);
+	return __builtin_memset(p, c, size);
+}
+
+__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	size_t q_size = __builtin_object_size(q, 0);
+	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
+		__buffer_overflow();
+	if (p_size < size || q_size < size)
+		fortify_panic(__func__);
+	return __builtin_memcpy(p, q, size);
+}
+
+__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	size_t q_size = __builtin_object_size(q, 0);
+	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
+		__buffer_overflow();
+	if (p_size < size || q_size < size)
+		fortify_panic(__func__);
+	return __builtin_memmove(p, q, size);
+}
+
+extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
+__FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (__builtin_constant_p(size) && p_size < size)
+		__buffer_overflow();
+	if (p_size < size)
+		fortify_panic(__func__);
+	return __real_memscan(p, c, size);
+}
+
+__FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	size_t q_size = __builtin_object_size(q, 0);
+	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
+		__buffer_overflow();
+	if (p_size < size || q_size < size)
+		fortify_panic(__func__);
+	return __builtin_memcmp(p, q, size);
+}
+
+__FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (__builtin_constant_p(size) && p_size < size)
+		__buffer_overflow();
+	if (p_size < size)
+		fortify_panic(__func__);
+	return __builtin_memchr(p, c, size);
+}
+
+void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
+__FORTIFY_INLINE void *memchr_inv(const void *p, int c, size_t size)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (__builtin_constant_p(size) && p_size < size)
+		__buffer_overflow();
+	if (p_size < size)
+		fortify_panic(__func__);
+	return __real_memchr_inv(p, c, size);
+}
+
+extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup);
+__FORTIFY_INLINE void *kmemdup(const void *p, size_t size, gfp_t gfp)
+{
+	size_t p_size = __builtin_object_size(p, 0);
+	if (__builtin_constant_p(size) && p_size < size)
+		__buffer_overflow();
+	if (p_size < size)
+		fortify_panic(__func__);
+	return __real_kmemdup(p, size, gfp);
+}
+#endif
+
 #endif /* _LINUX_STRING_H_ */
diff --git a/lib/string.c b/lib/string.c
index b5c9a1168d3a..ca356e537e24 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -19,6 +19,8 @@
  * -  Kissed strtok() goodbye
  */
 
+#define __NO_FORTIFY
+
 #include <linux/types.h>
 #include <linux/string.h>
 #include <linux/ctype.h>
@@ -952,3 +954,9 @@ char *strreplace(char *s, char old, char new)
 	return s;
 }
 EXPORT_SYMBOL(strreplace);
+
+void fortify_panic(const char *name)
+{
+	panic("detected buffer overflow in %s", name);
+}
+EXPORT_SYMBOL(fortify_panic);
diff --git a/security/Kconfig b/security/Kconfig
index 93027fdf47d1..0e5035d720ce 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -154,6 +154,12 @@ config HARDENED_USERCOPY_PAGESPAN
 	  been removed. This config is intended to be used only while
 	  trying to find such users.
 
+config FORTIFY_SOURCE
+	bool "Harden common functions against buffer overflows"
+	help
+	  Detect overflows of buffers in common functions where the compiler
+	  can determine the buffer size.
+
 config STATIC_USERMODEHELPER
 	bool "Force all usermode helper calls through a single binary"
 	help
-- 
2.12.2


* [kernel-hardening] Re: [PATCH] add the option of fortified string.h functions
  2017-05-04 14:24 [kernel-hardening] [PATCH] add the option of fortified string.h functions Daniel Micay
@ 2017-05-04 14:51 ` Daniel Micay
  2017-05-04 15:48 ` [kernel-hardening] " Mark Rutland
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 25+ messages in thread
From: Daniel Micay @ 2017-05-04 14:51 UTC (permalink / raw)
  To: Kees Cook, kernel-hardening; +Cc: Daniel Micay

There are temporary workarounds for the overflows this found in
https://github.com/thestinger/linux-hardened/commits/4.11, but not
real fixes yet. There are some mostly harmless misuses of memcpy
instead of strncpy and memcmp instead of strncmp where the source is a
string constant. The arm64 vdso code uses memcmp with the address of
'char vdso_start' so perhaps that can become 'char
vdso_start[PAGE_SIZE]' instead. One of the issues looks a bit more
concerning.

I haven't found any issues at runtime but that doesn't mean much since
none of the compile-time issues were in code that's used on my
desktop. This has been used on a 3.18 LTS arm64 kernel targeting the
Pixel / Pixel XL on CopperheadOS for a while and found a real runtime
stack buffer write overflow there in an out-of-tree driver. I'm
somewhat surprised that there are real buffer overflows lying around
that are this easily found without any fuzzing, rather than this
feature merely providing runtime coverage for bugs that need edge
cases to trigger.


* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-04 14:24 [kernel-hardening] [PATCH] add the option of fortified string.h functions Daniel Micay
  2017-05-04 14:51 ` [kernel-hardening] " Daniel Micay
@ 2017-05-04 15:48 ` Mark Rutland
  2017-05-04 17:49   ` Daniel Micay
  2017-05-05 16:42   ` [kernel-hardening] " Kees Cook
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 25+ messages in thread
From: Mark Rutland @ 2017-05-04 15:48 UTC (permalink / raw)
  To: Daniel Micay; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

Hi,

On Thu, May 04, 2017 at 10:24:35AM -0400, Daniel Micay wrote:
> This adds support for compiling with a rough equivalent to the glibc
> _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
> overflow checks for string.h functions when the compiler determines the
> size of the source or destination buffer at compile-time. Unlike glibc,
> it covers buffer reads in addition to writes.
> 
> GNU C __builtin_*_chk intrinsics are avoided because they're only
> designed to detect write overflows and are overly complex. A single
> inline branch works for everything but strncat while those intrinsics
> would force the creation of a bunch of extra non-inline wrappers that
> aren't able to receive the detected source buffer size.
> 
> This detects various undefined uses of memcpy, etc. at compile-time
> outside of non-core kernel code, and in the arm64 vdso code, so there
> will need to be a series of fixes applied (mainly memcpy -> strncpy for
> copying string constants to avoid copying past the end of the source).

From a glance, in the arm64 vdso case, that's due to the definition of
vdso_start as a char giving it a single byte size.

We can/should probably use char[] for vdso_{start,end} on arm/arm64 as
we do for other linker symbols (and x86 and tile do for
vdso_{start,end}), i.e.

---->8----
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 41b6e31..ae35f18 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -37,7 +37,7 @@
 #include <asm/vdso.h>
 #include <asm/vdso_datapage.h>
 
-extern char vdso_start, vdso_end;
+extern char vdso_start[], vdso_end[];
 static unsigned long vdso_pages __ro_after_init;
 
 /*
@@ -125,14 +125,14 @@ static int __init vdso_init(void)
        struct page **vdso_pagelist;
        unsigned long pfn;
 
-       if (memcmp(&vdso_start, "\177ELF", 4)) {
+       if (memcmp(vdso_start, "\177ELF", 4)) {
                pr_err("vDSO is not a valid ELF object!\n");
                return -EINVAL;
        }
 
-       vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;
+       vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
        pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
-               vdso_pages + 1, vdso_pages, &vdso_start, 1L, vdso_data);
+               vdso_pages + 1, vdso_pages, vdso_start, 1L, vdso_data);
 
        /* Allocate the vDSO pagelist, plus a page for the data. */
        vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
@@ -145,7 +145,7 @@ static int __init vdso_init(void)
 
 
        /* Grab the vDSO code pages. */
-       pfn = sym_to_pfn(&vdso_start);
+       pfn = sym_to_pfn(vdso_start);
 
        for (i = 0; i < vdso_pages; i++)
                vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
---->8----

With that fixed, I see we also need a fortify_panic() for the EFI stub.

I'm not sure if the x86 EFI stub gets linked with the
boot/compressed/misc.c version below, and/or whether it's safe for the
EFI stub to call that.

... with an EFI stub fortify_panic() hacked in, I can build an arm64 kernel
with this applied. It dies at some point after exiting EFI boot services; I
don't know whether it made it out of the stub and into the kernel proper.

> It isn't particularly bad, but there are likely some issues that occur
> during regular use at runtime (none found so far).

It might be worth seeing if anyone can throw syzkaller and friends at
this.

Thanks,
Mark.

> Future improvements left out of initial implementation for simplicity,
> as it's all quite optional and can be done incrementally:
> 
> The fortified string functions should place a limit on reads from the
> source. For strcat/strcpy, these could just be a variant of strlcat /
> strlcpy using the size limit as a bound on both the source and
> destination, with the return value only reporting whether truncation
> occurred rather than providing the source length. It would also be an
> easier API for developers to use and, though it hardly matters, more
> efficient in the case where truncation is intended.
> 
> It should be possible to optionally use __builtin_object_size(x, 1) for
> some functions (C strings) to detect intra-object overflows (like
> glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
> approach to avoid likely compatibility issues.
> 
> The error reporting could be made friendlier by splitting up the
> compile-time error for reads and writes. The runtime error could also
> directly report the buffer and copy sizes.
> 
> It may make sense to have the compile-time checks in a separate
> configuration option since they wouldn't have a runtime performance cost
> if there was an ifdef for the runtime check. However, the runtime cost
> of these checks is already quite low (much lower than SSP) and as long
> as some people are using it with allyesconfig, the issues will be found
> for any in-tree code.
> 
> Signed-off-by: Daniel Micay <danielmicay@gmail.com>
> ---
>  arch/x86/boot/compressed/misc.c |   5 ++
>  include/linux/string.h          | 161 ++++++++++++++++++++++++++++++++++++++++
>  lib/string.c                    |   8 ++
>  security/Kconfig                |   6 ++
>  4 files changed, 180 insertions(+)
> 
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index b3c5a5f030ce..43691238a21d 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -409,3 +409,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
>  	debug_putstr("done.\nBooting the kernel.\n");
>  	return output;
>  }
> +
> +void fortify_panic(const char *name)
> +{
> +	error("detected buffer overflow");
> +}
> diff --git a/include/linux/string.h b/include/linux/string.h
> index 26b6f6a66f83..3bd429c9593a 100644
> --- a/include/linux/string.h
> +++ b/include/linux/string.h
> @@ -169,4 +169,165 @@ static inline const char *kbasename(const char *path)
>  	return tail ? tail + 1 : path;
>  }
>  
> +#define __FORTIFY_INLINE extern __always_inline __attribute__((gnu_inline))
> +#define __RENAME(x) __asm__(#x)
> +
> +void fortify_panic(const char *name) __noreturn __cold;
> +void __buffer_overflow(void) __compiletime_error("buffer overflow");
> +
> +#if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
> +__FORTIFY_INLINE char *strcpy(char *p, const char *q)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strcpy(p, q);
> +	if (strlcpy(p, q, p_size) >= p_size)
> +		fortify_panic(__func__);
> +	return p;
> +}
> +
> +__FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_strncpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE char *strcat(char *p, const char *q)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strcat(p, q);
> +	if (strlcat(p, q, p_size) >= p_size)
> +		fortify_panic(__func__);
> +	return p;
> +}
> +
> +__FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
> +{
> +	size_t p_len, copy_len;
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strncat(p, q, count);
> +	p_len = __builtin_strlen(p);
> +	copy_len = strnlen(q, count);
> +	if (p_size < p_len + copy_len + 1)
> +		fortify_panic(__func__);
> +	__builtin_memcpy(p + p_len, q, copy_len);
> +	p[p_len + copy_len] = '\0';
> +	return p;
> +}
> +
> +__FORTIFY_INLINE __kernel_size_t strlen(const char *p)
> +{
> +	__kernel_size_t ret;
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strlen(p);
> +	ret = strnlen(p, p_size);
> +	if (p_size <= ret)
> +		fortify_panic(__func__);
> +	return ret;
> +}
> +
> +extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen);
> +__FORTIFY_INLINE __kernel_size_t strnlen(const char *p, __kernel_size_t maxlen)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	__kernel_size_t ret = __real_strnlen(p, maxlen < p_size ? maxlen : p_size);
> +	if (p_size <= ret)
> +		fortify_panic(__func__);
> +	return ret;
> +}
> +
> +__FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memset(p, c, size);
> +}
> +
> +__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memcpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memmove(p, q, size);
> +}
> +
> +extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
> +__FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_memscan(p, c, size);
> +}
> +
> +__FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memcmp(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memchr(p, c, size);
> +}
> +
> +void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
> +__FORTIFY_INLINE void *memchr_inv(const void *p, int c, size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_memchr_inv(p, c, size);
> +}
> +
> +extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup);
> +__FORTIFY_INLINE void *kmemdup(const void *p, size_t size, gfp_t gfp)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_kmemdup(p, size, gfp);
> +}
> +#endif
> +
>  #endif /* _LINUX_STRING_H_ */
> diff --git a/lib/string.c b/lib/string.c
> index b5c9a1168d3a..ca356e537e24 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -19,6 +19,8 @@
>   * -  Kissed strtok() goodbye
>   */
>  
> +#define __NO_FORTIFY
> +
>  #include <linux/types.h>
>  #include <linux/string.h>
>  #include <linux/ctype.h>
> @@ -952,3 +954,9 @@ char *strreplace(char *s, char old, char new)
>  	return s;
>  }
>  EXPORT_SYMBOL(strreplace);
> +
> +void fortify_panic(const char *name)
> +{
> +	panic("detected buffer overflow in %s", name);
> +}
> +EXPORT_SYMBOL(fortify_panic);
> diff --git a/security/Kconfig b/security/Kconfig
> index 93027fdf47d1..0e5035d720ce 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -154,6 +154,12 @@ config HARDENED_USERCOPY_PAGESPAN
>  	  been removed. This config is intended to be used only while
>  	  trying to find such users.
>  
> +config FORTIFY_SOURCE
> +	bool "Harden common functions against buffer overflows"
> +	help
> +	  Detect overflows of buffers in common functions where the compiler
> +	  can determine the buffer size.
> +
>  config STATIC_USERMODEHELPER
>  	bool "Force all usermode helper calls through a single binary"
>  	help
> -- 
> 2.12.2
> 


* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-04 15:48 ` [kernel-hardening] " Mark Rutland
@ 2017-05-04 17:49   ` Daniel Micay
  2017-05-04 18:09     ` Mark Rutland
  0 siblings, 1 reply; 25+ messages in thread
From: Daniel Micay @ 2017-05-04 17:49 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> Hi,
> 
> From a glance, in the arm64 vdso case, that's due to the definition of
> vdso_start as a char giving it a single byte size.
> 
> We can/should probably use char[] for vdso_{start,end} on arm/arm64 as
> we do for other linker symbols (and x86 and tile do for
> vdso_{start,end}), i.e.

Yeah, I think that's the right approach, and this also applies to
features like -fsanitize=object-size in UBSan. I worked around it by
bypassing the function with __builtin_memcmp as I did for the other
cases I ran into, but they should all be fixed properly upstream.

> ---->8----
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 41b6e31..ae35f18 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -37,7 +37,7 @@
>  #include <asm/vdso.h>
>  #include <asm/vdso_datapage.h>
>  
> -extern char vdso_start, vdso_end;
> +extern char vdso_start[], vdso_end[];
>  static unsigned long vdso_pages __ro_after_init;
>  
>  /*
> @@ -125,14 +125,14 @@ static int __init vdso_init(void)
>         struct page **vdso_pagelist;
>         unsigned long pfn;
>  
> -       if (memcmp(&vdso_start, "\177ELF", 4)) {
> +       if (memcmp(vdso_start, "\177ELF", 4)) {
>                 pr_err("vDSO is not a valid ELF object!\n");
>                 return -EINVAL;
>         }
>  
> -       vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;
> +       vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
>         pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
> -               vdso_pages + 1, vdso_pages, &vdso_start, 1L, vdso_data);
> +               vdso_pages + 1, vdso_pages, vdso_start, 1L, vdso_data);
>  
>         /* Allocate the vDSO pagelist, plus a page for the data. */
>         vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
> @@ -145,7 +145,7 @@ static int __init vdso_init(void)
>  
>  
>         /* Grab the vDSO code pages. */
> -       pfn = sym_to_pfn(&vdso_start);
> +       pfn = sym_to_pfn(vdso_start);
>  
>         for (i = 0; i < vdso_pages; i++)
>                 vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
> ---->8----
> 
> With that fixed, I see we also need a fortify_panic() for the EFI
> stub.
> 
> I'm not sure if the x86 EFI stub gets linked with the
> boot/compressed/misc.c version below, and/or whether it's safe for the
> EFI stub to call that.
> 
> ... with an EFI stub fortify_panic() hacked in, I can build an arm64
> kernel with this applied. It dies at some point after exiting EFI boot
> services; I don't know whether it made it out of the stub and into the
> kernel proper.

Could start with #define __NO_FORTIFY above the #include sections there
instead (or -D__NO_FORTIFY as a compiler flag), which will skip
fortifying those for now. I'm successfully using this on a non-EFI ARM64
3.18 LTS kernel, so it should be close to working on other systems (but
not necessarily with messy drivers). The x86 EFI workaround works.
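
I.e., a sketch of the first option, at the top of each stub source
file:

	#define __NO_FORTIFY	/* must precede the #include lines */
	#include <linux/string.h>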

> > It isn't particularly bad, but there are likely some issues that
> > occur
> > during regular use at runtime (none found so far).
> 
> It might be worth seeing if anyone can throw syzkaller and friends at
> this.

It tends to find stack buffer overflows, etc. not detected by ASan, so
that'd be nice. Can expand coverage a bit to some heap allocations
with these, but I expect slab debugging and ASan already found most of
what these would uncover:

https://github.com/thestinger/linux-hardened/commit/6efe84cdb88f73e8b8c59b59a8ea46fa4b1bdab1.patch
https://github.com/thestinger/linux-hardened/commit/d342da362c5f852c1666dce461bc82521b6711e4.patch
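
(Roughly, those attach GCC's alloc_size attribute to the allocator
prototypes, e.g.

	void *kmalloc(size_t size, gfp_t flags)
		__attribute__((alloc_size(1)));
	void *kmalloc_array(size_t n, size_t size, gfp_t flags)
		__attribute__((alloc_size(1, 2)));

so that __builtin_object_size() can also see heap allocation sizes;
the exact shape of the commits may differ.)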

Unfortunately, ksize means alloc_size on kmalloc is not 100% correct
since the extra space from size class rounding falls outside of what it
will claim to be the size of the allocation. C standard libraries with
_FORTIFY_SOURCE seem to ignore this problem for malloc_usable_size. It
doesn't have many uses though.
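
A sketch of the mismatch (illustrative sizes, assuming a 128-byte size
class serves a 100-byte request):

	static void ksize_vs_alloc_size(void)
	{
		char *p = kmalloc(100, GFP_KERNEL); /* alloc_size claims 100 */

		if (!p)
			return;
		/* ksize() reports 128 here: the size-class slack is legal
		 * to use, but lies past what alloc_size told the compiler */
		memset(p, 0, ksize(p));
		kfree(p);
	}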


* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-04 17:49   ` Daniel Micay
@ 2017-05-04 18:09     ` Mark Rutland
  2017-05-04 19:03       ` Daniel Micay
  2017-05-05 10:38       ` Mark Rutland
  0 siblings, 2 replies; 25+ messages in thread
From: Mark Rutland @ 2017-05-04 18:09 UTC (permalink / raw)
  To: Daniel Micay; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > Hi,
> > 
> > From a glance, in the arm64 vdso case, that's due to the definition of
> > vdso_start as a char giving it a single byte size.
> > 
> > We can/should probably use char[] for vdso_{start,end} on arm/arm64 as
> > we do for other linker symbols (and x86 and tile do for
> > vdso_{start,end}), i.e.
> 
> Yeah, I think that's the right approach, and this also applies to
> features like -fsanitize=object-size in UBSan. I worked around it by
> bypassing the function with __builtin_memcmp as I did for the other
> cases I ran into, but they should all be fixed properly upstream.

Sure.

> > With that fixed, I see we also need a fortify_panic() for the EFI
> > stub.
> > 
> > I'm not sure if the x86 EFI stub gets linked with the
> > boot/compressed/misc.c version below, and/or whether it's safe for
> > the EFI stub to call that.
> > 
> > ... with an EFI stub fortify_panic() hacked in, I can build an arm64
> > kernel with this applied. It dies at some point after exiting EFI
> > boot services; I don't know whether it made it out of the stub and
> > into the kernel proper.
> 
> Could start with #define __NO_FORTIFY above the #include sections there
> instead (or -D__NO_FORTIFY as a compiler flag), which will skip
> fortifying those for now.

Neat. Given there are a few files, doing the latter for the stub is the
simplest option.

> I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel, so it
> should be close to working on other systems (but not necessarily with
> messy drivers). The x86 EFI workaround works.

FWIW, I've been playing atop of next-20170504, with a tonne of other
debug options enabled (including KASAN_INLINE).

From a quick look with a JTAG debugger, the CPU got out of the stub and
into the kernel. It looks like it's dying initialising KASAN, where the
vectors appear to have been corrupted.

I have a rough idea of why that might be.

> > > It isn't particularly bad, but there are likely some issues that
> > > occur
> > > during regular use at runtime (none found so far).
> > 
> > It might be worth seeing if anyone can throw syzkaller and friends at
> > this.
> 
> It tends to find stack buffer overflows, etc. not detected by ASan, so
> that'd be nice. Can expand coverage a bit to some heap allocations
> with these, but I expect slab debugging and ASan already found most of
> what these would uncover:
> 
> https://github.com/thestinger/linux-hardened/commit/6efe84cdb88f73e8b8c59b59a8ea46fa4b1bdab1.patch
> https://github.com/thestinger/linux-hardened/commit/d342da362c5f852c1666dce461bc82521b6711e4.patch
> 
> Unfortunately, ksize means alloc_size on kmalloc is not 100% correct
> since the extra space from size class rounding falls outside of what it
> will claim to be the size of the allocation. C standard libraries with
> _FORTIFY_SOURCE seem to ignore this problem for malloc_usable_size. It
> doesn't have many uses though.

Perhaps I've misunderstood, but does that matter?

If a caller is relying on accessing padding, I'd say that's a bug.

Thanks,
Mark.


* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-04 18:09     ` Mark Rutland
@ 2017-05-04 19:03       ` Daniel Micay
  2017-05-05 10:38       ` Mark Rutland
  1 sibling, 0 replies; 25+ messages in thread
From: Daniel Micay @ 2017-05-04 19:03 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

> > https://github.com/thestinger/linux-hardened/commit/6efe84cdb88f73e8b8c59b59a8ea46fa4b1bdab1.patch
> > https://github.com/thestinger/linux-hardened/commit/d342da362c5f852c1666dce461bc82521b6711e4.patch
> > 
> > Unfortunately, ksize means alloc_size on kmalloc is not 100% correct
> > since the extra space from size class rounding falls outside of what
> > it will claim to be the size of the allocation. C standard libraries
> > with _FORTIFY_SOURCE seem to ignore this problem for
> > malloc_usable_size. It doesn't have many uses though.
> 
> Perhaps I've misunderstood, but does that matter?
> 
> If a caller is relying on accessing padding, I'd say that's a bug.

I think it's gross, but it's essentially what ksize provides: exposing
how much usable padding is available. If the size class rounding padding
is being used by slab debugging red zones, etc. ksize doesn't expose it
as part of the size. It's definitely not widely used. I think the main
use case is for dynamic arrays, to take advantage of the space added to
round to the next size class. There are also likely some users of it
that are not tracking sizes themselves but rather relying on ksize, and
they might end up using that extra space.

I think the glibc authors decided to start considering it a bug to make
real use of malloc_usable_size, and the man page discourages it but
doesn't explicitly document that:

NOTES
       The  value  returned  by malloc_usable_size() may be greater than the requested size of the
       allocation because of alignment and minimum size constraints.  Although  the  excess  bytes
       can  be  overwritten  by  the application without ill effects, this is not good programming
       practice: the number of excess bytes in an allocation depends on the underlying
       implementation.

       The main use of this function is for debugging and introspection.

It's mostly safe to use alloc_size like glibc... but not entirely if
using ksize to make use of extra padding is permitted, and it seems like
it is. I don't think it's particularly useful even for dynamic arrays,
but unfortunately it exists / is used.


* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-04 18:09     ` Mark Rutland
  2017-05-04 19:03       ` Daniel Micay
@ 2017-05-05 10:38       ` Mark Rutland
  2017-05-05 10:52         ` Mark Rutland
                           ` (2 more replies)
  1 sibling, 3 replies; 25+ messages in thread
From: Mark Rutland @ 2017-05-05 10:38 UTC (permalink / raw)
  To: Daniel Micay; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:

> > > ... with an EFI stub fortify_panic() hacked in, I can build an arm64
> > > kernel with this applied. It dies at some point after exiting EFI
> > > boot services; I don't know whether it made it out of the stub and
> > > into the kernel proper.
> > 
> > Could start with #define __NO_FORTIFY above the #include sections there
> > instead (or -D__NO_FORTIFY as a compiler flag), which will skip
> > fortifying those for now.
> 
> Neat. Given there are a few files, doing the latter for the stub is the
> simplest option.
> 
> > I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel, so it
> > should be close to working on other systems (but not necessarily with
> > messy drivers). The x86 EFI workaround works.
> 
> FWIW, I've been playing atop of next-20170504, with a tonne of other
> debug options enabled (including KASAN_INLINE).
> 
> From a quick look with a JTAG debugger, the CPU got out of the stub and
> into the kernel. It looks like it's dying initialising KASAN, where the
> vectors appear to have been corrupted.

From a walk up the call chain, I saw mm/kasan/kasan.c's memcpy was being
called recursively. Somehow the fortified memcpy() instrumentation
results in kasan's memcpy() calling itself rather than __memcpy().

The resulting stack overflow ends up clobbering the vectors (and
everything else), as this is happening early at boot when everything is
mapped RW.

That can be avoided with:

---->8----
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index f742596..b5327f5 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
 
 KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
                                   $(call cc-option,-ffreestanding) \
-                                  $(call cc-option,-fno-stack-protector)
+                                  $(call cc-option,-fno-stack-protector) \
+                                  -D__NO_FORTIFY
 
 GCOV_PROFILE                   := n
 KASAN_SANITIZE                 := n
---->8----

... though it's worrying that __memcpy() is overridden. I think we need
to be more careful with the way we instrument the string functions.

FWIW, with that, and the previous bits, I can boot a next-20170504
kernel with this applied.

However, I get a KASAN splat from the SLUB init code, even though that's
deliberately not instrumented by KASAN:

[    0.000000] ==================================================================
[    0.000000] BUG: KASAN: slab-out-of-bounds in kmem_cache_alloc+0x2ec/0x438
[    0.000000] Write of size 352 at addr ffff800936802000 by task swapper/0
[    0.000000] 
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.11.0-next-20170504-00002-g760cfdb-dirty #15
[    0.000000] Hardware name: ARM Juno development board (r1) (DT)
[    0.000000] Call trace:
[    0.000000] [<ffff2000080949c8>] dump_backtrace+0x0/0x538
[    0.000000] [<ffff2000080951a0>] show_stack+0x20/0x30
[    0.000000] [<ffff200008c125a0>] dump_stack+0x120/0x188
[    0.000000] [<ffff20000857ac04>] print_address_description+0x10c/0x380
[    0.000000] [<ffff20000857b354>] kasan_report+0x12c/0x3b8
[    0.000000] [<ffff200008579d54>] check_memory_region+0x144/0x1a0
[    0.000000] [<ffff20000857a1f4>] memset+0x2c/0x50
[    0.000000] [<ffff2000085730bc>] kmem_cache_alloc+0x2ec/0x438
[    0.000000] [<ffff20000a937528>] bootstrap+0x34/0x28c
[    0.000000] [<ffff20000a937804>] kmem_cache_init+0x84/0x118
[    0.000000] [<ffff20000a9014bc>] start_kernel+0x2f8/0x644
[    0.000000] [<ffff20000a9001e8>] __primary_switched+0x6c/0x74
[    0.000000] 
[    0.000000] Allocated by task 0:
[    0.000000] (stack is not available)
[    0.000000] 
[    0.000000] Freed by task 0:
[    0.000000] (stack is not available)
[    0.000000] 
[    0.000000] The buggy address belongs to the object at ffff800936802000
[    0.000000]  which belongs to the cache kmem_cache of size 352
[    0.000000] The buggy address is located 0 bytes inside of
[    0.000000]  352-byte region [ffff800936802000, ffff800936802160)
[    0.000000] The buggy address belongs to the page:
[    0.000000] page:ffff7e0024da0080 count:1 mapcount:0 mapping:          (null) index:0x0 compound_mapcount: 0
[    0.000000] flags: 0x1fffc00000008100(slab|head)
[    0.000000] raw: 1fffc00000008100 0000000000000000 0000000000000000 0000000180100010
[    0.000000] raw: dead000000000100 dead000000000200 ffff20000aa2c068 0000000000000000
[    0.000000] page dumped because: kasan: bad access detected
[    0.000000] 
[    0.000000] Memory state around the buggy address:
[    0.000000]  ffff800936801f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[    0.000000]  ffff800936801f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[    0.000000] >ffff800936802000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[    0.000000]                    ^
[    0.000000]  ffff800936802080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[    0.000000]  ffff800936802100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[    0.000000] ==================================================================

Thanks,
Mark.


* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-05 10:38       ` Mark Rutland
@ 2017-05-05 10:52         ` Mark Rutland
  2017-05-05 13:44           ` Kees Cook
  2017-05-05 17:38         ` Daniel Micay
  2017-05-09 20:39         ` Kees Cook
  2 siblings, 1 reply; 25+ messages in thread
From: Mark Rutland @ 2017-05-05 10:52 UTC (permalink / raw)
  To: Daniel Micay; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

On Fri, May 05, 2017 at 11:38:39AM +0100, Mark Rutland wrote:
> On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> From a walk up the call chain, I saw mm/kasan/kasan.c's memcpy was being
> called recursively. Somehow the fortified memcpy() instrumentation
> results in kasan's memcpy() calling itself rather than __memcpy().
> 
> The resulting stack overflow ends up clobbering the vectors (and
> everything else), as this is happening early at boot when everything is
> mapped RW.
> 
> That can be avoided with:
> 
> ---->8----
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index f742596..b5327f5 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
>  
>  KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
>                                    $(call cc-option,-ffreestanding) \
> -                                  $(call cc-option,-fno-stack-protector)
> +                                  $(call cc-option,-fno-stack-protector) \
> +                                  -D__NO_FORTIFY
>  
>  GCOV_PROFILE                   := n
>  KASAN_SANITIZE                 := n
> ---->8----

Whoops; wrong diff. That should have been:

---->8----
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 2976a9e..747423b 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -5,6 +5,7 @@ KCOV_INSTRUMENT := n
 CFLAGS_REMOVE_kasan.o = -pg
 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
-CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) \
+                 -D__NO_FORTIFY
 
 obj-y := kasan.o report.o kasan_init.o quarantine.o
---->8----

Thanks,
Mark.


* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-05 10:52         ` Mark Rutland
@ 2017-05-05 13:44           ` Kees Cook
  0 siblings, 0 replies; 25+ messages in thread
From: Kees Cook @ 2017-05-05 13:44 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Daniel Micay, kernel-hardening, Ard Biesheuvel, Matt Fleming

On Fri, May 5, 2017 at 3:52 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, May 05, 2017 at 11:38:39AM +0100, Mark Rutland wrote:
>> On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
>> From a walk up the call chain, I saw mm/kasan/kasan.c's memcpy was being
>> called recursively. Somehow the fortified memcpy() instrumentation
>> results in kasan's memcpy() calling itself rather than __memcpy().
>>
>> The resulting stack overflow ends up clobbering the vectors (and
>> everything else), as this is happening early at boot when everything is
>> mapped RW.
>>
>> That can be avoided with:
>>
>> ---->8----
>> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
>> index f742596..b5327f5 100644
>> --- a/drivers/firmware/efi/libstub/Makefile
>> +++ b/drivers/firmware/efi/libstub/Makefile
>> @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
>>
>>  KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
>>                                    $(call cc-option,-ffreestanding) \
>> -                                  $(call cc-option,-fno-stack-protector)
>> +                                  $(call cc-option,-fno-stack-protector) \
>> +                                  -D__NO_FORTIFY
>>
>>  GCOV_PROFILE                   := n
>>  KASAN_SANITIZE                 := n
>> ---->8----
>
> Whoops; wrong diff. That should have been:
>
> ---->8----
> diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
> index 2976a9e..747423b 100644
> --- a/mm/kasan/Makefile
> +++ b/mm/kasan/Makefile
> @@ -5,6 +5,7 @@ KCOV_INSTRUMENT := n
>  CFLAGS_REMOVE_kasan.o = -pg
>  # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
>  # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
> -CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
> +CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) \
> +                 -D__NO_FORTIFY
>
>  obj-y := kasan.o report.o kasan_init.o quarantine.o

I love this protection! It would have blocked a couple exploitable
bugs I saw recently.

Seems like a v2 could include an ARCH_HAS_FORTIFY or something to note
the architectures that have been build/run tested to deal with the
corner cases?
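
Something along those lines, in the existing Kconfig style (a sketch;
the exact symbol name is a guess):

	config ARCH_HAS_FORTIFY_SOURCE
		bool

	config FORTIFY_SOURCE
		bool "Harden common functions against buffer overflows"
		depends on ARCH_HAS_FORTIFY_SOURCE
		help
		  Detect overflows of buffers in common functions where the
		  compiler can determine the buffer size.

with each build/run-tested architecture selecting
ARCH_HAS_FORTIFY_SOURCE from its own Kconfig.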

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH] add the option of fortified string.h functions
  2017-05-04 14:24 [kernel-hardening] [PATCH] add the option of fortified string.h functions Daniel Micay
@ 2017-05-05 16:42   ` Kees Cook
  2017-05-04 15:48 ` [kernel-hardening] " Mark Rutland
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 25+ messages in thread
From: Kees Cook @ 2017-05-05 16:42 UTC (permalink / raw)
  To: Daniel Micay; +Cc: kernel-hardening, Rasmus Villemoes, LKML

[adding lkml and rasmus to CC...]

On Thu, May 4, 2017 at 7:24 AM, Daniel Micay <danielmicay@gmail.com> wrote:
> This adds support for compiling with a rough equivalent to the glibc
> _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
> overflow checks for string.h functions when the compiler determines the
> size of the source or destination buffer at compile-time. Unlike glibc,
> it covers buffer reads in addition to writes.
>
> GNU C __builtin_*_chk intrinsics are avoided because they're only
> designed to detect write overflows and are overly complex. A single
> inline branch works for everything but strncat while those intrinsics
> would force the creation of a bunch of extra non-inline wrappers that
> aren't able to receive the detected source buffer size.
>
> This detects various undefined uses of memcpy, etc. at compile-time
> outside of non-core kernel code, and in the arm64 vdso code, so there
> will need to be a series of fixes applied (mainly memcpy -> strncpy for
> copying string constants to avoid copying past the end of the source).
> It isn't particularly bad, but there are likely some issues that occur
> during regular use at runtime (none found so far).
>
> Future improvements left out of initial implementation for simplicity,
> as it's all quite optional and can be done incrementally:
>
> The fortified string functions should place a limit on reads from the
> source. For strcat/strcpy, these could just be a variant of strlcat /
> strlcpy using the size limit as a bound on both the source and
> destination, with the return value only reporting whether truncation
> occurred rather than providing the source length. It would also be an
> easier API for developers to use and, though it hardly matters, more
> efficient in the case where truncation is intended.
>
> It should be possible to optionally use __builtin_object_size(x, 1) for
> some functions (C strings) to detect intra-object overflows (like
> glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
> approach to avoid likely compatibility issues.
>
> The error reporting could be made friendlier by splitting up the
> compile-time error for reads and writes. The runtime error could also
> directly report the buffer and copy sizes.
>
> It may make sense to have the compile-time checks in a separate
> configuration option since they wouldn't have a runtime performance cost
> if there was an ifdef for the runtime check. However, the runtime cost
> of these checks is already quite low (much lower than SSP) and as long
> as some people are using it with allyesconfig, the issues will be found
> for any in-tree code.
>
> Signed-off-by: Daniel Micay <danielmicay@gmail.com>
> ---
>  arch/x86/boot/compressed/misc.c |   5 ++
>  include/linux/string.h          | 161 ++++++++++++++++++++++++++++++++++++++++
>  lib/string.c                    |   8 ++
>  security/Kconfig                |   6 ++
>  4 files changed, 180 insertions(+)
>
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index b3c5a5f030ce..43691238a21d 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -409,3 +409,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
>         debug_putstr("done.\nBooting the kernel.\n");
>         return output;
>  }
> +
> +void fortify_panic(const char *name)
> +{
> +       error("detected buffer overflow");
> +}
> diff --git a/include/linux/string.h b/include/linux/string.h
> index 26b6f6a66f83..3bd429c9593a 100644
> --- a/include/linux/string.h
> +++ b/include/linux/string.h
> @@ -169,4 +169,165 @@ static inline const char *kbasename(const char *path)
>         return tail ? tail + 1 : path;
>  }
>
> +#define __FORTIFY_INLINE extern __always_inline __attribute__((gnu_inline))
> +#define __RENAME(x) __asm__(#x)
> +
> +void fortify_panic(const char *name) __noreturn __cold;
> +void __buffer_overflow(void) __compiletime_error("buffer overflow");

I've been playing with this with allyesconfig on x86_64, and I've got
some suggestions I think would make things nicer for others looking at
this. How about splitting the compile-time buffer_overflow() report
into the write and read overflow pieces:

void __dst_overflow(void) __compiletime_error("writes beyond end of destination buffer");
void __src_overflow(void) __compiletime_error("reads beyond end of source buffer");

With things like memcpy split up:

        if (__builtin_constant_p(size)) {
                if (p_size < size)
                        __dst_overflow();
                if (q_size < size)
                        __src_overflow();
        }

I don't think it's worth doing the same for the runtime side since
that'll just increase text size, etc. (Or, I wasn't creative enough to
think of a way to distinguish them without doing so.)

I tried to figure out a way to use __same_type(q, (char *)0) to
trigger strncpy automatically and issue a warning, but I couldn't
get it to work for some reason. It never tripped.

> +#if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
> +__FORTIFY_INLINE char *strcpy(char *p, const char *q)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strcpy(p, q);
> +       if (strlcpy(p, q, p_size) >= p_size)
> +               fortify_panic(__func__);
> +       return p;
> +}

The other thing, which is cosmetic, would be to swap "p" and "q" for
"dst" and "src" everywhere, just to make these more readable.

The allyesconfig build has tripped over a bunch of places where memcpy
needs to be strncpy. I'll send those patches in a bit...

-Kees

> +
> +__FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_strncpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE char *strcat(char *p, const char *q)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strcat(p, q);
> +       if (strlcat(p, q, p_size) >= p_size)
> +               fortify_panic(__func__);
> +       return p;
> +}
> +
> +__FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
> +{
> +       size_t p_len, copy_len;
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strncat(p, q, count);
> +       p_len = __builtin_strlen(p);
> +       copy_len = strnlen(q, count);
> +       if (p_size < p_len + copy_len + 1)
> +               fortify_panic(__func__);
> +       __builtin_memcpy(p + p_len, q, copy_len);
> +       p[p_len + copy_len] = '\0';
> +       return p;
> +}
> +
> +__FORTIFY_INLINE __kernel_size_t strlen(const char *p)
> +{
> +       __kernel_size_t ret;
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strlen(p);
> +       ret = strnlen(p, p_size);
> +       if (p_size <= ret)
> +               fortify_panic(__func__);
> +       return ret;
> +}
> +
> +extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen);
> +__FORTIFY_INLINE __kernel_size_t strnlen(const char *p, __kernel_size_t maxlen)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       __kernel_size_t ret = __real_strnlen(p, maxlen < p_size ? maxlen : p_size);
> +       if (p_size <= ret)
> +               fortify_panic(__func__);
> +       return ret;
> +}
> +
> +__FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memset(p, c, size);
> +}
> +
> +__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       size_t q_size = __builtin_object_size(q, 0);
> +       if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +               __buffer_overflow();
> +       if (p_size < size || q_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memcpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       size_t q_size = __builtin_object_size(q, 0);
> +       if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +               __buffer_overflow();
> +       if (p_size < size || q_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memmove(p, q, size);
> +}
> +
> +extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
> +__FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __real_memscan(p, c, size);
> +}
> +
> +__FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       size_t q_size = __builtin_object_size(q, 0);
> +       if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +               __buffer_overflow();
> +       if (p_size < size || q_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memcmp(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memchr(p, c, size);
> +}
> +
> +void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
> +__FORTIFY_INLINE void *memchr_inv(const void *p, int c, size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __real_memchr_inv(p, c, size);
> +}
> +
> +extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup);
> +__FORTIFY_INLINE void *kmemdup(const void *p, size_t size, gfp_t gfp)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __real_kmemdup(p, size, gfp);
> +}
> +#endif
> +
>  #endif /* _LINUX_STRING_H_ */
> diff --git a/lib/string.c b/lib/string.c
> index b5c9a1168d3a..ca356e537e24 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -19,6 +19,8 @@
>   * -  Kissed strtok() goodbye
>   */
>
> +#define __NO_FORTIFY
> +
>  #include <linux/types.h>
>  #include <linux/string.h>
>  #include <linux/ctype.h>
> @@ -952,3 +954,9 @@ char *strreplace(char *s, char old, char new)
>         return s;
>  }
>  EXPORT_SYMBOL(strreplace);
> +
> +void fortify_panic(const char *name)
> +{
> +       panic("detected buffer overflow in %s", name);
> +}
> +EXPORT_SYMBOL(fortify_panic);
> diff --git a/security/Kconfig b/security/Kconfig
> index 93027fdf47d1..0e5035d720ce 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -154,6 +154,12 @@ config HARDENED_USERCOPY_PAGESPAN
>           been removed. This config is intended to be used only while
>           trying to find such users.
>
> +config FORTIFY_SOURCE
> +       bool "Harden common functions against buffer overflows"
> +       help
> +         Detect overflows of buffers in common functions where the compiler
> +         can determine the buffer size.
> +
>  config STATIC_USERMODEHELPER
>         bool "Force all usermode helper calls through a single binary"
>         help
> --
> 2.12.2
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [kernel-hardening] Re: [PATCH] add the option of fortified string.h functions
@ 2017-05-05 16:42   ` Kees Cook
  0 siblings, 0 replies; 25+ messages in thread
From: Kees Cook @ 2017-05-05 16:42 UTC (permalink / raw)
  To: Daniel Micay; +Cc: kernel-hardening, Rasmus Villemoes, LKML

[adding lkml and rasmus to CC...]

On Thu, May 4, 2017 at 7:24 AM, Daniel Micay <danielmicay@gmail.com> wrote:
> This adds support for compiling with a rough equivalent to the glibc
> _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
> overflow checks for string.h functions when the compiler determines the
> size of the source or destination buffer at compile-time. Unlike glibc,
> it covers buffer reads in addition to writes.
>
> GNU C __builtin_*_chk intrinsics are avoided because they're only
> designed to detect write overflows and are overly complex. A single
> inline branch works for everything but strncat while those intrinsics
> would force the creation of a bunch of extra non-inline wrappers that
> aren't able to receive the detected source buffer size.
>
> This detects various undefined uses of memcpy, etc. at compile-time
> outside of non-core kernel code, and in the arm64 vdso code, so there
> will need to be a series of fixes applied (mainly memcpy -> strncpy for
> copying string constants to avoid copying past the end of the source).
> It isn't particularly bad, but there are likely some issues that occur
> during regular use at runtime (none found so far).
>
> Future improvements left out of initial implementation for simplicity,
> as it's all quite optional and can be done incrementally:
>
> The fortified string functions should place a limit on reads from the
> source. For strcat/strcpy, these could just be a variant of strlcat /
> strlcpy using the size limit as a bound on both the source and
> destination, with the return value only reporting whether truncation
> occurred rather than providing the source length. It would be an easier
> API for developers to use too and not that it really matters but it
> would be more efficient for the case where truncation is intended.
>
> It should be possible to optionally use __builtin_object_size(x, 1) for
> some functions (C strings) to detect intra-object overflows (like
> glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
> approach to avoid likely compatibility issues.
>
> The error reporting could be made friendlier by splitting up the
> compile-time error for reads and writes. The runtime error could also
> directly report the buffer and copy sizes.
>
> It may make sense to have the compile-time checks in a separate
> configuration option since they wouldn't have a runtime performance cost
> if there was an ifdef for the runtime check. However, the runtime cost
> of these checks is already quite low (much lower than SSP) and as long
> as some people are using it with allyesconfig, the issues will be found
> for any in-tree code.
>
> Signed-off-by: Daniel Micay <danielmicay@gmail.com>
> ---
>  arch/x86/boot/compressed/misc.c |   5 ++
>  include/linux/string.h          | 161 ++++++++++++++++++++++++++++++++++++++++
>  lib/string.c                    |   8 ++
>  security/Kconfig                |   6 ++
>  4 files changed, 180 insertions(+)
>
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index b3c5a5f030ce..43691238a21d 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -409,3 +409,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
>         debug_putstr("done.\nBooting the kernel.\n");
>         return output;
>  }
> +
> +void fortify_panic(const char *name)
> +{
> +       error("detected buffer overflow");
> +}
> diff --git a/include/linux/string.h b/include/linux/string.h
> index 26b6f6a66f83..3bd429c9593a 100644
> --- a/include/linux/string.h
> +++ b/include/linux/string.h
> @@ -169,4 +169,165 @@ static inline const char *kbasename(const char *path)
>         return tail ? tail + 1 : path;
>  }
>
> +#define __FORTIFY_INLINE extern __always_inline __attribute__((gnu_inline))
> +#define __RENAME(x) __asm__(#x)
> +
> +void fortify_panic(const char *name) __noreturn __cold;
> +void __buffer_overflow(void) __compiletime_error("buffer overflow");

I've been playing with this with allyesconfig on x86_64, and I've got
some suggestions I think would make things nicer for others looking at
this. How about splitting the compile-time buffer_overflow() report
into the write and read overflow pieces:

void __dst_overflow(void) __compiletime_error("writes beyond end of destination buffer");
void __src_overflow(void) __compiletime_error("reads beyond end of source buffer");

With things like memcpy split up:

        if (__builtin_constant_p(size)) {
                if (p_size < size)
                        __dst_overflow();
                if (q_size < size)
                        __src_overflow();
        }

I don't think it's worth doing the same for the runtime side since
that'll just increase text size, etc. (Or, I wasn't creative enough to
think of a way to distinguish them without doing so.)
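
Put together, a fortified memcpy() with the split reporting would look
roughly like this (sketch only, combining the pieces above with the
patch's existing runtime check):

__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
{
        size_t p_size = __builtin_object_size(p, 0);
        size_t q_size = __builtin_object_size(q, 0);

        if (__builtin_constant_p(size)) {
                if (p_size < size)
                        __dst_overflow();
                if (q_size < size)
                        __src_overflow();
        }
        if (p_size < size || q_size < size)
                fortify_panic(__func__);
        return __builtin_memcpy(p, q, size);
}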

I tried to figure out a way to use __same_type(q, (char *)0) to trigger
the strncpy() suggestion automatically and issue a warning, but I
couldn't get it to work for some reason. It never tripped.
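
For reference, the shape I had in mind was roughly this (hypothetical
sketch; __memcpy_of_string is an illustrative name, not an existing
helper):

void __memcpy_of_string(void) __compiletime_warning("memcpy() of string; consider strncpy()");

__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
{
        ...
        if (__same_type(q, (char *)0) || __same_type(q, (const char *)0))
                __memcpy_of_string();
        ...
}

One plausible reason it never trips: inside the wrapper, q has already
been converted to const void *, so __same_type() only ever sees the
parameter's type; the caller's char * type is lost at the call boundary.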

> +#if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
> +__FORTIFY_INLINE char *strcpy(char *p, const char *q)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strcpy(p, q);
> +       if (strlcpy(p, q, p_size) >= p_size)
> +               fortify_panic(__func__);
> +       return p;
> +}

The other thing, which is cosmetic, would be to swap "p" and "q" for
"dst" and "src" everywhere, just to make these more readable.

The allyesconfig build has tripped over a bunch of places where memcpy()
needs to be strncpy(). I'll send those patches in a bit...

-Kees

> +
> +__FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_strncpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE char *strcat(char *p, const char *q)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strcat(p, q);
> +       if (strlcat(p, q, p_size) >= p_size)
> +               fortify_panic(__func__);
> +       return p;
> +}
> +
> +__FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
> +{
> +       size_t p_len, copy_len;
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strncat(p, q, count);
> +       p_len = __builtin_strlen(p);
> +       copy_len = strnlen(q, count);
> +       if (p_size < p_len + copy_len + 1)
> +               fortify_panic(__func__);
> +       __builtin_memcpy(p + p_len, q, copy_len);
> +       p[p_len + copy_len] = '\0';
> +       return p;
> +}
> +
> +__FORTIFY_INLINE __kernel_size_t strlen(const char *p)
> +{
> +       __kernel_size_t ret;
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (p_size == (size_t)-1)
> +               return __builtin_strlen(p);
> +       ret = strnlen(p, p_size);
> +       if (p_size <= ret)
> +               fortify_panic(__func__);
> +       return ret;
> +}
> +
> +extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen);
> +__FORTIFY_INLINE __kernel_size_t strnlen(const char *p, __kernel_size_t maxlen)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       __kernel_size_t ret = __real_strnlen(p, maxlen < p_size ? maxlen : p_size);
> +       if (p_size <= ret)
> +               fortify_panic(__func__);
> +       return ret;
> +}
> +
> +__FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memset(p, c, size);
> +}
> +
> +__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       size_t q_size = __builtin_object_size(q, 0);
> +       if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +               __buffer_overflow();
> +       if (p_size < size || q_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memcpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       size_t q_size = __builtin_object_size(q, 0);
> +       if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +               __buffer_overflow();
> +       if (p_size < size || q_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memmove(p, q, size);
> +}
> +
> +extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
> +__FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __real_memscan(p, c, size);
> +}
> +
> +__FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       size_t q_size = __builtin_object_size(q, 0);
> +       if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +               __buffer_overflow();
> +       if (p_size < size || q_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memcmp(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __builtin_memchr(p, c, size);
> +}
> +
> +void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
> +__FORTIFY_INLINE void *memchr_inv(const void *p, int c, size_t size)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __real_memchr_inv(p, c, size);
> +}
> +
> +extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup);
> +__FORTIFY_INLINE void *kmemdup(const void *p, size_t size, gfp_t gfp)
> +{
> +       size_t p_size = __builtin_object_size(p, 0);
> +       if (__builtin_constant_p(size) && p_size < size)
> +               __buffer_overflow();
> +       if (p_size < size)
> +               fortify_panic(__func__);
> +       return __real_kmemdup(p, size, gfp);
> +}
> +#endif
> +
>  #endif /* _LINUX_STRING_H_ */
> diff --git a/lib/string.c b/lib/string.c
> index b5c9a1168d3a..ca356e537e24 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -19,6 +19,8 @@
>   * -  Kissed strtok() goodbye
>   */
>
> +#define __NO_FORTIFY
> +
>  #include <linux/types.h>
>  #include <linux/string.h>
>  #include <linux/ctype.h>
> @@ -952,3 +954,9 @@ char *strreplace(char *s, char old, char new)
>         return s;
>  }
>  EXPORT_SYMBOL(strreplace);
> +
> +void fortify_panic(const char *name)
> +{
> +       panic("detected buffer overflow in %s", name);
> +}
> +EXPORT_SYMBOL(fortify_panic);
> diff --git a/security/Kconfig b/security/Kconfig
> index 93027fdf47d1..0e5035d720ce 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -154,6 +154,12 @@ config HARDENED_USERCOPY_PAGESPAN
>           been removed. This config is intended to be used only while
>           trying to find such users.
>
> +config FORTIFY_SOURCE
> +       bool "Harden common functions against buffer overflows"
> +       help
> +         Detect overflows of buffers in common functions where the compiler
> +         can determine the buffer size.
> +
>  config STATIC_USERMODEHELPER
>         bool "Force all usermode helper calls through a single binary"
>         help
> --
> 2.12.2
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-05 10:38       ` Mark Rutland
  2017-05-05 10:52         ` Mark Rutland
@ 2017-05-05 17:38         ` Daniel Micay
  2017-05-08 11:41           ` Mark Rutland
  2017-05-09 20:39         ` Kees Cook
  2 siblings, 1 reply; 25+ messages in thread
From: Daniel Micay @ 2017-05-05 17:38 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

On Fri, 2017-05-05 at 11:38 +0100, Mark Rutland wrote:
> On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> > On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > > > ... with an EFI stub fortify_panic() hacked in, I can build an
> > > > arm64 kernel with this applied. It dies at some point after
> > > > exiting EFI boot services; I don't know whether it made it out
> > > > of the stub and into the kernel proper.
> > > 
> > > Could start with #define __NO_FORTIFY above the #include sections
> > > there instead (or -D__NO_FORTIFY as a compiler flag), which will
> > > skip fortifying those for now.
> > 
> > Neat. Given there are a few files, doing the latter for the stub is
> > the simplest option.
> > 
> > > I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel, so
> > > it should be close to working on other systems (but not
> > > necessarily with messy drivers). The x86 EFI workaround works.
> > 
> > FWIW, I've been playing atop next-20170504, with a tonne of other
> > debug options enabled (including KASAN_INLINE).
> > 
> > From a quick look with a JTAG debugger, the CPU got out of the stub
> > and into the kernel. It looks like it's dying initialising KASAN,
> > where the vectors appear to have been corrupted.
> 
> 
> ... though it's worrying that __memcpy() is overridden. I think we
> need to be more careful with the way we instrument the string
> functions.

I don't think there's any way for the fortify code to be intercepting
__memcpy. There's a memcpy function in arch/x86/boot/compressed/string.c
defined via __memcpy and that appears to be working.

A shot in the dark is that it might not happen if a __real_memcpy alias
via __RENAME is used instead of __builtin_memcpy, but I'm not sure how
or why this is breaking in the first place.
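
Concretely, that would mean something like this (untested sketch):

extern void *__real_memcpy(void *, const void *, __kernel_size_t) __RENAME(memcpy);

__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
{
        ...
        return __real_memcpy(p, q, size);
}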

> FWIW, with that, and the previous bits, I can boot a next-20170504
> kernel with this applied.
> 
> However, I get a KASAN splat from the SLUB init code, even though
> that's deliberately not instrumented by KASAN:
> 
> [    0.000000] ==================================================================
> [    0.000000] BUG: KASAN: slab-out-of-bounds in kmem_cache_alloc+0x2ec/0x438
> [    0.000000] Write of size 352 at addr ffff800936802000 by task swapper/0
> [    0.000000] 
> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.11.0-next-20170504-00002-g760cfdb-dirty #15
> [    0.000000] Hardware name: ARM Juno development board (r1) (DT)
> [    0.000000] Call trace:
> [    0.000000] [<ffff2000080949c8>] dump_backtrace+0x0/0x538
> [    0.000000] [<ffff2000080951a0>] show_stack+0x20/0x30
> [    0.000000] [<ffff200008c125a0>] dump_stack+0x120/0x188
> [    0.000000] [<ffff20000857ac04>] print_address_description+0x10c/0x380
> [    0.000000] [<ffff20000857b354>] kasan_report+0x12c/0x3b8
> [    0.000000] [<ffff200008579d54>] check_memory_region+0x144/0x1a0
> [    0.000000] [<ffff20000857a1f4>] memset+0x2c/0x50
> [    0.000000] [<ffff2000085730bc>] kmem_cache_alloc+0x2ec/0x438
> [    0.000000] [<ffff20000a937528>] bootstrap+0x34/0x28c
> [    0.000000] [<ffff20000a937804>] kmem_cache_init+0x84/0x118
> [    0.000000] [<ffff20000a9014bc>] start_kernel+0x2f8/0x644
> [    0.000000] [<ffff20000a9001e8>] __primary_switched+0x6c/0x74
> [    0.000000] 
> [    0.000000] Allocated by task 0:
> [    0.000000] (stack is not available)
> [    0.000000] 
> [    0.000000] Freed by task 0:
> [    0.000000] (stack is not available)
> [    0.000000] 
> [    0.000000] The buggy address belongs to the object at ffff800936802000
> [    0.000000]  which belongs to the cache kmem_cache of size 352
> [    0.000000] The buggy address is located 0 bytes inside of
> [    0.000000]  352-byte region [ffff800936802000, ffff800936802160)
> [    0.000000] The buggy address belongs to the page:
> [    0.000000] page:ffff7e0024da0080 count:1 mapcount:0 mapping:          (null) index:0x0 compound_mapcount: 0
> [    0.000000] flags: 0x1fffc00000008100(slab|head)
> [    0.000000] raw: 1fffc00000008100 0000000000000000 0000000000000000 0000000180100010
> [    0.000000] raw: dead000000000100 dead000000000200 ffff20000aa2c068 0000000000000000
> [    0.000000] page dumped because: kasan: bad access detected
> [    0.000000] 
> [    0.000000] Memory state around the buggy address:
> [    0.000000]  ffff800936801f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> [    0.000000]  ffff800936801f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> [    0.000000] >ffff800936802000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [    0.000000]                    ^
> [    0.000000]  ffff800936802080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [    0.000000]  ffff800936802100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [    0.000000] ==================================================================

I'm not sure about this either. I'd like to avoid needing !KASAN since
these are useful when paired together for finding bugs...

ASan is incompatible with glibc _FORTIFY_SOURCE, but this is much
different (no _chk functions) and it *should* just work already.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [kernel-hardening] Re: [PATCH] add the option of fortified string.h functions
  2017-05-04 14:24 [kernel-hardening] [PATCH] add the option of fortified string.h functions Daniel Micay
                   ` (2 preceding siblings ...)
  2017-05-05 16:42   ` [kernel-hardening] " Kees Cook
@ 2017-05-06  2:12 ` Kees Cook
  2017-05-08 17:57   ` Daniel Axtens
       [not found] ` <20170508175723.448CCAC043@b01ledav006.gho.pok.ibm.com>
  5 siblings, 0 replies; 25+ messages in thread
From: Kees Cook @ 2017-05-06  2:12 UTC (permalink / raw)
  To: Daniel Micay; +Cc: kernel-hardening

On Thu, May 4, 2017 at 7:24 AM, Daniel Micay <danielmicay@gmail.com> wrote:
> This adds support for compiling with a rough equivalent to the glibc
> _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
> overflow checks for string.h functions when the compiler determines the
> size of the source or destination buffer at compile-time. Unlike glibc,
> it covers buffer reads in addition to writes.
>
> GNU C __builtin_*_chk intrinsics are avoided because they're only
> designed to detect write overflows and are overly complex. A single
> inline branch works for everything but strncat while those intrinsics
> would force the creation of a bunch of extra non-inline wrappers that
> aren't able to receive the detected source buffer size.
>
> This detects various undefined uses of memcpy, etc. at compile-time
> outside of non-core kernel code, and in the arm64 vdso code, so there
> will need to be a series of fixes applied (mainly memcpy -> strncpy for
> copying string constants to avoid copying past the end of the source).
> It isn't particularly bad, but there are likely some issues that occur
> during regular use at runtime (none found so far).
>
> Future improvements left out of initial implementation for simplicity,
> as it's all quite optional and can be done incrementally:
>
> The fortified string functions should place a limit on reads from the
> source. For strcat/strcpy, these could just be a variant of strlcat /
> strlcpy using the size limit as a bound on both the source and
> destination, with the return value only reporting whether truncation
> occurred rather than providing the source length. It would be an easier
> API for developers to use too and not that it really matters but it
> would be more efficient for the case where truncation is intended.
>
> It should be possible to optionally use __builtin_object_size(x, 1) for
> some functions (C strings) to detect intra-object overflows (like
> glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
> approach to avoid likely compatibility issues.
>
> The error reporting could be made friendlier by splitting up the
> compile-time error for reads and writes. The runtime error could also
> directly report the buffer and copy sizes.
>
> It may make sense to have the compile-time checks in a separate
> configuration option since they wouldn't have a runtime performance cost
> if there was an ifdef for the runtime check. However, the runtime cost
> of these checks is already quite low (much lower than SSP) and as long
> as some people are using it with allyesconfig, the issues will be found
> for any in-tree code.
>
> Signed-off-by: Daniel Micay <danielmicay@gmail.com>
> ---
>  arch/x86/boot/compressed/misc.c |   5 ++
>  include/linux/string.h          | 161 ++++++++++++++++++++++++++++++++++++++++
>  lib/string.c                    |   8 ++
>  security/Kconfig                |   6 ++
>  4 files changed, 180 insertions(+)
>
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index b3c5a5f030ce..43691238a21d 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -409,3 +409,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
>         debug_putstr("done.\nBooting the kernel.\n");
>         return output;
>  }
> +
> +void fortify_panic(const char *name)
> +{
> +       error("detected buffer overflow");
> +}
> diff --git a/include/linux/string.h b/include/linux/string.h
> index 26b6f6a66f83..3bd429c9593a 100644
> --- a/include/linux/string.h
> +++ b/include/linux/string.h
> @@ -169,4 +169,165 @@ static inline const char *kbasename(const char *path)
>         return tail ? tail + 1 : path;
>  }
>
> +#define __FORTIFY_INLINE extern __always_inline __attribute__((gnu_inline))
> +#define __RENAME(x) __asm__(#x)
> +
> +void fortify_panic(const char *name) __noreturn __cold;

To avoid a warning about the compressed boot fortify_panic() not
returning, this annotation is needed:

diff --git a/arch/x86/boot/compressed/error.h b/arch/x86/boot/compressed/error.h
index 2e59dac07f9e..d732e608e3af 100644
--- a/arch/x86/boot/compressed/error.h
+++ b/arch/x86/boot/compressed/error.h
@@ -1,7 +1,9 @@
 #ifndef BOOT_COMPRESSED_ERROR_H
 #define BOOT_COMPRESSED_ERROR_H

+#include <linux/compiler.h>
+
 void warn(char *m);
-void error(char *m);
+void error(char *m) __noreturn;

 #endif /* BOOT_COMPRESSED_ERROR_H */


-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-05 17:38         ` Daniel Micay
@ 2017-05-08 11:41           ` Mark Rutland
  2017-05-08 16:08             ` Daniel Micay
  0 siblings, 1 reply; 25+ messages in thread
From: Mark Rutland @ 2017-05-08 11:41 UTC (permalink / raw)
  To: Daniel Micay; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

On Fri, May 05, 2017 at 01:38:04PM -0400, Daniel Micay wrote:
> On Fri, 2017-05-05 at 11:38 +0100, Mark Rutland wrote:
> > On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> > > On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > > > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > > > > ... with an EFI stub fortify_panic() hacked in, I can build an
> > > > > arm64 kernel with this applied. It dies at some point after
> > > > > > exiting EFI boot services; I don't know whether it made it out
> > > > > of the stub and into the kernel proper.
> > > > 
> > > > Could start with #define __NO_FORTIFY above the #include
> > > > sections there instead (or -D__NO_FORTIFY as a compiler flag),
> > > > which will skip fortifying those for now.
> > > 
> > > Neat. Given there are a few files, doing the latter for the stub
> > > is the simplest option.
> > > 
> > > > I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel,
> > > > so it should be close to working on other systems (but not
> > > > necessarily with messy drivers). The x86 EFI workaround works.
> > > 
> > > FWIW, I've been playing atop next-20170504, with a tonne of
> > > other debug options enabled (including KASAN_INLINE).
> > > 
> > > From a quick look with a JTAG debugger, the CPU got out of the
> > > stub and into the kernel. It looks like it's dying initialising
> > > KASAN, where the vectors appear to have been corrupted.
> > 
> > > ... though it's worrying that __memcpy() is overridden. I think
> > > we need to be more careful with the way we instrument the string
> > > functions.
> I don't think there's any way for the fortify code to be intercepting
> __memcpy. There's a memcpy function in arch/x86/boot/compressed/string.c
> defined via __memcpy and that appears to be working.

Just to check, are there additional patches to disable fortification of
the KASAN code? With that, things seem fine.

> A shot in the dark is that it might not happen if a __real_memcpy
> alias via __RENAME is used instead of __builtin_memcpy, but I'm not
> sure how or why this is breaking in the first place.

Using a __RENAME(__real_memcpy) alias, and a call to that, didn't help.

With the rename removed (i.e. just an extern __real_memcpy()), it called
__real_memcpy as expected.

I think there's some unintended interaction with <asm/string.h>:

---->8----
#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)

/*
 * For files that are not instrumented (e.g. mm/slub.c) we
 * should use not instrumented version of mem* functions.
 */

#define memcpy(dst, src, len) __memcpy(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)
#endif
---->8----

If I *only* comment out the memcpy define above, KASAN's memcpy() calls
__memcpy() as we expect.

Looking at the assembly, I see memmove() and memset() have the same
issue, and messing with their <asm/string.h> definitions helps.

Looking at the preprocessed source, the fortified memcpy ends up as:

extern inline __attribute__((always_inline)) __attribute__((no_instrument_function)) __attribute__((always_inline)) __attribute__((gnu_inline)) void *__memcpy(void *p, const void *q, __kernel_size_t size)
{
 size_t p_size = __builtin_object_size(p, 0);
 size_t q_size = __builtin_object_size(q, 0);
 if (__builtin_constant_p(size) && (p_size < size || q_size < size))
  __buffer_overflow();
 if (p_size < size || q_size < size)
  fortify_panic(__func__);
 return __builtin_memcpy(p, q, size);
}

... i.e. we override __memcpy() rather than memcpy().

In KASAN, we undef memcpy before providing KASAN's version, so it keeps
its intended name, ending up as:

void *memcpy(void *dest, const void *src, size_t len)
{
 check_memory_region((unsigned long)src, len, false, (unsigned long)__builtin_return_address(0));
 check_memory_region((unsigned long)dest, len, true, (unsigned long)__builtin_return_address(0));

 return __memcpy(dest, src, len);
}

... then __memcpy() gets inlined and the builtin stuff resolved, calling
memcpy().

This'll require some thought.

> > FWIW, with that, and the previous bits, I can boot a next-20170504
> > kernel with this applied.
> > 
> > However, I get a KASAN splat from the SLUB init code, even though
> > that's deliberately not instrumented by KASAN:

[...]

> I'm not sure about this either. I'd like to avoid needing !KASAN since
> these are useful when paired together for finding bugs...

Likewise! I'd like to have both enabled for my fuzzing config.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-08 11:41           ` Mark Rutland
@ 2017-05-08 16:08             ` Daniel Micay
  0 siblings, 0 replies; 25+ messages in thread
From: Daniel Micay @ 2017-05-08 16:08 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Kees Cook, kernel-hardening, ard.biesheuvel, matt

On Mon, 2017-05-08 at 12:41 +0100, Mark Rutland wrote:
> On Fri, May 05, 2017 at 01:38:04PM -0400, Daniel Micay wrote:
> > On Fri, 2017-05-05 at 11:38 +0100, Mark Rutland wrote:
> > > On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> > > > On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > > > > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > > > > > > ... with an EFI stub fortify_panic() hacked in, I can
> > > > > > > build an arm64 kernel with this applied. It dies at some
> > > > > > > point after exiting EFI boot services; I don't know
> > > > > > > whether it made it out of the stub and into the kernel
> > > > > > > proper.
> > > > > 
> > > > > Could start with #define __NO_FORTIFY above the #include
> > > > > sections there instead (or -D__NO_FORTIFY as a compiler flag),
> > > > > which will skip fortifying those for now.
> > > > 
> > > > Neat. Given there are a few files, doing the latter for the stub
> > > > is the simplest option.
> > > > 
> > > > > I'm successfully using this on a non-EFI ARM64 3.18 LTS
> > > > > kernel, so it should be close to working on other systems
> > > > > (but not necessarily with messy drivers). The x86 EFI
> > > > > workaround works.
> > > > 
> > > > FWIW, I've been playing atop next-20170504, with a tonne of
> > > > other debug options enabled (including KASAN_INLINE).
> > > > 
> > > > From a quick look with a JTAG debugger, the CPU got out of the
> > > > stub and into the kernel. It looks like it's dying initialising
> > > > KASAN, where the vectors appear to have been corrupted.
> > > 
> > > ... though it's worrying that __memcpy() is overridden. I think
> > > we need to be more careful with the way we instrument the string
> > > functions.
> > 
> > I don't think there's any way for the fortify code to be
> > intercepting __memcpy. There's a memcpy function in
> > arch/x86/boot/compressed/string.c defined via __memcpy and that
> > appears to be working.
> 
> Just to check, are there additional patches to disable fortification
> of the KASAN code? With that, things seem fine.
> 
> > A shot in the dark is that it might not happen if a __real_memcpy
> > alias via __RENAME is used instead of __builtin_memcpy, but I'm not
> > sure how or why this is breaking in the first place.
> 
> Using a __RENAME(__real_memcpy) alias, and a call to that, didn't help.
> 
> With the rename removed (i.e. just an extern __real_memcpy()), it
> called __real_memcpy as expected.
> 
> I think there's some unintended interaction with <asm/string.h>:
> 
> ---->8----
> #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
> 
> /*
>  * For files that are not instrumented (e.g. mm/slub.c) we
>  * should use not instrumented version of mem* functions.
>  */
> 
> #define memcpy(dst, src, len) __memcpy(dst, src, len)
> #define memmove(dst, src, len) __memmove(dst, src, len)
> #define memset(s, c, n) __memset(s, c, n)
> #endif
> ---->8----
> 
> If I *only* comment out the memcpy define above, KASAN's memcpy()
> calls __memcpy() as we expect.
> 
> Looking at the assembly, I see memmove() and memset() have the same
> issue, and messing with their <asm/string.h> definitions helps.
> 
> Looking at the preprocessed source, the fortified memcpy ends up as:
> 
> extern inline __attribute__((always_inline)) __attribute__((no_instrument_function)) __attribute__((always_inline)) __attribute__((gnu_inline)) void *__memcpy(void *p, const void *q, __kernel_size_t size)
> {
>  size_t p_size = __builtin_object_size(p, 0);
>  size_t q_size = __builtin_object_size(q, 0);
>  if (__builtin_constant_p(size) && (p_size < size || q_size < size))
>   __buffer_overflow();
>  if (p_size < size || q_size < size)
>   fortify_panic(__func__);
>  return __builtin_memcpy(p, q, size);
> }
> 
> ... i.e. we override __memcpy() rather than memcpy().
> 
> In KASAN, we undef memcpy before providing KASAN's version, so it
> keeps its intended name, ending up as:
> 
> void *memcpy(void *dest, const void *src, size_t len)
> {
>  check_memory_region((unsigned long)src, len, false, (unsigned long)__builtin_return_address(0));
>  check_memory_region((unsigned long)dest, len, true, (unsigned long)__builtin_return_address(0));
> 
>  return __memcpy(dest, src, len);
> }
> 
> ... then __memcpy() gets inlined and the builtin stuff resolved,
> calling memcpy().
> 
> This'll require some thought.
>
> > > FWIW, with that, and the previous bits, I can boot a next-20170504
> > > kernel with this applied.
> > > 
> > > However, I get a KASAN splat from the SLUB init code, even though
> > > that's deliberately not instrumented by KASAN:
> 
> [...]
> 
> > I'm not sure about this either. I'd like to avoid needing !KASAN
> > since these are useful when paired together for finding bugs...
> 
> Likewise! I'd like to have both enabled for my fuzzing config.

Ah, it's happening because of that same #define issue. The wrapper ends
up overriding __memcpy in uninstrumented functions, but then calls
__builtin_memcpy, which makes it instrumented again.

An initial solution would be adding #define __NO_FORTIFY to that #if
block where memcpy and friends are redefined to disable KASan. It isn't
ideal, but it only impacts KASan builds and only where KASan is being
disabled. Alternatively, it could #define something for the fortify
wrappers to force them to use an alias for __memcpy instead of
__builtin_memcpy, but I don't think it's worth the complexity for a tiny
bit of extra coverage in KASan builds. I'll take the easy path.
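
I.e., roughly this in <asm/string.h> (sketch of the easy path):

#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)

/*
 * For files that are not instrumented (e.g. mm/slub.c) we
 * should use not instrumented version of mem* functions.
 */

#define memcpy(dst, src, len) __memcpy(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)

/* the fortify wrappers would otherwise route __memcpy() and friends
 * back through the instrumented __builtin_* forms */
#ifndef __NO_FORTIFY
#define __NO_FORTIFY
#endif
#endif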

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-04 14:24 [kernel-hardening] [PATCH] add the option of fortified string.h functions Daniel Micay
@ 2017-05-08 17:57   ` Daniel Axtens
  2017-05-04 15:48 ` [kernel-hardening] " Mark Rutland
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 25+ messages in thread
From: Daniel Axtens @ 2017-05-08 17:57 UTC (permalink / raw)
  To: Daniel Micay, Kees Cook, kernel-hardening, linuxppc-dev
  Cc: Daniel Micay, andrew.donnellan

Hi Daniel and ppc people,

(ppc people: this does some compile-time and run-time bounds checking on
string functions. It's cool - currently it picks up a lot of random
things, so it will require some more work across the tree, but hopefully
it will eventually hit mainline.)

I've tested this on ppc with pseries_le_defconfig.

I needed a couple of the fixes from github
(https://github.com/thestinger/linux-hardened/commits/4.11) in order to
build, specifically
https://github.com/thestinger/linux-hardened/commit/c65d6a6f309b06703584a23ac2b2bda4bb363143
https://github.com/thestinger/linux-hardened/commit/adcec4756574a8c7f7cb5b6fa51ebeaeeae71aae

Once those were added, I needed to disable fortification in prom_init.c,
as we apparently can't have new symbols there. (I don't understand that
file, so I haven't dug into it.)

We also have problems with the feature fixup tests leading to a panic on
boot. It relates to taking the address of what I think are asm labels(?)
and how we refer to them. I have just disabled fortify here for now; I
think the code could be rewritten to take the labels as unsigned char *,
but I haven't dug into it.
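
For illustration only (hypothetical sketch, using one of the fixup test
labels; not the actual file contents), the idea would be to declare the
labels with a type of unknown size, so that __builtin_object_size()
returns (size_t)-1 and the fortified memcmp() stands down:

/* before: a scalar declaration gives the compiler a known 4-byte
 * object size, so a fortified memcmp() over the region panics */
extern unsigned int ftr_fixup_test1;

/* after: an incomplete array type has unknown size */
extern unsigned int ftr_fixup_test1[];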

With the following fixups, I can boot a LE buildroot initrd (per
https://github.com/linuxppc/linux/wiki/Booting-with-Qemu). Sadly I don't
have access to real hardware any more, so I can't say anything more than
that. (ajd - perhaps relevant to your interests?)

Regards,
Daniel

From 33db928b21e6bcb78f93b7883b423282d65af609 Mon Sep 17 00:00:00 2001
From: Daniel Axtens <dja@axtens.net>
Date: Tue, 9 May 2017 03:15:05 +1000
Subject: [PATCH] powerpc fixes for fortify

Signed-off-by: Daniel Axtens <daniel.axtens@canonical.com>
---
 arch/powerpc/kernel/prom_init.c   | 3 +++
 arch/powerpc/lib/feature-fixups.c | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index dd8a04f3053a..613f79f03877 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -15,6 +15,9 @@
 
 #undef DEBUG_PROM
 
+/* we cannot use FORTIFY as it brings in new symbols */
+#define __NO_FORTIFY
+
 #include <stdarg.h>
 #include <linux/kernel.h>
 #include <linux/string.h>
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index f3917705c686..2eee8558df61 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -12,6 +12,12 @@
  *  2 of the License, or (at your option) any later version.
  */
 
+/*
+ * feature fixup tests do memcmp with raw addresses rather than
+ * objects, which panics on boot with fortify on. TODO FIXME
+ */
+#define __NO_FORTIFY
+
 #include <linux/types.h>
 #include <linux/jump_label.h>
 #include <linux/kernel.h>
-- 
2.11.0


Daniel Micay <danielmicay@gmail.com> writes:

> This adds support for compiling with a rough equivalent to the glibc
> _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
> overflow checks for string.h functions when the compiler determines the
> size of the source or destination buffer at compile-time. Unlike glibc,
> it covers buffer reads in addition to writes.
>
> GNU C __builtin_*_chk intrinsics are avoided because they're only
> designed to detect write overflows and are overly complex. A single
> inline branch works for everything but strncat while those intrinsics
> would force the creation of a bunch of extra non-inline wrappers that
> aren't able to receive the detected source buffer size.
>
> This detects various undefined uses of memcpy, etc. at compile-time
> outside of non-core kernel code, and in the arm64 vdso code, so there
> will need to be a series of fixes applied (mainly memcpy -> strncpy for
> copying string constants to avoid copying past the end of the source).
> It isn't particularly bad, but there are likely some issues that occur
> during regular use at runtime (none found so far).
>
> Future improvements left out of initial implementation for simplicity,
> as it's all quite optional and can be done incrementally:
>
> The fortified string functions should place a limit on reads from the
> source. For strcat/strcpy, these could just be a variant of strlcat /
> strlcpy using the size limit as a bound on both the source and
> destination, with the return value only reporting whether truncation
> occurred rather than providing the source length. It would be an easier
> API for developers to use too and not that it really matters but it
> would be more efficient for the case where truncation is intended.
>
> It should be possible to optionally use __builtin_object_size(x, 1) for
> some functions (C strings) to detect intra-object overflows (like
> glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
> approach to avoid likely compatibility issues.
>
> The error reporting could be made friendlier by splitting up the
> compile-time error for reads and writes. The runtime error could also
> directly report the buffer and copy sizes.
>
> It may make sense to have the compile-time checks in a separate
> configuration option since they wouldn't have a runtime performance cost
> if there was an ifdef for the runtime check. However, the runtime cost
> of these checks is already quite low (much lower than SSP) and as long
> as some people are using it with allyesconfig, the issues will be found
> for any in-tree code.
>
> Signed-off-by: Daniel Micay <danielmicay@gmail.com>
> ---
>  arch/x86/boot/compressed/misc.c |   5 ++
>  include/linux/string.h          | 161 ++++++++++++++++++++++++++++++++++++++++
>  lib/string.c                    |   8 ++
>  security/Kconfig                |   6 ++
>  4 files changed, 180 insertions(+)
>
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index b3c5a5f030ce..43691238a21d 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -409,3 +409,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
>  	debug_putstr("done.\nBooting the kernel.\n");
>  	return output;
>  }
> +
> +void fortify_panic(const char *name)
> +{
> +	error("detected buffer overflow");
> +}
> diff --git a/include/linux/string.h b/include/linux/string.h
> index 26b6f6a66f83..3bd429c9593a 100644
> --- a/include/linux/string.h
> +++ b/include/linux/string.h
> @@ -169,4 +169,165 @@ static inline const char *kbasename(const char *path)
>  	return tail ? tail + 1 : path;
>  }
>  
> +#define __FORTIFY_INLINE extern __always_inline __attribute__((gnu_inline))
> +#define __RENAME(x) __asm__(#x)
> +
> +void fortify_panic(const char *name) __noreturn __cold;
> +void __buffer_overflow(void) __compiletime_error("buffer overflow");
> +
> +#if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
> +__FORTIFY_INLINE char *strcpy(char *p, const char *q)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strcpy(p, q);
> +	if (strlcpy(p, q, p_size) >= p_size)
> +		fortify_panic(__func__);
> +	return p;
> +}
> +
> +__FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_strncpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE char *strcat(char *p, const char *q)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strcat(p, q);
> +	if (strlcat(p, q, p_size) >= p_size)
> +		fortify_panic(__func__);
> +	return p;
> +}
> +
> +__FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
> +{
> +	size_t p_len, copy_len;
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strncat(p, q, count);
> +	p_len = __builtin_strlen(p);
> +	copy_len = strnlen(q, count);
> +	if (p_size < p_len + copy_len + 1)
> +		fortify_panic(__func__);
> +	__builtin_memcpy(p + p_len, q, copy_len);
> +	p[p_len + copy_len] = '\0';
> +	return p;
> +}
> +
> +__FORTIFY_INLINE __kernel_size_t strlen(const char *p)
> +{
> +	__kernel_size_t ret;
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (p_size == (size_t)-1)
> +		return __builtin_strlen(p);
> +	ret = strnlen(p, p_size);
> +	if (p_size <= ret)
> +		fortify_panic(__func__);
> +	return ret;
> +}
> +
> +extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen);
> +__FORTIFY_INLINE __kernel_size_t strnlen(const char *p, __kernel_size_t maxlen)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	__kernel_size_t ret = __real_strnlen(p, maxlen < p_size ? maxlen : p_size);
> +	if (p_size <= ret)
> +		fortify_panic(__func__);
> +	return ret;
> +}
> +
> +__FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memset(p, c, size);
> +}
> +
> +__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memcpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memmove(p, q, size);
> +}
> +
> +extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
> +__FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_memscan(p, c, size);
> +}
> +
> +__FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memcmp(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memchr(p, c, size);
> +}
> +
> +void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
> +__FORTIFY_INLINE void *memchr_inv(const void *p, int c, size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_memchr_inv(p, c, size);
> +}
> +
> +extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup);
> +__FORTIFY_INLINE void *kmemdup(const void *p, size_t size, gfp_t gfp)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_kmemdup(p, size, gfp);
> +}
> +#endif
> +
>  #endif /* _LINUX_STRING_H_ */
> diff --git a/lib/string.c b/lib/string.c
> index b5c9a1168d3a..ca356e537e24 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -19,6 +19,8 @@
>   * -  Kissed strtok() goodbye
>   */
>  
> +#define __NO_FORTIFY
> +
>  #include <linux/types.h>
>  #include <linux/string.h>
>  #include <linux/ctype.h>
> @@ -952,3 +954,9 @@ char *strreplace(char *s, char old, char new)
>  	return s;
>  }
>  EXPORT_SYMBOL(strreplace);
> +
> +void fortify_panic(const char *name)
> +{
> +	panic("detected buffer overflow in %s", name);
> +}
> +EXPORT_SYMBOL(fortify_panic);
> diff --git a/security/Kconfig b/security/Kconfig
> index 93027fdf47d1..0e5035d720ce 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -154,6 +154,12 @@ config HARDENED_USERCOPY_PAGESPAN
>  	  been removed. This config is intended to be used only while
>  	  trying to find such users.
>  
> +config FORTIFY_SOURCE
> +	bool "Harden common functions against buffer overflows"
> +	help
> +	  Detect overflows of buffers in common functions where the compiler
> +	  can determine the buffer size.
> +
>  config STATIC_USERMODEHELPER
>  	bool "Force all usermode helper calls through a single binary"
>  	help
> -- 
> 2.12.2

^ permalink raw reply related	[flat|nested] 25+ messages in thread

> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memset(p, c, size);
> +}
> +
> +__FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memcpy(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memmove(p, q, size);
> +}
> +
> +extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
> +__FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_memscan(p, c, size);
> +}
> +
> +__FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	size_t q_size = __builtin_object_size(q, 0);
> +	if (__builtin_constant_p(size) && (p_size < size || q_size < size))
> +		__buffer_overflow();
> +	if (p_size < size || q_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memcmp(p, q, size);
> +}
> +
> +__FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __builtin_memchr(p, c, size);
> +}
> +
> +void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
> +__FORTIFY_INLINE void *memchr_inv(const void *p, int c, size_t size)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_memchr_inv(p, c, size);
> +}
> +
> +extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup);
> +__FORTIFY_INLINE void *kmemdup(const void *p, size_t size, gfp_t gfp)
> +{
> +	size_t p_size = __builtin_object_size(p, 0);
> +	if (__builtin_constant_p(size) && p_size < size)
> +		__buffer_overflow();
> +	if (p_size < size)
> +		fortify_panic(__func__);
> +	return __real_kmemdup(p, size, gfp);
> +}
> +#endif
> +
>  #endif /* _LINUX_STRING_H_ */
> diff --git a/lib/string.c b/lib/string.c
> index b5c9a1168d3a..ca356e537e24 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -19,6 +19,8 @@
>   * -  Kissed strtok() goodbye
>   */
>  
> +#define __NO_FORTIFY
> +
>  #include <linux/types.h>
>  #include <linux/string.h>
>  #include <linux/ctype.h>
> @@ -952,3 +954,9 @@ char *strreplace(char *s, char old, char new)
>  	return s;
>  }
>  EXPORT_SYMBOL(strreplace);
> +
> +void fortify_panic(const char *name)
> +{
> +	panic("detected buffer overflow in %s", name);
> +}
> +EXPORT_SYMBOL(fortify_panic);
> diff --git a/security/Kconfig b/security/Kconfig
> index 93027fdf47d1..0e5035d720ce 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -154,6 +154,12 @@ config HARDENED_USERCOPY_PAGESPAN
>  	  been removed. This config is intended to be used only while
>  	  trying to find such users.
>  
> +config FORTIFY_SOURCE
> +	bool "Harden common functions against buffer overflows"
> +	help
> +	  Detect overflows of buffers in common functions where the compiler
> +	  can determine the buffer size.
> +
>  config STATIC_USERMODEHELPER
>  	bool "Force all usermode helper calls through a single binary"
>  	help
> -- 
> 2.12.2

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-08 17:57   ` Daniel Axtens
  (?)
@ 2017-05-08 20:50   ` Daniel Micay
  2017-05-08 21:53     ` Kees Cook
  -1 siblings, 1 reply; 25+ messages in thread
From: Daniel Micay @ 2017-05-08 20:50 UTC (permalink / raw)
  To: Daniel Axtens, Kees Cook, kernel-hardening, linuxppc-dev; +Cc: andrew.donnellan

On Tue, 2017-05-09 at 03:57 +1000, Daniel Axtens wrote:
> Hi Daniel and ppc people,
> 
> (ppc people: this does some compile and run time bounds checking on
> string functions. It's cool - currently it picks up a lot of random
> things so it will require some more work across the tree, but
> hopefully it will eventually hit mainline.)
> 
> I've tested this on ppc with pseries_le_defconfig.
> 
> I needed a couple of the fixes from github
> (https://github.com/thestinger/linux-hardened/commits/4.11) in order to
> build, specifically
> https://github.com/thestinger/linux-hardened/commit/c65d6a6f309b06703584a23ac2b2bda4bb363143
> https://github.com/thestinger/linux-hardened/commit/adcec4756574a8c7f7cb5b6fa51ebeaeeae71aae

By the way, Kees is working on landing these upstream as proper fixes
rather than the workarounds I did there. In most cases, it's a matter of
migrating from a memcpy past the end of a constant string to strncpy,
which still fills the destination completely, but with zeroes rather
than arbitrary junk from rodata (strcpy/strlcpy wouldn't cover that). A
few of them are a bit weirder, though. I haven't seen any false
positives yet, due to sticking to __builtin_object_size(p, 0) for now.
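
To illustrate the shape of those fixes (a made-up example, not one of
the actual in-tree cases):

	char name[16];

	/* before: reads 16 bytes from a 6 byte string constant in rodata */
	memcpy(name, "hello", sizeof(name));

	/* after: copies the 5 chars of "hello" and zero-fills the rest */
	strncpy(name, "hello", sizeof(name));

Both leave name fully initialized; only the strncpy version avoids
reading past the end of the source.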

> Once those were added, I needed to disable fortification in
> prom_init.c, as we apparently can't have new symbols there. (I don't
> understand that file so I haven't dug into it.)
> 
> We also have problems with the feature fixup tests leading to a panic
> on boot. It relates to getting what I think are asm labels(?) and how
> we address them. I have just disabled fortify here for now; I think
> the code could be rewritten to take the labels as unsigned char *, but
> I haven't dug into it.

They probably need to be changed to use 'unsigned char foo[]' instead of
'char foo' with &foo taken as an address and used as a larger buffer.
The latter is undefined: the compiler can assume accesses are limited to
the size of the type used in the declaration, and these fortified
wrappers now enforce that assumption rather than merely exploiting it
for optimization (in practice, little would break without them, but it
could).
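
Something like this (a sketch with made-up names, u8 from
linux/types.h):

	/* undefined: the compiler can assume the object is 1 byte, so
	 * __builtin_object_size(&patched, 0) == 1 and the checks trip */
	extern u8 patched, expected;
	memcmp(&patched, &expected, len);

	/* ok: incomplete array types have unknown size, so the fortified
	 * wrappers fall through to the real memcmp */
	extern u8 patched[], expected[];
	memcmp(patched, expected, len);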

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-08 20:50   ` Daniel Micay
@ 2017-05-08 21:53     ` Kees Cook
  0 siblings, 0 replies; 25+ messages in thread
From: Kees Cook @ 2017-05-08 21:53 UTC (permalink / raw)
  To: Daniel Micay
  Cc: Daniel Axtens, kernel-hardening, linuxppc-dev, Andrew Donnellan

On Mon, May 8, 2017 at 1:50 PM, Daniel Micay <danielmicay@gmail.com> wrote:
> On Tue, 2017-05-09 at 03:57 +1000, Daniel Axtens wrote:
>> Hi Daniel and ppc people,
>>
>> (ppc people: this does some compile and run time bounds checking on
>> string functions. It's cool - currently it picks up a lot of random
>> things so it will require some more work across the tree, but
>> hopefully it will eventually hit mainline.)
>>
>> I've tested this on ppc with pseries_le_defconfig.

Yay! Thanks very much!

>>
>> I needed a couple of the fixes from github
>> (https://github.com/thestinger/linux-hardened/commits/4.11) in order to
>> build, specifically
>> https://github.com/thestinger/linux-hardened/commit/c65d6a6f309b06703584a23ac2b2bda4bb363143
>> https://github.com/thestinger/linux-hardened/commit/adcec4756574a8c7f7cb5b6fa51ebeaeeae71aae
>
> By the way, Kees is working on landing these upstream as proper fixes
> rather than the workarounds I did there. In most cases, it's a matter of
> migrating from a memcpy past the end of a constant string to strncpy,
> which still fills the destination completely, but with zeroes rather
> than arbitrary junk from rodata (strcpy/strlcpy wouldn't cover that). A
> few of them are a bit weirder, though. I haven't seen any false
> positives yet, due to sticking to __builtin_object_size(p, 0) for now.

FWIW, all the net-dev ones have been applied for -next now.

>> Once those were added, I needed to disable fortification in
>> prom_init.c, as we apparently can't have new symbols there. (I don't
>> understand that file so I haven't dug into it.)
>>
>> We also have problems with the feature fixup tests leading to a panic
>> on boot. It relates to getting what I think are asm labels(?) and how
>> we address them. I have just disabled fortify here for now; I think
>> the code could be rewritten to take the labels as unsigned char *, but
>> I haven't dug into it.
>
> They probably need to be changed to use 'unsigned char foo[]' instead of
> 'char foo' with &foo taken as an address and used as a larger buffer.
> The latter is undefined: the compiler can assume accesses are limited to
> the size of the type used in the declaration, and these fortified
> wrappers now enforce that assumption rather than merely exploiting it
> for optimization (in practice, little would break without them, but it
> could).

We'd need something to actually extract the sizes of the asm
functions. Right now, that kind of thing is done in the linker
scripts, but that may be too late.

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
       [not found] ` <20170508175723.448CCAC043@b01ledav006.gho.pok.ibm.com>
@ 2017-05-09  6:24   ` Andrew Donnellan
  0 siblings, 0 replies; 25+ messages in thread
From: Andrew Donnellan @ 2017-05-09  6:24 UTC (permalink / raw)
  To: Daniel Axtens, Daniel Micay, Kees Cook, kernel-hardening, linuxppc-dev

On 09/05/17 03:57, Daniel Axtens wrote:
> (ppc people: this does some compile and run time bounds checking on
> string functions. It's cool - currently it picks up a lot of random
> things so it will require some more work across the tree, but hopefully
> it will eventually hit mainline.)

Ooh, nice!

>
> I've tested this on ppc with pseries_le_defconfig.
>
> I needed a couple of the fixes from github
> (https://github.com/thestinger/linux-hardened/commits/4.11) in order to
> build, specifically
> https://github.com/thestinger/linux-hardened/commit/c65d6a6f309b06703584a23ac2b2bda4bb363143
> https://github.com/thestinger/linux-hardened/commit/adcec4756574a8c7f7cb5b6fa51ebeaeeae71aae
>
> Once those were added, I needed to disable fortification in prom_init.c,
> as we apparently can't have new symbols there. (I don't understand that
> file so I haven't dug into it.)
>
> We also have problems with the feature fixup tests leading to a panic on
> boot. It relates to getting what I think are asm labels(?) and how we
> address them. I have just disabled fortify here for now; I think the
> code could be rewritten to take the labels as unsigned char *, but I
> haven't dug into it.
>
> With the following fixups, I can boot a LE buildroot initrd (per
> https://github.com/linuxppc/linux/wiki/Booting-with-Qemu). Sadly I don't
> have access to real hardware any more, so I can't say anything more than
> that. (ajd - perhaps relevant to your interests?)

I'll test it on bare metal when I get the chance, and I'll see if I can
investigate the issues you've raised.

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-05 10:38       ` Mark Rutland
  2017-05-05 10:52         ` Mark Rutland
  2017-05-05 17:38         ` Daniel Micay
@ 2017-05-09 20:39         ` Kees Cook
  2017-05-09 23:02           ` Daniel Micay
  2017-05-10 11:12           ` Mark Rutland
  2 siblings, 2 replies; 25+ messages in thread
From: Kees Cook @ 2017-05-09 20:39 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Daniel Micay, kernel-hardening, Ard Biesheuvel, Matt Fleming

On Fri, May 5, 2017 at 3:38 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> ---->8----
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index f742596..b5327f5 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
>
>  KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
>                                    $(call cc-option,-ffreestanding) \
> -                                  $(call cc-option,-fno-stack-protector)
> +                                  $(call cc-option,-fno-stack-protector) \
> +                                  -D__NO_FORTIFY
>
>  GCOV_PROFILE                   := n
>  KASAN_SANITIZE                 := n
> ---->8----

Can we split the compile time from runtime checks so the efi stub is
still covered by the build-time checks? (Or was there a compile
failure I missed?)

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-09 20:39         ` Kees Cook
@ 2017-05-09 23:02           ` Daniel Micay
  2017-05-10 11:12           ` Mark Rutland
  1 sibling, 0 replies; 25+ messages in thread
From: Daniel Micay @ 2017-05-09 23:02 UTC (permalink / raw)
  To: Kees Cook, Mark Rutland; +Cc: kernel-hardening, Ard Biesheuvel, Matt Fleming

On Tue, 2017-05-09 at 13:39 -0700, Kees Cook wrote:
> On Fri, May 5, 2017 at 3:38 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> > ---->8----
> > diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> > index f742596..b5327f5 100644
> > --- a/drivers/firmware/efi/libstub/Makefile
> > +++ b/drivers/firmware/efi/libstub/Makefile
> > @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
> >
> >  KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
> >                                    $(call cc-option,-ffreestanding) \
> > -                                  $(call cc-option,-fno-stack-protector)
> > +                                  $(call cc-option,-fno-stack-protector) \
> > +                                  -D__NO_FORTIFY
> >
> >  GCOV_PROFILE                   := n
> >  KASAN_SANITIZE                 := n
> > ---->8----
> 
> Can we split the compile time from runtime checks so the efi stub is
> still covered by the build-time checks? (Or was there a compile
> failure I missed?)
> 
> -Kees

It might just need fortify_panic defined somewhere. It seems like the
place I defined it on x86 covers this, but I might be wrong about that.
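
e.g. an untested sketch (the assumption being that the stub would just
hang rather than try to print, since there's no panic() there and the
EFI system table pointer isn't globally available):

	void fortify_panic(const char *name)
	{
		/* no kernel panic() in the stub; just hang */
		for (;;)
			;
	}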

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-09 20:39         ` Kees Cook
  2017-05-09 23:02           ` Daniel Micay
@ 2017-05-10 11:12           ` Mark Rutland
  1 sibling, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2017-05-10 11:12 UTC (permalink / raw)
  To: Kees Cook; +Cc: Daniel Micay, kernel-hardening, Ard Biesheuvel, Matt Fleming

On Tue, May 09, 2017 at 01:39:01PM -0700, Kees Cook wrote:
> On Fri, May 5, 2017 at 3:38 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> > ---->8----
> > diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> > index f742596..b5327f5 100644
> > --- a/drivers/firmware/efi/libstub/Makefile
> > +++ b/drivers/firmware/efi/libstub/Makefile
> > @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
> >
> >  KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
> >                                    $(call cc-option,-ffreestanding) \
> > -                                  $(call cc-option,-fno-stack-protector)
> > +                                  $(call cc-option,-fno-stack-protector) \
> > +                                  -D__NO_FORTIFY
> >
> >  GCOV_PROFILE                   := n
> >  KASAN_SANITIZE                 := n
> > ---->8----
> 
> Can we split the compile time from runtime checks so the efi stub is
> still covered by the build-time checks? (Or was there a compile
> failure I missed?)

Without this, the EFI stub won't build for arm64, due to the lack of a
fortify_panic(). The arm64 __efistub_ symbol mangling prevents us from
using the usual kernel version, which would be wrong to use anyway.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kernel-hardening] [PATCH] add the option of fortified string.h functions
  2017-05-08 17:57   ` Daniel Axtens
@ 2017-05-10 12:00     ` Michael Ellerman
  -1 siblings, 0 replies; 25+ messages in thread
From: Michael Ellerman @ 2017-05-10 12:00 UTC (permalink / raw)
  To: Daniel Axtens, Daniel Micay, Kees Cook, kernel-hardening, linuxppc-dev
  Cc: Daniel Micay, andrew.donnellan

Daniel Axtens <dja@axtens.net> writes:

> Hi Daniel and ppc people,
>
> (ppc people: this does some compile and run time bounds checking on
> string functions. It's cool - currently it picks up a lot of random
> things so it will require some more work across the tree, but hopefully
> it will eventually hit mainline.)

Cool! 

> Once those were added, I needed to disable fortification in prom_init.c,
> as we apparently can't have new symbols there. (I don't understand that
> file so I haven't dug into it.)

We can refer to new symbols from there; we just have a script to check
that we don't refer to anything new inadvertently.

prom_init() is sort of a shim that runs before the kernel. It's linked
with the kernel, but ideally wouldn't be; we never bothered actually
making it separate. O_o

> We also have problems with the feature fixup tests leading to a panic on
> boot. It relates to getting what I think are asm labels(?) and how we
> address them. I have just disabled fortify here for now; I think the
> code could be rewritten to take the labels as unsigned char *, but I
> haven't dug into it.

OK, yeah, it's using asm labels to compare the patched code vs the
expected result. That can probably be fixed by making them char[], like
we do for other things, e.g. __start_interrupts.
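
i.e. roughly (a sketch going from memory of the test code, so the names
may be off):

	extern unsigned int ftr_fixup_test1[], ftr_fixup_test1_expected[];

	check(memcmp(ftr_fixup_test1, ftr_fixup_test1_expected, size) == 0);

rather than scalar declarations with & taken to get the addresses.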

cheers

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2017-05-10 12:00 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-04 14:24 [kernel-hardening] [PATCH] add the option of fortified string.h functions Daniel Micay
2017-05-04 14:51 ` [kernel-hardening] " Daniel Micay
2017-05-04 15:48 ` [kernel-hardening] " Mark Rutland
2017-05-04 17:49   ` Daniel Micay
2017-05-04 18:09     ` Mark Rutland
2017-05-04 19:03       ` Daniel Micay
2017-05-05 10:38       ` Mark Rutland
2017-05-05 10:52         ` Mark Rutland
2017-05-05 13:44           ` Kees Cook
2017-05-05 17:38         ` Daniel Micay
2017-05-08 11:41           ` Mark Rutland
2017-05-08 16:08             ` Daniel Micay
2017-05-09 20:39         ` Kees Cook
2017-05-09 23:02           ` Daniel Micay
2017-05-10 11:12           ` Mark Rutland
2017-05-05 16:42 ` Kees Cook
2017-05-05 16:42   ` [kernel-hardening] " Kees Cook
2017-05-06  2:12 ` Kees Cook
2017-05-08 17:57 ` [kernel-hardening] " Daniel Axtens
2017-05-08 17:57   ` Daniel Axtens
2017-05-08 20:50   ` Daniel Micay
2017-05-08 21:53     ` Kees Cook
2017-05-10 12:00   ` Michael Ellerman
     [not found] ` <20170508175723.448CCAC043@b01ledav006.gho.pok.ibm.com>
2017-05-09  6:24   ` Andrew Donnellan

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.