* [PATCH v4 0/3] RFC: introduce CONFIG_INIT_ALL_MEMORY
@ 2019-04-10 13:17 Alexander Potapenko
  2019-04-10 13:17 ` [PATCH v4 1/3] initmem: introduce CONFIG_INIT_ALL_MEMORY and CONFIG_INIT_ALL_STACK Alexander Potapenko
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Alexander Potapenko @ 2019-04-10 13:17 UTC (permalink / raw)
  To: yamada.masahiro, jmorris, serge
  Cc: linux-security-module, linux-kbuild, ndesaulniers, kcc, dvyukov,
	keescook, sspatil, labbott, kernel-hardening

This patch series is part of a broader effort to initialize heap and
stack memory in the Linux kernel by default.
The rationale is to reduce the severity of bugs caused by the use of
uninitialized memory.

Over the last two years KMSAN (https://github.com/google/kmsan/) has
found more than a hundred such bugs while running in a fairly modest
setup (orders of magnitude fewer CPU-months than KASAN). Some of those
bugs led to information leaks when uninitialized memory was copied to
userspace; others could cause DoS through subverted control flow.
Many more bugs remain undiscovered, so we want to provide distros and
OS vendors with a last-resort measure to mitigate such bugs.

Our plan is to introduce configuration flags that force initialization
of stack and heap variables with a fixed pattern.
This renders information leaks ineffective (only the pattern data can
leak) and makes uses of uninitialized values in conditions more
deterministic and discoverable.

The stack instrumentation part is based on Clang's -ftrivial-auto-var-init
(see https://reviews.llvm.org/D54604; there is also a GCC feature request
for a similar flag: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87210)
or on GCC's -fplugin-arg-structleak_plugin-byref-all.
The heap initialization part is compiler-agnostic and is done in the
places that previously checked for __GFP_ZERO to initialize the newly
allocated memory.
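
To illustrate the bug class being mitigated, consider a hypothetical
driver (not taken from this series; all names are made up) that copies
a partially initialized structure to userspace:

	#include <linux/types.h>
	#include <linux/uaccess.h>

	/* 'reserved' and any structure padding are never written, so
	 * copy_to_user() leaks stale kernel stack bytes; with
	 * -ftrivial-auto-var-init=pattern they hold 0xAA instead. */
	struct ioctl_reply {
		u32 status;
		u32 reserved;
	};

	static long fill_reply(void __user *arg)
	{
		struct ioctl_reply r;

		r.status = 0;	/* r.reserved is left uninitialized */
		if (copy_to_user(arg, &r, sizeof(r)))
			return -EFAULT;
		return 0;
	}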

Alexander Potapenko (3):
  initmem: introduce CONFIG_INIT_ALL_MEMORY and CONFIG_INIT_ALL_STACK
  initmem: introduce CONFIG_INIT_ALL_HEAP
  net: make sk_prot_alloc() work with CONFIG_INIT_ALL_HEAP

 Makefile                               | 10 ++++++
 arch/arm64/Kconfig                     |  1 +
 arch/arm64/include/asm/page.h          |  1 +
 arch/x86/Kconfig                       |  1 +
 arch/x86/include/asm/page_64.h         | 10 ++++++
 arch/x86/lib/clear_page_64.S           | 24 ++++++++++++++
 drivers/infiniband/core/uverbs_ioctl.c |  4 +--
 include/linux/gfp.h                    | 10 ++++++
 include/linux/highmem.h                |  8 +++++
 include/net/sock.h                     |  8 ++---
 kernel/kexec_core.c                    |  8 +++--
 mm/dmapool.c                           |  4 +--
 mm/page_alloc.c                        |  9 ++++--
 mm/slab.c                              | 19 ++++++++----
 mm/slub.c                              | 12 ++++---
 net/core/sock.c                        |  5 +--
 security/Kconfig                       |  1 +
 security/Kconfig.initmem               | 43 ++++++++++++++++++++++++++
 18 files changed, 154 insertions(+), 24 deletions(-)
 create mode 100644 security/Kconfig.initmem

-- 
2.21.0.392.gf8f6787159e-goog

* [PATCH v4 1/3] initmem: introduce CONFIG_INIT_ALL_MEMORY and CONFIG_INIT_ALL_STACK
  2019-04-10 13:17 [PATCH v4 0/3] RFC: introduce CONFIG_INIT_ALL_MEMORY Alexander Potapenko
@ 2019-04-10 13:17 ` Alexander Potapenko
  2019-04-10 13:17 ` [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP Alexander Potapenko
  2019-04-10 13:17 ` [PATCH 3/3] net: make sk_prot_alloc() work with CONFIG_INIT_ALL_HEAP Alexander Potapenko
  2 siblings, 0 replies; 8+ messages in thread
From: Alexander Potapenko @ 2019-04-10 13:17 UTC (permalink / raw)
  To: yamada.masahiro, jmorris, serge
  Cc: linux-security-module, linux-kbuild, ndesaulniers, kcc, dvyukov,
	keescook, sspatil, labbott, kernel-hardening

CONFIG_INIT_ALL_MEMORY is an umbrella config for options that force
heap and stack initialization.
The rationale is to reduce the severity of bugs caused by the use of
uninitialized memory.

CONFIG_INIT_ALL_STACK turns on stack initialization based on
-ftrivial-auto-var-init in Clang builds and on
-fplugin-arg-structleak_plugin-byref-all in GCC builds.

-ftrivial-auto-var-init is a Clang flag that provides trivial
initializers for otherwise-uninitialized local variables, including
struct fields and padding.

It has three possible values:
  pattern - uninitialized locals are filled with a fixed pattern
    (mostly 0xAA on 64-bit platforms, see https://reviews.llvm.org/D54604
    for more details), which is likely to cause crashes when an
    uninitialized value is used;
  zero - uninitialized locals are filled with zeroes (it is still being
    debated whether this mode will make it into an official Clang
    release);
  uninitialized (default) - uninitialized locals are left intact.

The proposed config builds the kernel with
-ftrivial-auto-var-init=pattern.

Developers can opt out of this feature on a per-variable basis by
using __attribute__((uninitialized)).
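
A minimal sketch of the resulting semantics and of the opt-out
(hypothetical code; consume() is a made-up consumer, and the attribute
is assumed to behave as described in https://reviews.llvm.org/D54604):

	void f(void)
	{
		long n;		/* =pattern: every byte becomes 0xAA */
		char log[64];	/* =pattern: every byte becomes 0xAA */

		/* Per-variable opt-out: this buffer is fully overwritten
		 * right away, so forced initialization would only add
		 * overhead. */
		char buf[512] __attribute__((uninitialized));

		memset(buf, 0, sizeof(buf));
		consume(n, log, buf);
	}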

For GCC builds, CONFIG_INIT_ALL_STACK is simply wired up to
CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. No opt-out is possible at the
moment.
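
For comparison, a sketch of what the GCC plugin covers (hypothetical
names): it forces zero-initialization of local variables whose address
escapes, e.g.:

	struct req r;	/* left partially uninitialized here ... */

	setup_req(&r);	/* ... but &r escapes, so STRUCTLEAK_BYREF_ALL
			 * zero-initializes r at function entry */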

Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Sandeep Patil <sspatil@android.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-security-module@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
 v2:
  - addressed Kees Cook's comments: added GCC support
 v3: addressed Masahiro Yamada's comments:
  - dropped per-file opt-out mechanism
  - fixed GCC_PLUGINS dependencies
 v4:
  - addressed Randy Dunlap's comments: remove redundant "depends on"
  - addressed Masahiro Yamada's comments: drop Makefile.initmem
---
 Makefile                 | 10 ++++++++++
 security/Kconfig         |  1 +
 security/Kconfig.initmem | 28 ++++++++++++++++++++++++++++
 3 files changed, 39 insertions(+)
 create mode 100644 security/Kconfig.initmem

diff --git a/Makefile b/Makefile
index 15c8251d4d5e..02f4b9df0102 100644
--- a/Makefile
+++ b/Makefile
@@ -727,6 +727,16 @@ KBUILD_CFLAGS += $(call cc-disable-warning, tautological-compare)
 # See modpost pattern 2
 KBUILD_CFLAGS += $(call cc-option, -mno-global-merge,)
 KBUILD_CFLAGS += $(call cc-option, -fcatch-undefined-behavior)
+
+ifdef CONFIG_INIT_ALL_STACK
+# Clang's -ftrivial-auto-var-init=pattern flag initializes the
+# uninitialized parts of local variables (including fields and padding)
+# with a fixed pattern (0xAA in most cases).
+ifdef CONFIG_CC_HAS_AUTO_VAR_INIT
+KBUILD_CFLAGS += -ftrivial-auto-var-init=pattern
+endif
+endif
+
 else
 
 # These warnings generated too much noise in a regular build.
diff --git a/security/Kconfig b/security/Kconfig
index 353cfef71d4e..4d27437c2eb8 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -229,6 +229,7 @@ config STATIC_USERMODEHELPER_PATH
 	  If you wish for all usermode helper programs to be disabled,
 	  specify an empty string here (i.e. "").
 
+source "security/Kconfig.initmem"
 source "security/selinux/Kconfig"
 source "security/smack/Kconfig"
 source "security/tomoyo/Kconfig"
diff --git a/security/Kconfig.initmem b/security/Kconfig.initmem
new file mode 100644
index 000000000000..cdad1e185b10
--- /dev/null
+++ b/security/Kconfig.initmem
@@ -0,0 +1,28 @@
+menu "Initialize all memory"
+
+config CC_HAS_AUTO_VAR_INIT
+	def_bool $(cc-option,-ftrivial-auto-var-init=pattern)
+
+config INIT_ALL_MEMORY
+	bool "Initialize all memory"
+	default n
+	help
+	  Enforce memory initialization to mitigate infoleaks and to make
+	  control-flow bugs that depend on uninitialized values more
+	  deterministic.
+
+if INIT_ALL_MEMORY
+
+config INIT_ALL_STACK
+	bool "Initialize all stack"
+	depends on CC_HAS_AUTO_VAR_INIT || (HAVE_GCC_PLUGINS && PLUGIN_HOSTCC != "")
+	select GCC_PLUGINS if !CC_HAS_AUTO_VAR_INIT
+	select GCC_PLUGIN_STRUCTLEAK if !CC_HAS_AUTO_VAR_INIT
+	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL if !CC_HAS_AUTO_VAR_INIT
+	default y
+	help
+	  Initialize uninitialized stack data with a fixed pattern
+	  (0x00 in GCC, 0xAA in Clang).
+
+endif # INIT_ALL_MEMORY
+endmenu
-- 
2.21.0.392.gf8f6787159e-goog

* [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP
  2019-04-10 13:17 [PATCH v4 0/3] RFC: introduce CONFIG_INIT_ALL_MEMORY Alexander Potapenko
  2019-04-10 13:17 ` [PATCH v4 1/3] initmem: introduce CONFIG_INIT_ALL_MEMORY and CONFIG_INIT_ALL_STACK Alexander Potapenko
@ 2019-04-10 13:17 ` Alexander Potapenko
  2019-04-10 16:09   ` Kees Cook
  2019-04-10 13:17 ` [PATCH 3/3] net: make sk_prot_alloc() work with CONFIG_INIT_ALL_HEAP Alexander Potapenko
  2 siblings, 1 reply; 8+ messages in thread
From: Alexander Potapenko @ 2019-04-10 13:17 UTC (permalink / raw)
  To: yamada.masahiro, jmorris, serge
  Cc: linux-security-module, linux-kbuild, ndesaulniers, kcc, dvyukov,
	keescook, sspatil, labbott, kernel-hardening

This config option adds the possibility to initialize newly allocated
pages and heap objects with a 0xAA pattern.
There are already a number of places where allocations are initialized
based on the presence of the __GFP_ZERO flag. We simply change this
code so that under CONFIG_INIT_ALL_HEAP these allocations are always
initialized with either 0x00 or 0xAA, depending on whether __GFP_ZERO
is set.

No performance optimizations have been made yet to avoid initializing
memory regions twice.
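
Schematically, every such place changes in the same way
(GFP_INIT_ALWAYS_ON and INITMEM_FILL_BYTE() are introduced in this
patch):

	/* Before: */
	if (flags & __GFP_ZERO)
		memset(ptr, 0, size);

	/* After: GFP_INIT_ALWAYS_ON is 1 under CONFIG_INIT_ALL_HEAP;
	 * INITMEM_FILL_BYTE(flags) yields 0x00 for __GFP_ZERO
	 * allocations and 0xAA otherwise. */
	if (GFP_INIT_ALWAYS_ON || (flags & __GFP_ZERO))
		memset(ptr, INITMEM_FILL_BYTE(flags), size);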

Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Sandeep Patil <sspatil@android.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Jann Horn <jannh@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-security-module@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
 v3:
  - addressed comments by Masahiro Yamada (Kconfig fixes)
 v4:
  - addressed Randy Dunlap's comments: remove redundant "depends on"
  - replaced code wiring SLUB_DEBUG and page poisoning with a more
    lightweight implementation (Laura Abbott mentioned turning on
    debugging has serious performance issues)
---
 arch/arm64/Kconfig                     |  1 +
 arch/arm64/include/asm/page.h          |  1 +
 arch/x86/Kconfig                       |  1 +
 arch/x86/include/asm/page_64.h         | 10 ++++++++++
 arch/x86/lib/clear_page_64.S           | 24 ++++++++++++++++++++++++
 drivers/infiniband/core/uverbs_ioctl.c |  4 ++--
 include/linux/gfp.h                    | 10 ++++++++++
 include/linux/highmem.h                |  8 ++++++++
 kernel/kexec_core.c                    |  8 ++++++--
 mm/dmapool.c                           |  4 ++--
 mm/page_alloc.c                        |  9 +++++++--
 mm/slab.c                              | 19 +++++++++++++------
 mm/slub.c                              | 12 ++++++++----
 security/Kconfig.initmem               | 15 +++++++++++++++
 14 files changed, 108 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..3548235752f6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -110,6 +110,7 @@ config ARM64
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
 	select HAVE_ARCH_HUGE_VMAP
+	select HAVE_ARCH_INIT_ALL_HEAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index c88a3cb117a1..a7bea266c16d 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -33,6 +33,7 @@ extern void copy_page(void *to, const void *from);
 extern void clear_page(void *to);
 
 #define clear_user_page(addr,vaddr,pg)  __cpu_clear_user_page(addr, vaddr)
+#define clear_page_pattern(addr) __memset((addr), GFP_POISON_BYTE, PAGE_SIZE)
 #define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)
 
 typedef struct page *pgtable_t;
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5ad92419be19..2058beaf3582 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -116,6 +116,7 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
+	select HAVE_ARCH_INIT_ALL_HEAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 939b1cff4a7b..8e6e0c00f8c8 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -54,6 +54,16 @@ static inline void clear_page(void *page)
 			   : "cc", "memory", "rax", "rcx");
 }
 
+#ifdef CONFIG_INIT_ALL_HEAP
+
+void clear_page_pattern_orig(void *page);
+// FIXME: add rep and erms.
+static inline void clear_page_pattern(void *page)
+{
+	clear_page_pattern_orig(page);
+}
+#endif
+
 void copy_page(void *to, void *from);
 
 #endif	/* !__ASSEMBLY__ */
diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
index 88acd349911b..b64dab97d4ba 100644
--- a/arch/x86/lib/clear_page_64.S
+++ b/arch/x86/lib/clear_page_64.S
@@ -49,3 +49,27 @@ ENTRY(clear_page_erms)
 	ret
 ENDPROC(clear_page_erms)
 EXPORT_SYMBOL_GPL(clear_page_erms)
+
+#ifdef CONFIG_INIT_ALL_HEAP
+ENTRY(clear_page_pattern_orig)
+	movq   $0xAAAAAAAAAAAAAAAA,%rax
+	movl   $4096/64,%ecx
+	.p2align 4
+.Lloop_pat:
+	decl	%ecx
+	movq %rax,(%rdi)
+	PUT(1)
+	PUT(2)
+	PUT(3)
+	PUT(4)
+	PUT(5)
+	PUT(6)
+	PUT(7)
+	leaq	64(%rdi),%rdi
+	jnz	.Lloop_pat
+	nop
+	ret
+ENDPROC(clear_page_pattern_orig)
+EXPORT_SYMBOL_GPL(clear_page_pattern_orig)
+#endif
+
diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index e1379949e663..1ad0eb28e651 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -127,8 +127,8 @@ __malloc void *_uverbs_alloc(struct uverbs_attr_bundle *bundle, size_t size,
 	res = (void *)pbundle->internal_buffer + pbundle->internal_used;
 	pbundle->internal_used =
 		ALIGN(new_used, sizeof(*pbundle->internal_buffer));
-	if (flags & __GFP_ZERO)
-		memset(res, 0, size);
+	if (GFP_INIT_ALWAYS_ON || (flags & __GFP_ZERO))
+		memset(res, INITMEM_FILL_BYTE(flags), size);
 	return res;
 }
 EXPORT_SYMBOL(_uverbs_alloc);
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index fdab7de7490d..a0016357a91a 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -213,6 +213,16 @@ struct vm_area_struct;
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 
+#define GFP_POISON_BYTE (0xAA)
+#ifdef CONFIG_INIT_ALL_HEAP
+#define GFP_INIT_ALWAYS_ON (1)
+#define INITMEM_FILL_BYTE(gfp_flags) \
+	(((gfp_flags) & __GFP_ZERO) ? (0) : GFP_POISON_BYTE)
+#else
+#define GFP_INIT_ALWAYS_ON (0)
+#define INITMEM_FILL_BYTE(gfp_flags) (0)
+#endif
+
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ea5cdbd8c2c3..b48a04c7b046 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -215,6 +215,14 @@ static inline void clear_highpage(struct page *page)
 	kunmap_atomic(kaddr);
 }
 
+static inline void clear_highpage_pattern(struct page *page)
+{
+	void *kaddr = kmap_atomic(page);
+
+	clear_page_pattern(kaddr);
+	kunmap_atomic(kaddr);
+}
+
 static inline void zero_user_segments(struct page *page,
 	unsigned start1, unsigned end1,
 	unsigned start2, unsigned end2)
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d7140447be75..63104a71cf60 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -315,9 +315,13 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 		arch_kexec_post_alloc_pages(page_address(pages), count,
 					    gfp_mask);
 
-		if (gfp_mask & __GFP_ZERO)
+		if (GFP_INIT_ALWAYS_ON || (gfp_mask & __GFP_ZERO)) {
 			for (i = 0; i < count; i++)
-				clear_highpage(pages + i);
+				if (!INITMEM_FILL_BYTE(gfp_mask))
+					clear_highpage(pages + i);
+				else
+					clear_highpage_pattern(pages + i);
+		}
 	}
 
 	return pages;
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 76a160083506..f36dd9065f2b 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -381,8 +381,8 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #endif
 	spin_unlock_irqrestore(&pool->lock, flags);
 
-	if (mem_flags & __GFP_ZERO)
-		memset(retval, 0, pool->size);
+	if (GFP_INIT_ALWAYS_ON || (mem_flags & __GFP_ZERO))
+		memset(retval, INITMEM_FILL_BYTE(mem_flags), pool->size);
 
 	return retval;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d96ca5bc555b..c8477db5ac94 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2014,9 +2014,14 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 
 	post_alloc_hook(page, order, gfp_flags);
 
-	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
+	if (!free_pages_prezeroed() && (GFP_INIT_ALWAYS_ON ||
+					(gfp_flags & __GFP_ZERO))) {
 		for (i = 0; i < (1 << order); i++)
-			clear_highpage(page + i);
+			if (!INITMEM_FILL_BYTE(gfp_flags))
+				clear_highpage(page + i);
+			else
+				clear_highpage_pattern(page + i);
+	}
 
 	if (order && (gfp_flags & __GFP_COMP))
 		prep_compound_page(page, order);
diff --git a/mm/slab.c b/mm/slab.c
index 47a380a486ee..6d30db845dbd 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3331,8 +3331,11 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 
-	if (unlikely(flags & __GFP_ZERO) && ptr)
-		memset(ptr, 0, cachep->object_size);
+	if ((unlikely(flags & __GFP_ZERO) ||
+			(GFP_INIT_ALWAYS_ON && !cachep->ctor &&
+			!(cachep->flags & SLAB_TYPESAFE_BY_RCU))) &&
+			ptr)
+		memset(ptr, INITMEM_FILL_BYTE(flags), cachep->object_size);
 
 	slab_post_alloc_hook(cachep, flags, 1, &ptr);
 	return ptr;
@@ -3388,8 +3391,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
 
-	if (unlikely(flags & __GFP_ZERO) && objp)
-		memset(objp, 0, cachep->object_size);
+	if (((GFP_INIT_ALWAYS_ON && !cachep->ctor &&
+			!(cachep->flags & SLAB_TYPESAFE_BY_RCU)) ||
+			unlikely(flags & __GFP_ZERO)) && objp)
+		memset(objp, INITMEM_FILL_BYTE(flags), cachep->object_size);
 
 	slab_post_alloc_hook(cachep, flags, 1, &objp);
 	return objp;
@@ -3596,9 +3601,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
 
 	/* Clear memory outside IRQ disabled section */
-	if (unlikely(flags & __GFP_ZERO))
+	if ((GFP_INIT_ALWAYS_ON && !s->ctor &&
+			!(s->flags & SLAB_TYPESAFE_BY_RCU)) ||
+			unlikely(flags & __GFP_ZERO))
 		for (i = 0; i < size; i++)
-			memset(p[i], 0, s->object_size);
+			memset(p[i], INITMEM_FILL_BYTE(flags), s->object_size);
 
 	slab_post_alloc_hook(s, flags, size, p);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
diff --git a/mm/slub.c b/mm/slub.c
index d30ede89f4a6..f89c39b01578 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2750,8 +2750,10 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		stat(s, ALLOC_FASTPATH);
 	}
 
-	if (unlikely(gfpflags & __GFP_ZERO) && object)
-		memset(object, 0, s->object_size);
+	if (((GFP_INIT_ALWAYS_ON && !s->ctor &&
+			!(s->flags & SLAB_TYPESAFE_BY_RCU)) ||
+			unlikely(gfpflags & __GFP_ZERO)) && object)
+		memset(object, INITMEM_FILL_BYTE(gfpflags), s->object_size);
 
 	slab_post_alloc_hook(s, gfpflags, 1, &object);
 
@@ -3172,11 +3174,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	local_irq_enable();
 
 	/* Clear memory outside IRQ disabled fastpath loop */
-	if (unlikely(flags & __GFP_ZERO)) {
+	if ((GFP_INIT_ALWAYS_ON && !s->ctor &&
+			!(s->flags & SLAB_TYPESAFE_BY_RCU)) ||
+			unlikely(flags & __GFP_ZERO)) {
 		int j;
 
 		for (j = 0; j < i; j++)
-			memset(p[j], 0, s->object_size);
+			memset(p[j], INITMEM_FILL_BYTE(flags), s->object_size);
 	}
 
 	/* memcg and kmem_cache debug support */
diff --git a/security/Kconfig.initmem b/security/Kconfig.initmem
index cdad1e185b10..93bc6fe32536 100644
--- a/security/Kconfig.initmem
+++ b/security/Kconfig.initmem
@@ -1,5 +1,8 @@
 menu "Initialize all memory"
 
+config HAVE_ARCH_INIT_ALL_HEAP
+	bool
+
 config CC_HAS_AUTO_VAR_INIT
 	def_bool $(cc-option,-ftrivial-auto-var-init=pattern)
 
@@ -24,5 +27,17 @@ config INIT_ALL_STACK
 	  Initialize uninitialized stack data with a fixed pattern
 	  (0x00 in GCC, 0xAA in Clang).
 
+if HAVE_ARCH_INIT_ALL_HEAP
+
+config INIT_ALL_HEAP
+	bool "Initialize all heap"
+	depends on SLAB || SLUB
+	default y
+	help
+	  Initialize uninitialized pages and SL[AOU]B allocations with the
+	  0xAA pattern.
+
+endif # HAVE_ARCH_INIT_ALL_HEAP
+
 endif # INIT_ALL_MEMORY
 endmenu
-- 
2.21.0.392.gf8f6787159e-goog

* [PATCH 3/3] net: make sk_prot_alloc() work with CONFIG_INIT_ALL_HEAP
  2019-04-10 13:17 [PATCH v4 0/3] RFC: introduce CONFIG_INIT_ALL_MEMORY Alexander Potapenko
  2019-04-10 13:17 ` [PATCH v4 1/3] initmem: introduce CONFIG_INIT_ALL_MEMORY and CONFIG_INIT_ALL_STACK Alexander Potapenko
  2019-04-10 13:17 ` [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP Alexander Potapenko
@ 2019-04-10 13:17 ` Alexander Potapenko
  2 siblings, 0 replies; 8+ messages in thread
From: Alexander Potapenko @ 2019-04-10 13:17 UTC (permalink / raw)
  To: yamada.masahiro, jmorris, serge
  Cc: linux-security-module, linux-kbuild, ndesaulniers, kcc, dvyukov,
	keescook, sspatil, labbott, kernel-hardening

Rename sk_prot_clear_nulls() to sk_prot_clear() and add an extra
init_byte parameter that is passed to memset() when initializing
struct sock.
When CONFIG_INIT_ALL_HEAP is enabled, newly created struct sock
objects are initialized with the 0xAA pattern.

Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Sandeep Patil <sspatil@android.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Jann Horn <jannh@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-security-module@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
 include/net/sock.h | 8 ++++----
 net/core/sock.c    | 5 +++--
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 8de5ee258b93..a49c1f1c71c1 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1044,13 +1044,13 @@ struct module;
 
 /*
  * caches using SLAB_TYPESAFE_BY_RCU should let .next pointer from nulls nodes
- * un-modified. Special care is taken when initializing object to zero.
+ * un-modified. Special care is taken when initializing object.
  */
-static inline void sk_prot_clear_nulls(struct sock *sk, int size)
+static inline void sk_prot_clear(struct sock *sk, int size, int init_byte)
 {
 	if (offsetof(struct sock, sk_node.next) != 0)
-		memset(sk, 0, offsetof(struct sock, sk_node.next));
-	memset(&sk->sk_node.pprev, 0,
+		memset(sk, init_byte, offsetof(struct sock, sk_node.next));
+	memset(&sk->sk_node.pprev, init_byte,
 	       size - offsetof(struct sock, sk_node.pprev));
 }
 
diff --git a/net/core/sock.c b/net/core/sock.c
index 782343bb925b..1ad855e99512 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1601,8 +1601,9 @@ static struct sock *sk_prot_alloc(struct proto *prot, gfp_t priority,
 		sk = kmem_cache_alloc(slab, priority & ~__GFP_ZERO);
 		if (!sk)
 			return sk;
-		if (priority & __GFP_ZERO)
-			sk_prot_clear_nulls(sk, prot->obj_size);
+		if (GFP_INIT_ALWAYS_ON || (priority & __GFP_ZERO))
+			sk_prot_clear(sk, prot->obj_size,
+				      INITMEM_FILL_BYTE(priority));
 	} else
 		sk = kmalloc(prot->obj_size, priority);
 
-- 
2.21.0.392.gf8f6787159e-goog

* Re: [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP
  2019-04-10 13:17 ` [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP Alexander Potapenko
@ 2019-04-10 16:09   ` Kees Cook
  2019-04-11  8:39     ` Alexander Potapenko
  0 siblings, 1 reply; 8+ messages in thread
From: Kees Cook @ 2019-04-10 16:09 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Masahiro Yamada, James Morris, Serge E. Hallyn,
	linux-security-module, linux-kbuild, Nick Desaulniers,
	Kostya Serebryany, Dmitry Vyukov, Kees Cook, Sandeep Patil,
	Laura Abbott, Kernel Hardening

On Wed, Apr 10, 2019 at 6:18 AM Alexander Potapenko <glider@google.com> wrote:
>
> This config option adds the possibility to initialize newly allocated
> pages and heap objects with a 0xAA pattern.
> There's already a number of places where allocations are initialized
> based on the presence of __GFP_ZERO flag. We just change this code so
> that under CONFIG_INIT_ALL_HEAP these allocations are always initialized
> with either 0x00 or 0xAA depending on the __GFP_ZERO.

Why not just make __GFP_ZERO unconditional instead? This looks like
it'd be simpler and not need arch-specific implementation?

-- 
Kees Cook

* Re: [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP
  2019-04-10 16:09   ` Kees Cook
@ 2019-04-11  8:39     ` Alexander Potapenko
  2019-04-11 17:29       ` Kees Cook
  0 siblings, 1 reply; 8+ messages in thread
From: Alexander Potapenko @ 2019-04-11  8:39 UTC (permalink / raw)
  To: Kees Cook
  Cc: Masahiro Yamada, James Morris, Serge E. Hallyn,
	linux-security-module, linux-kbuild, Nick Desaulniers,
	Kostya Serebryany, Dmitry Vyukov, Sandeep Patil, Laura Abbott,
	Kernel Hardening

On Wed, Apr 10, 2019 at 6:09 PM Kees Cook <keescook@chromium.org> wrote:
>
> On Wed, Apr 10, 2019 at 6:18 AM Alexander Potapenko <glider@google.com> wrote:
> >
> > This config option adds the possibility to initialize newly allocated
> > pages and heap objects with a 0xAA pattern.
> > There's already a number of places where allocations are initialized
> > based on the presence of __GFP_ZERO flag. We just change this code so
> > that under CONFIG_INIT_ALL_HEAP these allocations are always initialized
> > with either 0x00 or 0xAA depending on the __GFP_ZERO.
>
> Why not just make __GFP_ZERO unconditional instead? This looks like
> it'd be simpler and not need arch-specific implementation?
Right, but it would mean we can only initialize with the 0x00 pattern.
I believe that for testing purposes a nonzero pattern is better,
because it will not only ensure that execution is deterministic, but
will also uncover logic bugs earlier (see the discussion at
https://reviews.llvm.org/D54604?id=174471).
For hardening purposes the pattern shouldn't matter much.

If you think arch-specific code is too much trouble, we could
implement clear_page_pattern() using memset() on every architecture,
but allow the user to choose between slow (0xAA) and production (0x00)
modes.
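
Something like the following generic fallback (sketch only, reusing
GFP_POISON_BYTE from this series):

	/* Arch-independent clear_page_pattern(): slower than the
	 * hand-written assembly variants, but works everywhere. */
	static inline void clear_page_pattern(void *page)
	{
		memset(page, GFP_POISON_BYTE, PAGE_SIZE);
	}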
> --
> Kees Cook



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

* Re: [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP
  2019-04-11  8:39     ` Alexander Potapenko
@ 2019-04-11 17:29       ` Kees Cook
  2019-04-11 17:40         ` Alexander Potapenko
  0 siblings, 1 reply; 8+ messages in thread
From: Kees Cook @ 2019-04-11 17:29 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Masahiro Yamada, James Morris, Serge E. Hallyn,
	linux-security-module, linux-kbuild, Nick Desaulniers,
	Kostya Serebryany, Dmitry Vyukov, Sandeep Patil, Laura Abbott,
	Kernel Hardening

On Thu, Apr 11, 2019 at 1:39 AM Alexander Potapenko <glider@google.com> wrote:
>
> On Wed, Apr 10, 2019 at 6:09 PM Kees Cook <keescook@chromium.org> wrote:
> >
> > On Wed, Apr 10, 2019 at 6:18 AM Alexander Potapenko <glider@google.com> wrote:
> > >
> > > This config option adds the possibility to initialize newly allocated
> > > pages and heap objects with a 0xAA pattern.
> > > There's already a number of places where allocations are initialized
> > > based on the presence of __GFP_ZERO flag. We just change this code so
> > > that under CONFIG_INIT_ALL_HEAP these allocations are always initialized
> > > with either 0x00 or 0xAA depending on the __GFP_ZERO.
> >
> > Why not just make __GFP_ZERO unconditional instead? This looks like
> > it'd be simpler and not need arch-specific implementation?
>
> Right, but it would mean we can only initialize with 0x00 pattern.
> I believe that for testing purposes a nonzero pattern is better,

Can it be implemented in a way that isn't arch-specific? I'd really
like to have a general solution that works immediately for all
architectures. (Can't everything just use a memset?)

> because it'll not only assure the execution is deterministic, but will
> also uncover logic bugs earlier (see the discussion at
> https://reviews.llvm.org/D54604?id=174471)
> For hardening purposes the pattern shouldn't matter much.

So, for hardening, it actually does matter but only in certain cases.
On 64-bit, a 0xAA... pointer will have the high bit set, so it'll land
in the non-canonical memory range, which is good. For 32-bit, 0xAA...
will be in userspace (TASK_SIZE is 0xC0000000). In the above URL I see
now that 32-bit pointer init gets 0x000000AA, which is good, but for
heap init, types aren't known. So perhaps use 0x000000AA for 32-bit
and 0xAA... for 64-bit heap init? (0xAA... has stronger properties
since there have been NULL page mapping bypass flaws in the (recent!)
past, so I could see keeping that for 64-bit instead of just using
0-init everywhere.)
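
To make the address-space argument concrete (hypothetical values, not
from the series):

	void *p = (void *)0xAAAAAAAAAAAAAAAAUL;	/* 64-bit: non-canonical,
						 * dereference faults */
	void *q = (void *)0xAAAAAAAAUL;		/* 32-bit: below TASK_SIZE,
						 * i.e. a userspace address */
	void *r = (void *)0x000000AAUL;		/* 32-bit pointer init: in
						 * the normally unmapped
						 * NULL page */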

> If you think arch-specific code is too much of a trouble, we could
> implement clear_page_pattern() using memset() on every architecture,
> but allow the user to choose between slow (0xAA) and production (0x00)
> modes.

How about 32-bit using 0x00 and 64-bit using 0xAA (with per-arch
speed-ups, and a generic "slow" version for all the other
architectures)?

-Kees

-- 
Kees Cook

* Re: [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP
  2019-04-11 17:29       ` Kees Cook
@ 2019-04-11 17:40         ` Alexander Potapenko
  0 siblings, 0 replies; 8+ messages in thread
From: Alexander Potapenko @ 2019-04-11 17:40 UTC (permalink / raw)
  To: Kees Cook
  Cc: Masahiro Yamada, James Morris, Serge E. Hallyn,
	linux-security-module, linux-kbuild, Nick Desaulniers,
	Kostya Serebryany, Dmitry Vyukov, Sandeep Patil, Laura Abbott,
	Kernel Hardening

On Thu, Apr 11, 2019 at 7:29 PM Kees Cook <keescook@chromium.org> wrote:
>
> On Thu, Apr 11, 2019 at 1:39 AM Alexander Potapenko <glider@google.com> wrote:
> >
> > On Wed, Apr 10, 2019 at 6:09 PM Kees Cook <keescook@chromium.org> wrote:
> > >
> > > On Wed, Apr 10, 2019 at 6:18 AM Alexander Potapenko <glider@google.com> wrote:
> > > >
> > > > This config option adds the possibility to initialize newly allocated
> > > > pages and heap objects with a 0xAA pattern.
> > > > There's already a number of places where allocations are initialized
> > > > based on the presence of __GFP_ZERO flag. We just change this code so
> > > > that under CONFIG_INIT_ALL_HEAP these allocations are always initialized
> > > > with either 0x00 or 0xAA depending on the __GFP_ZERO.
> > >
> > > Why not just make __GFP_ZERO unconditional instead? This looks like
> > > it'd be simpler and not need arch-specific implementation?
> >
> > Right, but it would mean we can only initialize with 0x00 pattern.
> > I believe that for testing purposes a nonzero pattern is better,
>
> Can it be implemented in a way that isn't arch-specific? I'd really
> like to have a general solution that works immediately for all
> architectures. (Can't everything just use a memset?)
>
> > because it'll not only assure the execution is deterministic, but will
> > also uncover logic bugs earlier (see the discussion at
> > https://reviews.llvm.org/D54604?id=174471)
> > For hardening purposes the pattern shouldn't matter much.
>
> So, for hardening, it actually does matter but only in certain cases.
> On 64-bit, a 0xAA... pointer will have the high bit set, so it'll land
> in the non-canonical memory range, which is good. For 32-bit, 0xAA...
> will be in userspace (TASK_SIZE is 0xC0000000). In the above URL I see
> now that 32-bit pointer init gets 0x000000AA, which is good, but for
> heap init, types aren't known. So perhaps use 0x000000AA for 32-bit
> and 0xAA... for 64-bit heap init? (0xAA... has stronger properties
> since there have been NULL page mapping bypass flaws in the (recent!)
> past, so I could see keeping that for 64-bit instead of just using
> 0-init everywhere.)
>
> > If you think arch-specific code is too much of a trouble, we could
> > implement clear_page_pattern() using memset() on every architecture,
> > but allow the user to choose between slow (0xAA) and production (0x00)
> > modes.
>
> How about 32-bit use 0x00, 64-bit use 0xAA (and provide per-arch
> speed-ups with a generic "slow" version for all the other
> architectures?)
It might be easier to start with a generic 0x00 version and add
improvements on top of that :)
I'll send an updated patch.
> -Kees
>
> --
> Kees Cook



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
