From: Sami Tolvanen <samitolvanen@google.com>
To: Masahiro Yamada <masahiroy@kernel.org>,
	Will Deacon <will@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Kees Cook <keescook@chromium.org>,
	Nick Desaulniers <ndesaulniers@google.com>,
	clang-built-linux@googlegroups.com,
	kernel-hardening@lists.openwall.com, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-pci@vger.kernel.org, x86@kernel.org,
	Arvind Sankar <nivedita@alum.mit.edu>
Subject: [PATCH v4 02/29] x86/asm: Replace __force_order with memory clobber
Date: Tue, 29 Sep 2020 14:46:04 -0700
Message-ID: <20200929214631.3516445-3-samitolvanen@google.com>
In-Reply-To: <20200929214631.3516445-1-samitolvanen@google.com>

From: Arvind Sankar <nivedita@alum.mit.edu>

The CRn accessor functions use __force_order as a dummy operand to
prevent the compiler from reordering CRn reads/writes with respect to
each other.

The fact that the asm is volatile should be enough to prevent this:
volatile asm statements should be executed in program order. However,
GCC 4.9.x and 5.x have a bug that might result in reordering. This was
fixed in GCC 8.1, 7.3 and 6.5; versions prior to these fixes may
reorder volatile asm statements with respect to each other.

There are some issues with __force_order as implemented:
- It is used only as an input operand for the write functions, and hence
  doesn't do anything additional to prevent reordering writes (see the
  sketch below).
- It allows memory accesses to be cached/reordered across write
  functions, but CRn writes affect the semantics of memory accesses, so
  this could be dangerous.
- __force_order is not actually defined in the kernel proper, but the
  LLVM toolchain can in some cases require a definition. LLVM (as well
  as GCC 4.9) requires it for PIE code, which is why the compressed
  kernel has a definition. In addition, the clang integrated assembler
  may consider the address of __force_order to be significant, resulting
  in a reference that requires a definition.
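
To make the first two points concrete, here is a condensed sketch of
the current pattern (the helper name is hypothetical; the real
accessors live in arch/x86/include/asm/special_insns.h):

  extern unsigned long __force_order;

  static inline void write_cr3_old(unsigned long val)
  {
          /*
           * __force_order is only a dummy "m" input: it creates no
           * dependency between two write functions, and it does not stop
           * the compiler from caching or moving unrelated loads/stores
           * across the CR3 write.
           */
          asm volatile("mov %0,%%cr3" : : "r" (val), "m" (__force_order));
  }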

Fix this by:
- Using a memory clobber for the write functions to additionally prevent
  caching/reordering memory accesses across CRn writes.
- Using a dummy input operand with an arbitrary constant address for the
  read functions, instead of a global variable. This will prevent reads
  from being reordered across writes, while allowing memory loads to be
  cached/reordered across CRn reads, which should be safe (see the
  sketch below).
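
Putting the two changes together, the new accessors follow roughly this
pattern (condensed from the diff below; the *_new function names are
only for illustration, the real ones are the native_* accessors):

  /* Dummy operand at an arbitrary, never-dereferenced address. */
  #define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)

  static inline unsigned long read_cr3_new(void)
  {
          unsigned long val;

          /*
           * The dummy memory input orders this read against the writes
           * below (which clobber memory), while ordinary loads may still
           * be cached/reordered across it.
           */
          asm volatile("mov %%cr3,%0" : "=r" (val) : __FORCE_ORDER);
          return val;
  }

  static inline void write_cr3_new(unsigned long val)
  {
          /* The "memory" clobber also orders all other memory accesses. */
          asm volatile("mov %0,%%cr3" : : "r" (val) : "memory");
  }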

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602
Link: https://lore.kernel.org/lkml/20200527135329.1172644-1-arnd@arndb.de/
---
 arch/x86/boot/compressed/pgtable_64.c |  9 ---------
 arch/x86/include/asm/special_insns.h  | 28 ++++++++++++++-------------
 arch/x86/kernel/cpu/common.c          |  4 ++--
 3 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
index c8862696a47b..7d0394f4ebf9 100644
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -5,15 +5,6 @@
 #include "pgtable.h"
 #include "../string.h"
 
-/*
- * __force_order is used by special_insns.h asm code to force instruction
- * serialization.
- *
- * It is not referenced from the code, but GCC < 5 with -fPIE would fail
- * due to an undefined symbol. Define it to make these ancient GCCs work.
- */
-unsigned long __force_order;
-
 #define BIOS_START_MIN		0x20000U	/* 128K, less than this is insane */
 #define BIOS_START_MAX		0x9f000U	/* 640K, absolute maximum */
 
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 59a3e13204c3..d6e3bb9363d2 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -11,45 +11,47 @@
 #include <linux/jump_label.h>
 
 /*
- * Volatile isn't enough to prevent the compiler from reordering the
- * read/write functions for the control registers and messing everything up.
- * A memory clobber would solve the problem, but would prevent reordering of
- * all loads stores around it, which can hurt performance. Solution is to
- * use a variable and mimic reads and writes to it to enforce serialization
+ * The compiler should not reorder volatile asm statements with respect to each
+ * other: they should execute in program order. However GCC 4.9.x and 5.x have
+ * a bug (which was fixed in 8.1, 7.3 and 6.5) where they might reorder
+ * volatile asm. The write functions are not affected since they have memory
+ * clobbers preventing reordering. To prevent reads from being reordered with
+ * respect to writes, use a dummy memory operand.
  */
-extern unsigned long __force_order;
+
+#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)
 
 void native_write_cr0(unsigned long val);
 
 static inline unsigned long native_read_cr0(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static __always_inline unsigned long native_read_cr2(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr2,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static __always_inline void native_write_cr2(unsigned long val)
 {
-	asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
+	asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
 }
 
 static inline unsigned long __native_read_cr3(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static inline void native_write_cr3(unsigned long val)
 {
-	asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
+	asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
 }
 
 static inline unsigned long native_read_cr4(void)
@@ -64,10 +66,10 @@ static inline unsigned long native_read_cr4(void)
 	asm volatile("1: mov %%cr4, %0\n"
 		     "2:\n"
 		     _ASM_EXTABLE(1b, 2b)
-		     : "=r" (val), "=m" (__force_order) : "0" (0));
+		     : "=r" (val) : "0" (0), __FORCE_ORDER);
 #else
 	/* CR4 always exists on x86_64. */
-	asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr4,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 #endif
 	return val;
 }
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c5d6f17d9b9d..178499f90366 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -359,7 +359,7 @@ void native_write_cr0(unsigned long val)
 	unsigned long bits_missing = 0;
 
 set_register:
-	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
+	asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
 
 	if (static_branch_likely(&cr_pinning)) {
 		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
@@ -378,7 +378,7 @@ void native_write_cr4(unsigned long val)
 	unsigned long bits_changed = 0;
 
 set_register:
-	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
+	asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
 
 	if (static_branch_likely(&cr_pinning)) {
 		if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
-- 
2.28.0.709.gb0816b6eb0-goog

