From: Kees Cook <keescook@chromium.org>
To: Thomas Gleixner <tglx@linutronix.de>,
	Borislav Petkov <bp@alien8.de>, Ingo Molnar <mingo@redhat.com>
Cc: Arvind Sankar <nivedita@alum.mit.edu>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>,
	Sedat Dilek <sedat.dilek@gmail.com>,
	Segher Boessenkool <segher@kernel.crashing.org>,
	Nick Desaulniers <ndesaulniers@google.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Arnd Bergmann <arnd@arndb.de>,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
	<x86@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juergen Gross <jgross@suse.com>,
	Andy Lutomirski <luto@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	LKML <linux-kernel@vger.kernel.org>,
	clang-built-linux <clang-built-linux@googlegroups.com>,
	Will Deacon <will@kernel.org>,
	nadav.amit@gmail.com,
	Nathan Chancellor <natechancellor@gmail.com>,
	Sami Tolvanen <samitolvanen@google.com>
Subject: Re: [PATCH v3] x86/asm: Replace __force_order with memory clobber
Date: Wed, 30 Sep 2020 13:50:26 -0700	[thread overview]
Message-ID: <202009301347.EB0EFDD80@keescook> (raw)
In-Reply-To: <20200902232152.3709896-1-nivedita@alum.mit.edu>

*thread ping*

Can an x86 maintainer please take this for -next? Getting this landed
for v5.10 would be very helpful! :)

-Kees

On Wed, Sep 02, 2020 at 07:21:52PM -0400, Arvind Sankar wrote:
> The CRn accessor functions use __force_order as a dummy operand to
> prevent the compiler from reordering CRn reads/writes with respect to
> each other.
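> 
> For illustration, the pre-patch pattern looks roughly like this (a
> minimal sketch based on the code removed in the diff below; the
> sketch_* names are made up):
> 
> 	extern unsigned long __force_order;
> 
> 	static inline unsigned long sketch_read_cr3(void)
> 	{
> 		unsigned long val;
> 		/* "=m" (__force_order) pretends to write the dummy variable */
> 		asm volatile("mov %%cr3,%0" : "=r" (val), "=m" (__force_order));
> 		return val;
> 	}
> 
> 	static inline void sketch_write_cr3(unsigned long val)
> 	{
> 		/* "m" (__force_order) pretends to read it back, creating an
> 		 * artificial dependency on the reads above */
> 		asm volatile("mov %0,%%cr3" : : "r" (val), "m" (__force_order));
> 	}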
> 
> The fact that the asm is volatile should be enough to prevent this:
> volatile asm statements should be executed in program order. However, GCC
> 4.9.x and 5.x have a bug that might result in reordering. The bug was
> fixed in GCC 8.1, 7.3 and 6.5; versions prior to those, including 4.9.x
> and 5.x, may reorder volatile asm statements with respect to each other.
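> 
> Concretely (a hypothetical sketch, not code from the tree), nothing
> ties these two statements together, so an affected compiler could swap
> them even though both are volatile:
> 
> 	unsigned long cr0, cr3;
> 
> 	asm volatile("mov %%cr0,%0" : "=r" (cr0));
> 	asm volatile("mov %%cr3,%0" : "=r" (cr3));	/* may move above the CR0 read */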
> 
> There are some issues with __force_order as implemented:
> - It is used only as an input operand for the write functions, and hence
>   doesn't do anything additional to prevent reordering writes.
> - It allows memory accesses to be cached/reordered across write
>   functions, but CRn writes affect the semantics of memory accesses, so
>   this could be dangerous.
> - __force_order is not actually defined in the kernel proper, but the
>   LLVM toolchain can in some cases require a definition: LLVM (as well
>   as GCC 4.9) requires it for PIE code, which is why the compressed
>   kernel has a definition. In addition, the clang integrated assembler
>   may consider the address of __force_order to be significant, which
>   results in a reference that requires a definition.
> 
> Fix this by:
> - Using a memory clobber for the write functions to additionally prevent
>   caching/reordering memory accesses across CRn writes.
> - Using a dummy input operand with an arbitrary constant address for the
>   read functions, instead of a global variable. This will prevent reads
>   from being reordered across writes, while allowing memory loads to be
>   cached/reordered across CRn reads, which should be safe; see the
>   sketch below.
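> 
> As a sketch of the intended effect (hypothetical caller, not part of
> the patch): a CR3 write switches page tables, so the "memory" clobber
> must prevent loads from being cached across it:
> 
> 	int sketch_switch(unsigned long new_cr3, int *p)
> 	{
> 		int before = *p;		/* may be read through the old tables */
> 		native_write_cr3(new_cr3);	/* "memory" clobber: compiler must */
> 		int after = *p;			/* reload *p here, not reuse 'before' */
> 		return before + after;
> 	}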
> 
> Tested-by: Nathan Chancellor <natechancellor@gmail.com>
> Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
> Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602
> Link: https://lore.kernel.org/lkml/20200527135329.1172644-1-arnd@arndb.de/
> ---
> Changes from v2:
> - Clarify commit log and source comment some more
> Changes from v1:
> - Add lore link to email thread and mention state of 5.x/4.9.x in commit log
> 
>  arch/x86/boot/compressed/pgtable_64.c |  9 ---------
>  arch/x86/include/asm/special_insns.h  | 28 ++++++++++++++-------------
>  arch/x86/kernel/cpu/common.c          |  4 ++--
>  3 files changed, 17 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
> index c8862696a47b..7d0394f4ebf9 100644
> --- a/arch/x86/boot/compressed/pgtable_64.c
> +++ b/arch/x86/boot/compressed/pgtable_64.c
> @@ -5,15 +5,6 @@
>  #include "pgtable.h"
>  #include "../string.h"
>  
> -/*
> - * __force_order is used by special_insns.h asm code to force instruction
> - * serialization.
> - *
> - * It is not referenced from the code, but GCC < 5 with -fPIE would fail
> - * due to an undefined symbol. Define it to make these ancient GCCs work.
> - */
> -unsigned long __force_order;
> -
>  #define BIOS_START_MIN		0x20000U	/* 128K, less than this is insane */
>  #define BIOS_START_MAX		0x9f000U	/* 640K, absolute maximum */
>  
> diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
> index 59a3e13204c3..d6e3bb9363d2 100644
> --- a/arch/x86/include/asm/special_insns.h
> +++ b/arch/x86/include/asm/special_insns.h
> @@ -11,45 +11,47 @@
>  #include <linux/jump_label.h>
>  
>  /*
> - * Volatile isn't enough to prevent the compiler from reordering the
> - * read/write functions for the control registers and messing everything up.
> - * A memory clobber would solve the problem, but would prevent reordering of
> - * all loads stores around it, which can hurt performance. Solution is to
> - * use a variable and mimic reads and writes to it to enforce serialization
> + * The compiler should not reorder volatile asm statements with respect to each
> + * other: they should execute in program order. However GCC 4.9.x and 5.x have
> + * a bug (which was fixed in 8.1, 7.3 and 6.5) where they might reorder
> + * volatile asm. The write functions are not affected since they have memory
> + * clobbers preventing reordering. To prevent reads from being reordered with
> + * respect to writes, use a dummy memory operand.
>   */
> -extern unsigned long __force_order;
> +
> +#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)
>  
>  void native_write_cr0(unsigned long val);
>  
>  static inline unsigned long native_read_cr0(void)
>  {
>  	unsigned long val;
> -	asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
> +	asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
>  	return val;
>  }
>  
>  static __always_inline unsigned long native_read_cr2(void)
>  {
>  	unsigned long val;
> -	asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
> +	asm volatile("mov %%cr2,%0\n\t" : "=r" (val) : __FORCE_ORDER);
>  	return val;
>  }
>  
>  static __always_inline void native_write_cr2(unsigned long val)
>  {
> -	asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
> +	asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
>  }
>  
>  static inline unsigned long __native_read_cr3(void)
>  {
>  	unsigned long val;
> -	asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
> +	asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
>  	return val;
>  }
>  
>  static inline void native_write_cr3(unsigned long val)
>  {
> -	asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
> +	asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
>  }
>  
>  static inline unsigned long native_read_cr4(void)
> @@ -64,10 +66,10 @@ static inline unsigned long native_read_cr4(void)
>  	asm volatile("1: mov %%cr4, %0\n"
>  		     "2:\n"
>  		     _ASM_EXTABLE(1b, 2b)
> -		     : "=r" (val), "=m" (__force_order) : "0" (0));
> +		     : "=r" (val) : "0" (0), __FORCE_ORDER);
>  #else
>  	/* CR4 always exists on x86_64. */
> -	asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
> +	asm volatile("mov %%cr4,%0\n\t" : "=r" (val) : __FORCE_ORDER);
>  #endif
>  	return val;
>  }
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index c5d6f17d9b9d..178499f90366 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -359,7 +359,7 @@ void native_write_cr0(unsigned long val)
>  	unsigned long bits_missing = 0;
>  
>  set_register:
> -	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
> +	asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
>  
>  	if (static_branch_likely(&cr_pinning)) {
>  		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
> @@ -378,7 +378,7 @@ void native_write_cr4(unsigned long val)
>  	unsigned long bits_changed = 0;
>  
>  set_register:
> -	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
> +	asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
>  
>  	if (static_branch_likely(&cr_pinning)) {
>  		if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
> -- 
> 2.26.2
> 

-- 
Kees Cook
