From: Nick Desaulniers <ndesaulniers@google.com>
To: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Russell King <rmk+kernel@armlinux.org.uk>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Olof Johansson <olof@lixom.net>, Arnd Bergmann <arnd@arndb.de>,
	Nicolas Saenz Julienne <nsaenzjulienne@suse.de>,
	Allison Randal <allison@lohutok.net>,
	Enrico Weigelt <info@metux.net>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Kate Stewart <kstewart@linuxfoundation.org>,
	Russell King <linux@armlinux.org.uk>,
	Stefan Agner <stefan@agner.ch>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vincent Whitchurch <vincent.whitchurch@axis.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2] ARM: add __always_inline to functions called from __get_user_check()
Date: Tue, 1 Oct 2019 10:03:50 -0700	[thread overview]
Message-ID: <CAKwvOd=NObDXDL3jz9ZX9wo4tn747peBJPTj0DXyLerixgL+wQ@mail.gmail.com> (raw)
In-Reply-To: <20191001083701.27207-1-yamada.masahiro@socionext.com>

On Tue, Oct 1, 2019 at 1:37 AM Masahiro Yamada
<yamada.masahiro@socionext.com> wrote:
>
> KernelCI reports that bcm2835_defconfig is no longer booting since
> commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING
> forcibly") (https://lkml.org/lkml/2019/9/26/825).
>
> I also received a regression report from Nicolas Saenz Julienne
> (https://lkml.org/lkml/2019/9/27/263).
>
> This problem cropped up with bcm2835_defconfig because it enables
> CONFIG_CC_OPTIMIZE_FOR_SIZE: with -Os, the compiler tends not to inline
> functions. I was able to reproduce it with other boards and defconfig
> files by manually enabling CONFIG_CC_OPTIMIZE_FOR_SIZE.
>
> The __get_user_check() macro specifically uses the r0, r1, and r2
> registers, so uaccess_save_and_enable() and uaccess_restore() must be
> inlined. Otherwise, those register assignments are dropped entirely,
> according to my analysis of the disassembly.
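
For illustration, a minimal, hypothetical sketch of the pattern described
above (this is not the actual __get_user_check() code; every name below is
made up, and it only builds for 32-bit ARM with the GNU C extensions):

/* Tiny stand-in for uaccess_save_and_enable(); in the real code this is
 * the function that needs __always_inline. */
static inline unsigned int save_helper(void)
{
	return 0;
}

/* Inputs are pinned to r0/r1 and the result to r2 via register-asm
 * variables, and the helper call sits between those assignments and the
 * asm that consumes them.  If save_helper() is emitted out of line, the
 * call itself uses r0-r3 per the AAPCS, so the pinned values can be
 * clobbered or the assignments dropped altogether. */
#define SKETCH_GET(p)							\
({									\
	register unsigned long __r0 asm("r0") = (unsigned long)(p);	\
	register unsigned long __r1 asm("r1") = 42;			\
	register unsigned long __r2 asm("r2");				\
	unsigned int __flags = save_helper();				\
	asm volatile("add %0, %1, %2"					\
		     : "=r" (__r2) : "r" (__r0), "r" (__r1));		\
	(void)__flags;							\
	__r2;								\
})

unsigned long sketch_demo(void *ptr)
{
	return SKETCH_GET(ptr);
}

Built with -Os and without __always_inline on the helper, the compiler may
emit a real call, and r0-r2 are then no longer guaranteed to hold the
pinned values by the time the asm executes.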
>
> Prior to commit 9012d011660e ("compiler: allow all arches to enable
> CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for
> inlining functions, except on x86.
>
> Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING,
> so __always_inline is now the only guaranteed way to force inlining.

No, the C preprocessor is the only guaranteed way of inlining.  I
preferred v1; if you're going to <strikethrough>play with
fire</strikethrough> write assembly, don't get burned.
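
To make the distinction concrete, a minimal, self-contained sketch in plain
userspace C (not kernel code; all names are hypothetical): __always_inline
is still an attribute on a function, whereas a macro is expanded by the
preprocessor before the compiler runs, so there is no function left that
could end up out of line.

#include <stdio.h>

/* Attribute route: inlining is requested on a function; with plain
 * 'inline' under -Os the compiler is free to decide otherwise. */
static inline __attribute__((always_inline)) int add_attr(int a, int b)
{
	return a + b;
}

/* Preprocessor route: textual expansion, so no out-of-line copy can exist. */
#define ADD_MACRO(a, b)	((a) + (b))

int main(void)
{
	printf("%d %d\n", add_attr(1, 2), ADD_MACRO(3, 4));
	return 0;
}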

>
> I also added __always_inline to the four functions in the call graph of
> the __get_user_check() macro.
>
> Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
> Reported-by: "kernelci.org bot" <bot@kernelci.org>
> Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
> ---
>
> Changes in v2:
>   - Use __always_inline instead of changing the call sites
>      (per Russell King)
>   - The previous submission is: https://lore.kernel.org/patchwork/patch/1132459/
>
>  arch/arm/include/asm/domain.h  | 8 ++++----
>  arch/arm/include/asm/uaccess.h | 4 ++--
>  2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/include/asm/domain.h b/arch/arm/include/asm/domain.h
> index 567dbede4785..f1d0a7807cd0 100644
> --- a/arch/arm/include/asm/domain.h
> +++ b/arch/arm/include/asm/domain.h
> @@ -82,7 +82,7 @@
>  #ifndef __ASSEMBLY__
>
>  #ifdef CONFIG_CPU_CP15_MMU
> -static inline unsigned int get_domain(void)
> +static __always_inline unsigned int get_domain(void)
>  {
>         unsigned int domain;
>
> @@ -94,7 +94,7 @@ static inline unsigned int get_domain(void)
>         return domain;
>  }
>
> -static inline void set_domain(unsigned val)
> +static __always_inline void set_domain(unsigned int val)
>  {
>         asm volatile(
>         "mcr    p15, 0, %0, c3, c0      @ set domain"
> @@ -102,12 +102,12 @@ static inline void set_domain(unsigned val)
>         isb();
>  }
>  #else
> -static inline unsigned int get_domain(void)
> +static __always_inline unsigned int get_domain(void)
>  {
>         return 0;
>  }
>
> -static inline void set_domain(unsigned val)
> +static __always_inline void set_domain(unsigned int val)
>  {
>  }
>  #endif
> diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> index 303248e5b990..98c6b91be4a8 100644
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -22,7 +22,7 @@
>   * perform such accesses (eg, via list poison values) which could then
>   * be exploited for priviledge escalation.
>   */
> -static inline unsigned int uaccess_save_and_enable(void)
> +static __always_inline unsigned int uaccess_save_and_enable(void)
>  {
>  #ifdef CONFIG_CPU_SW_DOMAIN_PAN
>         unsigned int old_domain = get_domain();
> @@ -37,7 +37,7 @@ static inline unsigned int uaccess_save_and_enable(void)
>  #endif
>  }
>
> -static inline void uaccess_restore(unsigned int flags)
> +static __always_inline void uaccess_restore(unsigned int flags)
>  {
>  #ifdef CONFIG_CPU_SW_DOMAIN_PAN
>         /* Restore the user access mask */
> --
> 2.17.1
>


-- 
Thanks,
~Nick Desaulniers

Thread overview:
2019-10-01  8:37 [PATCH v2] ARM: add __always_inline to functions called from __get_user_check() Masahiro Yamada
2019-10-01  9:03 ` Nicolas Saenz Julienne
2019-10-01 17:03 ` Nick Desaulniers [this message]
2019-10-02  8:24   ` Russell King - ARM Linux admin
2019-10-02 10:45     ` Masahiro Yamada
2019-10-02 13:01   ` Masahiro Yamada
