From: ard.biesheuvel@linaro.org (Ard Biesheuvel)
To: linux-arm-kernel@lists.infradead.org
Subject: [kernel-hardening] Re: [PATCH v2 3/7] arm64: Introduce uaccess_{disable, enable} functionality based on TTBR0_EL1
Date: Sun, 11 Sep 2016 14:55:12 +0100	[thread overview]
Message-ID: <CAKv+Gu8GMU=Lgh4awFLda-7K=orpg03D18kPDVVQEP6KzB5++g@mail.gmail.com> (raw)
In-Reply-To: <20160906104514.GC1425@leverpostej>

On 6 September 2016 at 11:45, Mark Rutland <mark.rutland@arm.com> wrote:
> On Tue, Sep 06, 2016 at 11:27:42AM +0100, Catalin Marinas wrote:
>> On Mon, Sep 05, 2016 at 06:20:38PM +0100, Mark Rutland wrote:
>> > On Fri, Sep 02, 2016 at 04:02:09PM +0100, Catalin Marinas wrote:
>> > > +static inline void uaccess_ttbr0_enable(void)
>> > > +{
>> > > + unsigned long flags;
>> > > +
>> > > + /*
>> > > +  * Disable interrupts to avoid preemption and potential saved
>> > > +  * TTBR0_EL1 updates between reading the variable and the MSR.
>> > > +  */
>> > > + local_irq_save(flags);
>> > > + write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
>> > > + isb();
>> > > + local_irq_restore(flags);
>> > > +}
>> >
>> > I don't follow what problem this actually protects us against. In the
>> > case of preemption everything should be saved+restored transparently, or
>> > things would go wrong as soon as we enable IRQs anyway.
>> >
>> > Is this a hold-over from a percpu approach rather than the
>> > current_thread_info() approach?
>>
>> If we get preempted between reading current_thread_info()->ttbr0 and
>> writing TTBR0_EL1, a series of context switches could lead to the update
>> of the ASID part of ttbr0. The actual MSR would store an old ASID in
>> TTBR0_EL1.
>
> Ah! Can you fold something about racing with an ASID update into the
> description?
>
>> > > +#else
>> > > +static inline void uaccess_ttbr0_disable(void)
>> > > +{
>> > > +}
>> > > +
>> > > +static inline void uaccess_ttbr0_enable(void)
>> > > +{
>> > > +}
>> > > +#endif
>> >
>> > I think that it's better to drop the ifdef and add:
>> >
>> >     if (!IS_ENABLED(CONFIG_ARM64_TTBR0_PAN))
>> >             return;
>> >
>> > ... at the start of each function. GCC should optimize the entire thing
>> > away when not used, but we'll get compiler coverage regardless, and
>> > therefore less breakage. All the symbols we required should exist
>> > regardless.
>>
>> The reason for this is that thread_info.ttbr0 is conditionally defined.
>> I don't think the compiler would ignore it.
>
> Good point; I missed that.
>
> [...]
>
>> > How about something like:
>> >
>> >     .macro alternative_endif_else_nop
>> >     alternative_else
>> >     .rept ((662b-661b) / 4)
>> >            nop
>> >     .endr
>> >     alternative_endif
>> >     .endm
>> >
>> > So for the above we could have:
>> >
>> >     alternative_if_not ARM64_HAS_PAN
>> >             save_and_disable_irq \tmp2
>> >             uaccess_ttbr0_enable \tmp1
>> >             restore_irq \tmp2
>> >     alternative_endif_else_nop
>> >
>> > I'll see about spinning a patch, or discovering why that happens to be
>> > broken.
>>
>> This looks better. Minor comment, I would actually name the ending
>> statement alternative_else_nop_endif to match the order in which you'd
>> normally write them.
>
> Completely agreed. I already made this change locally, immediately after
> sending the suggestion. :)
>
>> > >    * tables again to remove any speculatively loaded cache lines.
>> > >    */
>> > >   mov     x0, x25
>> > > - add     x1, x26, #SWAPPER_DIR_SIZE
>> > > + add     x1, x26, #SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE
>> > >   dmb     sy
>> > >   bl      __inval_cache_range
>> > >
>> > > diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
>> > > index 659963d40bb4..fe393ccf9352 100644
>> > > --- a/arch/arm64/kernel/vmlinux.lds.S
>> > > +++ b/arch/arm64/kernel/vmlinux.lds.S
>> > > @@ -196,6 +196,11 @@ SECTIONS
>> > >   swapper_pg_dir = .;
>> > >   . += SWAPPER_DIR_SIZE;
>> > >
>> > > +#ifdef CONFIG_ARM64_TTBR0_PAN
>> > > + reserved_ttbr0 = .;
>> > > + . += PAGE_SIZE;
>> > > +#endif
>> >
>> > Surely RESERVED_TTBR0_SIZE, as elsewhere?
>>
>> I'll try to move it somewhere where it can be included in vmlinux.lds.S
>> (I can probably include cpufeature.h directly).
>

Do we really need another zero page? The ordinary zero page is already
statically allocated these days, so we could simply move it between
idmap_pg_dir[] and swapper_pg_dir[], and get all the changes in the
early boot code for free (given that it covers the range between the
start of idmap_pg_dir[] and the end of swapper_pg_dir[]).

That way, we could refer to __pa(empty_zero_page) anywhere by reading
ttbr1_el1 and subtracting PAGE_SIZE.
