From: Kees Cook <keescook@chromium.org>
To: Andy Lutomirski <luto@kernel.org>
Cc: "kernel-hardening@lists.openwall.com" 
	<kernel-hardening@lists.openwall.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Hoeun Ryu <hoeun.ryu@gmail.com>, PaX Team <pageexec@freemail.hu>,
	Emese Revfy <re.emese@gmail.com>,
	Russell King <linux@armlinux.org.uk>, X86 ML <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [RFC v2][PATCH 04/11] x86: Implement __arch_rare_write_begin/unmap()
Date: Wed, 5 Apr 2017 17:14:09 -0700	[thread overview]
Message-ID: <CAGXu5jLENkemQiL6bcL5=qsesi-2M=1jYhS7SGBpBG64ER2CsA@mail.gmail.com> (raw)
In-Reply-To: <CALCETrW7nN8Ja=r9k4ANpekKovC_wsYrGrge1WNoGsAjZZvqHQ@mail.gmail.com>

On Wed, Apr 5, 2017 at 4:57 PM, Andy Lutomirski <luto@kernel.org> wrote:
> On Wed, Mar 29, 2017 at 6:41 PM, Kees Cook <keescook@chromium.org> wrote:
>> On Wed, Mar 29, 2017 at 3:38 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>>> On Wed, Mar 29, 2017 at 11:15 AM, Kees Cook <keescook@chromium.org> wrote:
>>>> Based on PaX's x86 pax_{open,close}_kernel() implementation, this
>>>> allows HAVE_ARCH_RARE_WRITE to work on x86.
>>>>
>>>
>>>> +
>>>> +static __always_inline unsigned long __arch_rare_write_begin(void)
>>>> +{
>>>> +       unsigned long cr0;
>>>> +
>>>> +       preempt_disable();
>>>
>>> This looks wrong.  DEBUG_LOCKS_WARN_ON(!irqs_disabled()) would work,
>>> as would local_irq_disable().  There's no way that just disabling
>>> preemption is enough.
>>>
>>> (Also, how does this interact with perf nmis?)
>>
>> Do you mean preempt_disable() isn't strong enough here? I'm open to
>> suggestions. The goal would be to make sure everything between _begin
>> and _end gets executed without interruption...
>>
>
> Sorry for the very slow response.
>
> preempt_disable() isn't strong enough to prevent interrupts, and an
> interrupt here would run with WP off, causing unknown havoc.  I tend
> to think that the caller should be responsible for turning off
> interrupts.

So, something like:

Top-level functions:

static __always_inline void rare_write_begin(void)
{
    preempt_disable();
    local_irq_disable();
    /* Compiler barriers keep the protected write from being
     * reordered outside the WP-disabled window. */
    barrier();
    __arch_rare_write_begin();
    barrier();
}

static __always_inline void rare_write_end(void)
{
    barrier();
    __arch_rare_write_end();
    barrier();
    local_irq_enable();
    preempt_enable_no_resched();
}
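
(If a caller could already be running with interrupts disabled, an
irqsave-style variant may be the safer shape; this is just a sketch on
top of the same arch hooks, not something from the series itself:)

static __always_inline unsigned long rare_write_begin_irqsave(void)
{
    unsigned long flags;

    preempt_disable();
    local_irq_save(flags);
    barrier();
    __arch_rare_write_begin();
    barrier();
    return flags;
}

static __always_inline void rare_write_end_irqrestore(unsigned long flags)
{
    barrier();
    __arch_rare_write_end();
    barrier();
    local_irq_restore(flags);
    preempt_enable_no_resched();
}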

x86-specific helpers:

static __always_inline unsigned long __arch_rare_write_begin(void)
{
       unsigned long cr0;

       /* Flipping WP must yield a value with WP clear, i.e. WP was set. */
       cr0 = read_cr0() ^ X86_CR0_WP;
       BUG_ON(cr0 & X86_CR0_WP);
       write_cr0(cr0);
       return cr0 ^ X86_CR0_WP;
}

static __always_inline unsigned long __arch_rare_write_end(void)
{
       unsigned long cr0;

       /* Flipping WP must yield a value with WP set, i.e. WP was clear. */
       cr0 = read_cr0() ^ X86_CR0_WP;
       BUG_ON(!(cr0 & X86_CR0_WP));
       write_cr0(cr0);
       return cr0 ^ X86_CR0_WP;
}
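
For reference, the caller side would then look roughly like this (purely
illustrative, not the actual rare_write() wrapper from the series; the
target would normally live in a rarely-written section, so the cast only
strips const for the duration of the open window):

static void example_rare_write_ulong(const unsigned long *target,
                                     unsigned long val)
{
       rare_write_begin();
       /* Write through a non-const alias while CR0.WP is clear. */
       *(unsigned long *)target = val;
       rare_write_end();
}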

I can give it a spin...

-Kees

-- 
Kees Cook
Pixel Security
