From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756492AbdDGJqa (ORCPT ); Fri, 7 Apr 2017 05:46:30 -0400
Received: from Galois.linutronix.de ([146.0.238.70]:41123 "EHLO
	Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755968AbdDGJqV (ORCPT );
	Fri, 7 Apr 2017 05:46:21 -0400
Date: Fri, 7 Apr 2017 11:46:01 +0200 (CEST)
From: Thomas Gleixner
To: Mathias Krause
cc: Andy Lutomirski, Kees Cook, Andy Lutomirski,
	"kernel-hardening@lists.openwall.com", Mark Rutland, Hoeun Ryu,
	PaX Team, Emese Revfy, Russell King, X86 ML,
	"linux-kernel@vger.kernel.org",
	"linux-arm-kernel@lists.infradead.org", Peter Zijlstra
Subject: Re: [kernel-hardening] Re: [RFC v2][PATCH 04/11] x86: Implement __arch_rare_write_begin/unmap()
In-Reply-To:
Message-ID:
References: <1490811363-93944-1-git-send-email-keescook@chromium.org> <1490811363-93944-5-git-send-email-keescook@chromium.org>
User-Agent: Alpine 2.20 (DEB 67 2015-01-07)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 7 Apr 2017, Mathias Krause wrote:
> On 6 April 2017 at 17:59, Andy Lutomirski wrote:
> > On Wed, Apr 5, 2017 at 5:14 PM, Kees Cook wrote:
> >> static __always_inline rare_write_begin(void)
> >> {
> >>         preempt_disable();
> >>         local_irq_disable();
> >>         barrier();
> >>         __arch_rare_write_begin();
> >>         barrier();
> >> }
> >
> > Looks good, except you don't need preempt_disable().
> > local_irq_disable() also disables preemption. You might need to use
> > local_irq_save(), though, depending on whether any callers already
> > have IRQs off.
>
> Well, doesn't look good to me. NMIs will still be able to interrupt
> this code and will run with CR0.WP = 0.
>
> Shouldn't you instead question yourself why PaX can do it "just" with
> preempt_disable() instead?!

That's silly.
Just because PaX does it, doesn't mean it's correct. To be honest, playing
games with the CR0.WP bit is outright stupid to begin with.

Whether protected by preempt_disable or local_irq_disable, to make that
work it needs CR0 handling in the exception entry/exit at the lowest
level. And that's just a nightmare maintenance-wise as it's prone to be
broken over time.

Aside of that it's pointless overhead for the normal case.

The proper solution is:

	write_rare(ptr, val)
	{
		mp = map_shadow_rw(ptr);
		*mp = val;
		unmap_shadow_rw(mp);
	}

map_shadow_rw() is essentially the same thing as we do in the highmem
case where the kernel creates a shadow mapping of the user space pages
via kmap_atomic().

It's valid (at least on x86) to have a shadow map with the same page
attributes but write enabled. That does not require any fixups of CR0
and just works.

Thanks,

	tglx
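[Editor's note: the shadow-map approach described above can be illustrated from userspace. The sketch below is an analogy, not the kernel implementation: the same backing page is mapped once read-only (standing in for the write-protected kernel mapping) and once writable (the shadow that map_shadow_rw() would create), so the store goes through the temporary alias and the primary mapping's protections are never touched, the analogue of leaving CR0.WP alone. The function name rare_write_demo and the temp-file backing are assumptions made for the sketch.]

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Userspace sketch of write_rare(): write through a temporary writable
 * alias (the "shadow") of a page whose primary mapping stays read-only.
 * Returns the value subsequently observed through the read-only view. */
static int rare_write_demo(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char path[] = "/tmp/shadowXXXXXX";
	int fd = mkstemp(path);			/* backing "physical" page */
	if (fd < 0 || ftruncate(fd, page) < 0)
		return -1;
	unlink(path);

	/* Primary view: read-only, like the write-protected mapping. */
	int *ro = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, 0);
	/* map_shadow_rw(): writable alias of the same backing page. */
	int *rw = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (ro == MAP_FAILED || rw == MAP_FAILED)
		return -1;

	rw[0] = 42;				/* *mp = val;            */
	munmap(rw, page);			/* unmap_shadow_rw(mp);  */

	int seen = ro[0];	/* MAP_SHARED: RO view sees the store */
	munmap(ro, page);
	close(fd);
	return seen;
}
```

Because both mappings are MAP_SHARED views of one object, the store through the writable alias is visible through the read-only view, with no mprotect() (or, in the kernel case, CR0) fixup anywhere.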