Date: Sun, 28 Jan 2018 10:02:50 +0100
From: Ingo Molnar
To: Dan Williams
Cc: tglx@linutronix.de, linux-arch@vger.kernel.org,
    kernel-hardening@lists.openwall.com, gregkh@linuxfoundation.org,
    x86@kernel.org, Ingo Molnar, "H. Peter Anvin",
    torvalds@linux-foundation.org, alan@linux.intel.com
Subject: Re: [PATCH v5 03/12] x86: implement array_idx_mask
Message-ID: <20180128090250.3gxq2uoebiwh4who@gmail.com>
References: <151703971300.26578.1185595719337719486.stgit@dwillia2-desk3.amr.corp.intel.com>
 <151703972912.26578.6792656143278523491.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <151703972912.26578.6792656143278523491.stgit@dwillia2-desk3.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: NeoMutt/20170609 (1.8.3)

* Dan Williams wrote:

> 'array_idx' uses a mask to sanitize user controllable array indexes,
> i.e. generate a 0 mask if idx >= sz, and a ~0 mask otherwise. While the
> default array_idx_mask handles the carry-bit from the (index - size)
> result in software. The x86 'array_idx_mask' does the same, but the
> carry-bit is handled in the processor CF flag without conditional
> instructions in the control flow.

Same style comments apply as for patch 02.

> Suggested-by: Linus Torvalds
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: x86@kernel.org
> Signed-off-by: Dan Williams
> ---
>  arch/x86/include/asm/barrier.h |   22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
> index 01727dbc294a..30419b674ebd 100644
> --- a/arch/x86/include/asm/barrier.h
> +++ b/arch/x86/include/asm/barrier.h
> @@ -24,6 +24,28 @@
>  #define wmb() asm volatile("sfence" ::: "memory")
>  #endif
>
> +/**
> + * array_idx_mask - generate a mask for array_idx() that is ~0UL when
> + * the bounds check succeeds and 0 otherwise
> + *
> + * mask = 0 - (idx < sz);
> + */
> +#define array_idx_mask array_idx_mask
> +static inline unsigned long array_idx_mask(unsigned long idx, unsigned long sz)

Please put an extra newline between definitions (even if they are
closely related, as these are).

> +{
> +	unsigned long mask;
> +
> +#ifdef CONFIG_X86_32
> +	asm ("cmpl %1,%2; sbbl %0,%0;"
> +#else
> +	asm ("cmpq %1,%2; sbbq %0,%0;"
> +#endif

Wouldn't this suffice:

	asm ("cmp %1,%2; sbb %0,%0;"

... as the word width should automatically be 32 bits on 32-bit kernels
and 64 bits on 64-bit kernels?
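For concreteness, a minimal sketch of how the suffix-less variant could
look; the operand constraints and "volatile"/"cc" annotations here are
chosen for illustration and are not copied from the posted diff:

        #define array_idx_mask array_idx_mask

        static inline unsigned long array_idx_mask(unsigned long idx, unsigned long sz)
        {
                unsigned long mask;

                /*
                 * cmp sets CF when idx < sz (idx - sz borrows);
                 * sbb %0,%0 then computes reg - reg - CF, i.e. 0 - CF,
                 * yielding ~0UL on a borrow and 0 otherwise.  With
                 * register operands the assembler infers the 32- vs
                 * 64-bit operand size, so no l/q suffix is needed.
                 */
                asm volatile ("cmp %1,%2; sbb %0,%0;"
                                : "=r" (mask)
                                : "g" (sz), "r" (idx)
                                : "cc");

                return mask;
        }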
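For comparison, the "carry-bit handled in software" behaviour the
changelog describes can be pictured with a plain-C sketch along these
lines; this is illustrative only, not the actual generic helper from
patch 02, and it assumes idx and sz stay below LONG_MAX:

        static inline unsigned long array_idx_mask(unsigned long idx, unsigned long sz)
        {
                /*
                 * (idx - sz) underflows when idx < sz; the sign bit of
                 * the signed result plays the role of the carry, and the
                 * arithmetic right shift smears it across the word:
                 * ~0UL when idx < sz, 0 otherwise.
                 */
                return (unsigned long)((long)(idx - sz) >> (BITS_PER_LONG - 1));
        }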
Thanks,

	Ingo