From: AKASHI Takahiro <takahiro.akashi@linaro.org>
To: Mark Rutland <mark.rutland@arm.com>
Cc: kernel-hardening@lists.openwall.com, "Reshetova,
	Elena" <elena.reshetova@intel.com>,
	David Windsor <dave@progbits.org>,
	Kees Cook <keescook@chromium.org>,
	Hans Liljestrand <ishkamiel@gmail.com>,
	Colin Vidal <colin@cvidal.org>
Subject: Re: [kernel-hardening] [RFC 2/2] arm: implementation for HARDENED_ATOMIC
Date: Fri, 28 Oct 2016 14:18:57 +0900
Message-ID: <20161028051856.GE19531@linaro.org>
In-Reply-To: <20161027132235.GA30193@leverpostej>

On Thu, Oct 27, 2016 at 02:24:36PM +0100, Mark Rutland wrote:
> Hi,
> 
> On Tue, Oct 18, 2016 at 04:59:21PM +0200, Colin Vidal wrote:
> > This adds arm-specific code in order to support the HARDENED_ATOMIC
> > feature. When an overflow is detected in atomic_t, atomic64_t or
> > atomic_long_t, an exception is raised and hardened_atomic_overflow()
> > is called.
> 
> I have some comments below, but for real review this needs to go via the
> linux-arm-kernel list.

Yeah, definitely, but see below first.

> > diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
> > index 66d0e21..fdaee17 100644
> > --- a/arch/arm/include/asm/atomic.h
> > +++ b/arch/arm/include/asm/atomic.h
> > @@ -17,18 +17,52 @@
> >  #include <linux/irqflags.h>
> >  #include <asm/barrier.h>
> >  #include <asm/cmpxchg.h>
> > +#include <linux/bug.h>
> >  
> >  #define ATOMIC_INIT(i)	{ (i) }
> >  
> >  #ifdef __KERNEL__
> >  
> > +#ifdef CONFIG_HARDENED_ATOMIC
> > +#define HARDENED_ATOMIC_INSN "bkpt 0xf103"
> 
> Please put the immediate in a #define somewhere.
> 
> What about thumb2 kernels?
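(Illustrative only -- the names below are not from the patch: the immediate
could be factored out the same way the arm64 code further down does with
__stringify(), e.g.

#include <linux/stringify.h>

#define HARDENED_ATOMIC_BKPT_IMM	0xf103
#define HARDENED_ATOMIC_INSN	"bkpt " __stringify(HARDENED_ATOMIC_BKPT_IMM)

and the prefetch abort handler could then derive its expected A1 encoding
from the same constant instead of hard-coding 0xe12f1073.)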
> 
> > +#define _ASM_EXTABLE(from, to)			\
> > +	".pushsection __ex_table,\"a\"\n"	\
> > +	".align 3\n"				\
> > +	".long "#from","#to"\n"			\
> > +	".popsection"
> > +#define __OVERFLOW_POST				\
> > +	"bvc 3f\n"				\
> > +	"2: "HARDENED_ATOMIC_INSN"\n"		\
> > +	"3:\n"
> > +#define __OVERFLOW_POST_RETURN			\
> > +	"bvc 3f\n"				\
> > +	"mov %0,%1\n"				\
> > +	"2: "HARDENED_ATOMIC_INSN"\n"		\
> > +	"3:\n"
> > +#define __OVERFLOW_EXTABLE			\
> > +	"4:\n"					\
> > +	_ASM_EXTABLE(2b, 4b)
> > +#else
> > +#define __OVERFLOW_POST
> > +#define __OVERFLOW_POST_RETURN
> > +#define __OVERFLOW_EXTABLE
> > +#endif
> > +
> 
> All this should live close to the assembly using it, to make it possible
> to follow.
> 
> This may also not be the best way of structuring this code. The
> additional indirection of passing this in at a high level makes it hard
> to read and potentially fragile. For single instructions it was ok, but
> I'm not so sure that it's ok for larger sequences like this.

I did a similar thing for the arm64 port (ll/sc version), since the current
macros are already complicated and I have no better idea for generating the
protected and _wrap definitions in a uniform way.

See below. The differences from Colin's arm port are:
  * __HARDENED_ATOMIC_CHECK/__CL* are controlled directly instead of being
    passed in as arguments
  * regs->pc is modified directly in the handler to skip the "brk" (the
    handler is not shown in this hunk; a rough sketch follows below)
    instead of using _ASM_EXTABLE (& fixup_exception())
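
A rough sketch of that handler (not the actual hunk; the function name is
illustrative, and it would be wired up as an arm64 break_hook for this brk
immediate):

/*
 * Report the overflow, then step over the brk so execution resumes at
 * the "b 4f" that follows it in __HARDENED_ATOMIC_CHECK.
 */
static int hardened_atomic_brk_handler(struct pt_regs *regs, unsigned int esr)
{
	hardened_atomic_overflow(regs);
	regs->pc += AARCH64_INSN_SIZE;	/* skip the brk instruction */
	return DBG_HOOK_HANDLED;
}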

Anyway, I'm holding off on posting my arm64 port while the discussions
around Elena's x86 patch are still ongoing.

Thanks,
-Takahiro AKASHI
===8<===
#define ATOMIC_OP(op, asm_op, wrap, cl)					\
  ...

#define ATOMIC_OP_RETURN(name, mb, acq, rel, op, asm_op, wrap, cl)	\
__LL_SC_INLINE int							\
__LL_SC_PREFIX(atomic_##op##_return##wrap##name(int i, atomic##wrap##_t *v))\
{									\
	unsigned long tmp;						\
	int result;							\
									\
	asm volatile("// atomic_" #op "_return" #name "\n"		\
"	prfm	pstl1strm, %2\n"					\
"1:	ld" #acq "xr	%w0, %2\n"					\
"	" #asm_op "	%w0, %w0, %w3\n"				\
	__HARDENED_ATOMIC_CHECK						\
"	st" #rel "xr	%w1, %w0, %2\n"					\
"	cbnz	%w1, 1b\n"						\
"	" #mb "\n"							\
"4:"									\
	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
	: "Ir" (i)							\
	: cl);								\
									\
	return result;							\
}									\
__LL_SC_EXPORT(atomic_##op##_return##wrap##name);

#define ATOMIC_FETCH_OP(name, mb, acq, rel, op, asm_op, wrap, cl)	\
  ...

#define ATOMIC_OPS(...)							\
	ATOMIC_OP(__VA_ARGS__, __CL)					\
	ATOMIC_OP_RETURN(        , dmb ish,  , l, __VA_ARGS__, __CL_MEM)\
	ATOMIC_OP_RETURN(_relaxed,        ,  ,  , __VA_ARGS__, __CL)	\
	ATOMIC_OP_RETURN(_acquire,        , a,  , __VA_ARGS__, __CL_MEM)\
	ATOMIC_OP_RETURN(_release,        ,  , l, __VA_ARGS__, __CL_MEM)\
	ATOMIC_FETCH_OP (        , dmb ish,  , l, __VA_ARGS__, __CL_MEM)\
	ATOMIC_FETCH_OP (_relaxed,        ,  ,  , __VA_ARGS__, __CL)	\
	ATOMIC_FETCH_OP (_acquire,        , a,  , __VA_ARGS__, __CL_MEM)\
	ATOMIC_FETCH_OP (_release,        ,  , l, __VA_ARGS__, __CL_MEM)

#ifdef CONFIG_HARDENED_ATOMIC
#define __HARDENED_ATOMIC_CHECK						\
"	bvc	3f\n"							\
"2:	brk	" __stringify(BUG_ATOMIC_OVERFLOW_BRK_IMM) "\n"		\
"	b	4f\n"							\
"3:"
#define __CL     "cc"
#define __CL_MEM "cc", "memory"

ATOMIC_OPS(add, adds,      )
ATOMIC_OPS(sub, subs,      )
#else
#define __HARDENED_ATOMIC_CHECK
#define __CL
#define __CL_MEM

ATOMIC_OPS(add,  add,      )
ATOMIC_OPS(sub,  sub,      )
#endif

#undef __HARDENED_ATOMIC_CHECK
#define __HARDENED_ATOMIC_CHECK
#undef __CL
#undef __CL_MEM
#define __CL
#define __CL_MEM "memory"
ATOMIC_OPS(add,  add, _wrap)
ATOMIC_OPS(sub,  sub, _wrap)
===>8===
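
For reference, with CONFIG_HARDENED_ATOMIC=y the two sets of expansions
above roughly provide both flavours (the _relaxed/_acquire/_release and
fetch variants follow the same pattern):

int atomic_add_return(int i, atomic_t *v);           /* adds + bvc check, brk on overflow */
int atomic_add_return_wrap(int i, atomic_wrap_t *v); /* plain add, wraps silently */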


> > diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
> > index 3a2e678..ce8ee00 100644
> > --- a/arch/arm/mm/fault.c
> > +++ b/arch/arm/mm/fault.c
> > @@ -580,6 +580,21 @@ do_PrefetchAbort(unsigned long addr, unsigned int ifsr, struct pt_regs *regs)
> >  	const struct fsr_info *inf = ifsr_info + fsr_fs(ifsr);
> >  	struct siginfo info;
> >  
> > +#ifdef CONFIG_HARDENED_ATOMIC
> > +	if (fsr_fs(ifsr) == FAULT_CODE_DEBUG) {
> 
> You'll need to justify why this isn't in the ifsr_info table. It has the
> same basic shape as the usual set of handlers.
> 
> I note that we don't seem to use SW breakpoints at all currently, and I
> suspect there's a reason for that which we need to consider.
> 
> Also, if this *must* live here, please make it a static inline with an
> empty stub, rather than an ifdef'd block.
> 
> > +		unsigned long pc = instruction_pointer(regs);
> > +		unsigned int bkpt;
> > +
> > +		if (!probe_kernel_address((const unsigned int *)pc, bkpt) &&
> > +		    cpu_to_le32(bkpt) == 0xe12f1073) {
> 
> This appears to be the A1 encoding from the ARM ARM. What about the T1
> encoding, i.e. thumb?
> 
> Regardless, *please* de-magic the number using a #define.
> 
> Also, this should be le32_to_cpu -- in the end we're treating the
> converted value as cpu-native. The variable itself should be a __le32.
> 
> Thanks,
> Mark.
> 
> > +			current->thread.error_code = ifsr;
> > +			current->thread.trap_no = 0;
> > +			hardened_atomic_overflow(regs);
> > +			fixup_exception(regs);
> > +			return;
> > +		}
> > +	}
> > +#endif
> >  	if (!inf->fn(addr, ifsr | FSR_LNX_PF, regs))
> >  		return;
> >  
> > -- 
> > 2.7.4
> > 
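For what it's worth, pulling the comments on the fault.c hunk together
(leaving aside the ifsr_info question; the constant and helper names below
are made up, and Thumb is still not handled), the check could become a
static inline with an empty stub:

#define HARDENED_ATOMIC_BKPT_A1	0xe12f1073	/* "bkpt 0xf103", A1 encoding */

#ifdef CONFIG_HARDENED_ATOMIC
static inline bool hardened_atomic_check_bkpt(unsigned long addr,
					      unsigned int ifsr,
					      struct pt_regs *regs)
{
	unsigned long pc = instruction_pointer(regs);
	__le32 insn;

	if (fsr_fs(ifsr) != FAULT_CODE_DEBUG)
		return false;

	if (probe_kernel_address((const unsigned int *)pc, insn) ||
	    le32_to_cpu(insn) != HARDENED_ATOMIC_BKPT_A1)
		return false;

	current->thread.error_code = ifsr;
	current->thread.trap_no = 0;
	hardened_atomic_overflow(regs);
	fixup_exception(regs);
	return true;
}
#else
static inline bool hardened_atomic_check_bkpt(unsigned long addr,
					      unsigned int ifsr,
					      struct pt_regs *regs)
{
	return false;
}
#endif

do_PrefetchAbort() would then just do

	if (hardened_atomic_check_bkpt(addr, ifsr, regs))
		return;

with no #ifdef block in the function body.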

Thread overview: 30+ messages
2016-10-18 14:59 [kernel-hardening] [RFC 0/2] arm: implementation of HARDENED_ATOMIC Colin Vidal
2016-10-18 14:59 ` [kernel-hardening] [RFC 1/2] Reordering / guard definition on atomic_*_wrap function in order to avoid implicitly defined / redefined error on them, when CONFIG_HARDENED_ATOMIC is unset Colin Vidal
2016-10-18 16:04   ` Vaishali Thakkar
2016-10-19  8:48     ` Colin Vidal
2016-10-19  8:21   ` [kernel-hardening] " Reshetova, Elena
2016-10-19  8:31     ` Greg KH
2016-10-19  8:58       ` Colin Vidal
2016-10-19  9:16         ` Greg KH
2016-10-18 14:59 ` [kernel-hardening] [RFC 2/2] arm: implementation for HARDENED_ATOMIC Colin Vidal
2016-10-18 21:29   ` [kernel-hardening] " Kees Cook
2016-10-19  8:45     ` Colin Vidal
2016-10-19 20:11       ` Kees Cook
2016-10-20  5:58         ` AKASHI Takahiro
2016-10-20  8:30           ` Colin Vidal
2016-10-25  9:18   ` AKASHI Takahiro
2016-10-25 15:02     ` Colin Vidal
2016-10-26  7:24       ` AKASHI Takahiro
2016-10-26  8:20         ` Colin Vidal
2016-10-27 11:08           ` Mark Rutland
2016-10-27 21:37             ` Kees Cook
2016-10-27 13:24   ` [kernel-hardening] " Mark Rutland
2016-10-28  5:18     ` AKASHI Takahiro [this message]
2016-10-28  8:33     ` Colin Vidal
2016-10-28 10:20       ` Mark Rutland
2016-10-28 10:59         ` David Windsor
2016-10-21  7:47 ` [kernel-hardening] Re: [RFC 0/2] arm: implementation of HARDENED_ATOMIC AKASHI Takahiro
2016-10-27 10:32 ` [kernel-hardening] " Mark Rutland
2016-10-27 12:45   ` David Windsor
2016-10-27 13:53     ` Mark Rutland
2016-10-27 14:10       ` David Windsor
