From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: Kees Cook <keescook@chromium.org>
Cc: Will Deacon <will.deacon@arm.com>,
	Jayachandran Chandrasekharan Nair <jnair@marvell.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Jan Glauber <jglauber@marvell.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [RFC] Disable lockref on arm64
Date: Sat, 15 Jun 2019 10:47:19 +0200
Message-ID: <CAKv+Gu_XuhgUCYOeykrbaxJz-wL1HFrc_O+HeZHqaGkMHd2J9Q@mail.gmail.com>
In-Reply-To: <201906142026.1BC27EDB1E@keescook>

On Sat, 15 Jun 2019 at 06:21, Kees Cook <keescook@chromium.org> wrote:
>
> tl;dr: if arm/arm64 can catch overflow, untested dec-to-zero, and
> inc-from-zero, while performing better than existing REFCOUNT_FULL,
> it's a no-brainer to switch. Minimum parity to x86 would be to catch
> overflow and untested dec-to-zero. Minimum viable protection would be to
> catch overflow. LKDTM is your friend.
>
> Details below...
>
> On Fri, Jun 14, 2019 at 11:38:50AM +0100, Will Deacon wrote:
> > On Fri, Jun 14, 2019 at 12:24:54PM +0200, Ard Biesheuvel wrote:
> > > On Fri, 14 Jun 2019 at 11:58, Will Deacon <will.deacon@arm.com> wrote:
> > > > On Fri, Jun 14, 2019 at 07:09:26AM +0000, Jayachandran Chandrasekharan Nair wrote:
> > > > > x86 added an arch-specific fast refcount implementation - and the commit
> > > > > specifically notes that it is faster than cmpxchg-based code[1].
> > > > >
> > > > > There seems to be an ongoing effort to move more and more subsystems
> > > > > from atomic_t to refcount_t (e.g. [2]), specifically because refcount_t on
> > > > > x86 is fast enough and you get some error checking that atomic_t does
> > > > > not have.
>
> For clarity: the choices on x86 are: full or fast, where both catch
> the condition that leads to use-after-free that can be unconditionally
> mitigated (i.e. refcount overflow-wrapping to zero: the common missing
> ref count decrement). The _underflow_ case (the less common missing ref
> count increment) can be exploited but nothing can be done to mitigate
> it. Only a later increment from zero can indicate that something went
> wrong _in the past_.
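>
> To make those two bug classes concrete, here is a toy sketch
> (hypothetical driver code, not from any patch in this thread):
>
>     #include <linux/refcount.h>
>     #include <linux/slab.h>
>
>     struct foo {
>             refcount_t ref;
>     };
>
>     static void foo_get(struct foo *f)
>     {
>             /* WARNs and saturates on overflow; a FULL implementation
>              * also WARNs on increment-from-zero */
>             refcount_inc(&f->ref);
>     }
>
>     static void foo_put(struct foo *f)
>     {
>             if (refcount_dec_and_test(&f->ref))
>                     kfree(f);
>     }
>
> A path that forgets foo_put() means the count only ever grows, so an
> unprotected counter eventually wraps back toward zero while references
> still exist; a path that forgets foo_get() frees the object early, and
> the first visible symptom is a later foo_get() tripping the
> inc-from-zero check on the freed object.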
>
> There is no way to build x86 without the overflow protection, and
> that was matched on arm/arm64 by making REFCOUNT_FULL unconditionally
> enabled. So, as far as weakening the protection level goes, I'm
> totally fine if arm/arm64 falls back to a non-FULL
> implementation as long as it catches the overflow case (which the prior
> "fast" patches totally did).
>
> > > > Correct, but there are also some cases that are only caught by
> > > > REFCOUNT_FULL.
> > > >
> > > Yes, but do note that my arm64 implementation catches
> > > increment-from-zero as well.
>
> FWIW, the vast majority of bugs that refcount_t has found have been
> inc-from-zero (the overflow case doesn't tend to ever get exercised,
> but it's easy for syzkaller and other fuzzers to underflow when such a
> path is found). And those are only found on REFCOUNT_FULL kernels
> presently, so it'd be nice to have that case covered in the "fast"
> arm/arm64 case too.
>
> > Ok, so it's just the silly racy cases that are problematic?
> >
> > > > > Do you think Ard's patch needs changes before it can be considered? I
> > > > > can take a look at that.
> > > >
> > > > I would like to see how it performs if we keep the checking inline, yes.
> > > > I suspect Ard could spin this in short order.
> > >
> > > Moving the post checks before the stores you mean? That shouldn't be
> > > too difficult, I suppose, but it will certainly cost performance.
> >
> > That's what I'd like to assess, since the major complaint seems to be the
> > use of cmpxchg() as opposed to inline branching.
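> >
> > For reference, roughly the shape of the cmpxchg-loop flavour (a
> > simplified paraphrase of the generic lib/refcount.c code of the
> > time, not the actual implementation):
> >
> >     static inline void refcount_inc_sketch(refcount_t *r)
> >     {
> >             unsigned int new, val = atomic_read(&r->refs);
> >
> >             do {
> >                     if (!val)       /* inc-from-zero: refuse */
> >                             break;
> >                     new = val + 1;
> >                     if (!new)       /* overflow: stay saturated */
> >                             return;
> >             } while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
> >
> >             WARN_ONCE(!val, "refcount_t: increment on zero\n");
> >     }
> >
> > Every check runs on a privately computed value before the cmpxchg
> > publishes it; the x86 fast path instead does the atomic op first and
> > branches on the flags into an out-of-line fixup afterwards.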
> >
> > > > > > Whatever we do, I prefer to keep REFCOUNT_FULL the default option for arm64,
> > > > > > so if we can't keep the semantics when we remove the cmpxchg, you'll need to
> > > > > > opt into this at config time.
> > > > >
> > > > > Only arm64 and arm select REFCOUNT_FULL in the default config. So please
> > > > > reconsider this! This is going to slow down arm64 vs. other archs and it
> > > > > will become worse when more code adopts refcount_t.
> > > >
> > > > Maybe, but faced with the choice between your micro-benchmark results and
> > > > security-by-default for people using the arm64 Linux kernel, I really think
> > > > that's a no-brainer. I'm well aware that not everybody agrees with me on
> > > > that.
> > >
> > > I think the question of whether the benchmark is valid is justified, but
> > > otoh, we are obsessed with hackbench, which is not that representative
> > > of a real workload either. It would be better to discuss these changes
> > > in the context of known real-world use cases where refcounts are a
> > > true bottleneck.
> >
> > I wasn't calling into question the validity of the benchmark (I really have
> > no clue about that), but rather that you can't have your cake and eat it.
> > Faced with the choice, I'd err on the security side because it's far easier
> > to explain to somebody that the default is full mitigation at a cost than it
> > is to explain why a partial mitigation is acceptable (and in the end it's
> > often subjective because people have different thresholds).
>
> I'm happy to call into question the validity of the benchmark though! ;)
> Seriously, it came up repeatedly in the x86 port, where there was a
> claim of "it's slower" (which is certainly objectively true: more cycles
> are spent), but no one could present a real-world workload where the
> difference was measurable.
>
> > > Also, I'd like to have Kees's view on the gap between REFCOUNT_FULL
> > > and the fast version on arm64. I'm not convinced the cases we are not
> > > covering are such a big deal.
> >
> > Fair enough, but if the conclusion is that it's not a big deal then we
> > should just remove REFCOUNT_FULL altogether, because it's the choice that
> > is the problem here.
>
> The coverage difference on x86 is that inc-from-zero is only caught in
> the FULL case. Additionally there is the internal difference around how
> "saturation" of the value happens. e.g. under FULL a count gets pinned
> either to INT_MAX or to zero.
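>
> Illustrated (my simplified reading, not the exact kernel code): FULL
> checks before it stores and pins the count at the boundary itself,
> while the fast scheme lets the unchecked atomic op wrap and then parks
> the counter mid-range:
>
>     /* x86-style fixup, run after an increment has already wrapped */
>     #define REFCOUNT_SATURATED      (INT_MIN / 2)
>
>     static void refcount_saturate(atomic_t *r)
>     {
>             /*
>              * Park the counter deep in negative space: INT_MIN/2
>              * leaves ~1G of slack on either side, so racing incs and
>              * decs on other CPUs cannot walk a saturated counter back
>              * into the usable range before it gets noticed.
>              */
>             atomic_set(r, REFCOUNT_SATURATED);
>     }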
>
> Since the "fast" arm patch caught inc-from-zero, I would say sure,
> ditch FULL in favor of it (though check that "dec-to-zero" is caught:
> i.e. _dec() hitting zero -- instead of dec_and_test() hitting zero). LKDTM
> has extensive behavioral tests for refcount_t, so if the tests show the
> same results before/after, go for it. :) Though note that the logic may
> need tweaking depending on the saturation behavior: right now it expects
> either FULL (INT_MAX/0 pinning) or the x86 saturation (INT_MIN / 2).
>
> Note also that LKDTM has a refcount benchmark as well, in case you want
> to measure the difference between atomic_t and refcount_t in the most
> microbenchmark-y way possible. This is what was used for the numbers in
> commit 7a46ec0e2f48 ("locking/refcounts, x86/asm: Implement fast
> refcount overflow protection"):
>
>  2147483646 refcount_inc()s and 2147483647 refcount_dec_and_test()s:
>                  cycles        protections
>  atomic_t         82249267387  none
>  refcount_t-fast  82211446892  overflow, untested dec-to-zero
>  refcount_t-full 144814735193  overflow, untested dec-to-zero, inc-from-zero
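>
> (Assuming the usual LKDTM debugfs interface, those timing tests are
> kicked off with something like
>
>     echo ATOMIC_TIMING   > /sys/kernel/debug/provoke-crash/DIRECT
>     echo REFCOUNT_TIMING > /sys/kernel/debug/provoke-crash/DIRECT
>
> and the cycle counts land in dmesg.)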
>
> Also note that the x86 fast implementations adjusted memory ordering
> slightly later on in commit 47b8f3ab9c49 ("refcount_t: Add ACQUIRE
> ordering on success for dec(sub)_and_test() variants").
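>
> That ordering matters because whoever wins the final decrement goes on
> to tear the object down; sketching the pattern with the hypothetical
> foo_put() from above:
>
>     if (refcount_dec_and_test(&f->ref)) {
>             /*
>              * The ACQUIRE on the successful final decrement ensures
>              * the tear-down below cannot be reordered before the
>              * release-ordered puts done on other CPUs are observed.
>              */
>             kfree(f);
>     }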
>

Thanks Kees.

That acquire ordering patch appears to carry over cleanly. So the
remaining question Will had was whether it makes sense to do the
condition checks before doing the actual store, to avoid having a time
window where the refcount holds an illegal value. Since arm64 does not
have memory operands, the instruction count wouldn't change, but
it would definitely result in a performance hit on out-of-order CPUs.
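
Concretely, the pre-checked flavour would look something like this (an
illustrative sketch only, not the actual patch):

    static inline void refcount_inc_prechecked(int *count)
    {
            int new;
            unsigned int fail;

            asm volatile(
            "1:     ldxr    %w0, %2\n"      /* load-exclusive the count */
            "       adds    %w0, %w0, #1\n" /* increment, setting flags */
            "       b.vs    2f\n"           /* overflow: skip the store */
            "       stxr    %w1, %w0, %2\n" /* try the store-exclusive  */
            "       cbnz    %w1, 1b\n"      /* lost exclusivity: retry  */
            "2:\n"
            : "=&r" (new), "=&r" (fail), "+Q" (*count)
            :
            : "cc");
            /* overflow handling (saturate/warn) would go here, without
             * the wrapped value ever having reached memory */
    }

The check sits between the LDXR and the STXR, so an illegal value is
never made visible, but the conditional branch now sits on the critical
path in front of the store-exclusive.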

