From: Will Deacon <will@kernel.org>
To: Jann Horn <jannh@google.com>
Cc: kernel list <linux-kernel@vger.kernel.org>,
	Kees Cook <keescook@chromium.org>, Ingo Molnar <mingo@kernel.org>,
	Elena Reshetova <elena.reshetova@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Hanjun Guo <guohanjun@huawei.com>,
	Jan Glauber <jglauber@marvell.com>,
	Kernel Hardening <kernel-hardening@lists.openwall.com>
Subject: Re: [RESEND PATCH v4 05/10] lib/refcount: Improve performance of generic REFCOUNT_FULL code
Date: Fri, 28 Feb 2020 10:43:33 +0000	[thread overview]
Message-ID: <20200228104332.GB2395@willie-the-truck> (raw)
In-Reply-To: <CAG48ez0CoY2fuxNkdAn_Jx+hffQ1eyM_V++5q6Q5nra+_tE9hQ@mail.gmail.com>

Hi Jann,

On Wed, Feb 26, 2020 at 05:10:46AM +0100, Jann Horn wrote:
> On Thu, Nov 21, 2019 at 12:58 PM Will Deacon <will@kernel.org> wrote:
> > Rewrite the generic REFCOUNT_FULL implementation so that the saturation
> > point is moved to INT_MIN / 2. This allows us to defer the sanity checks
> > until after the atomic operation, which removes many uses of cmpxchg()
> > in favour of atomic_fetch_{add,sub}().
> 
> Oh, I never saw this, this is really neat! CCing the kernel-hardening
> list on this might've been a good idea.

Glad you like it! I'll try to remember to cc them in future but
get_maintainer.pl doesn't tend to mention them very often.

> > + * If another thread also performs a refcount_inc() operation between the two
> > + * atomic operations, then the count will continue to edge closer to 0. If it
> > + * reaches a value of 1 before /any/ of the threads reset it to the saturated
> > + * value, then a concurrent refcount_dec_and_test() may erroneously free the
> > + * underlying object. Given the precise timing details involved with the
> > + * round-robin scheduling of each thread manipulating the refcount and the need
> > + * to hit the race multiple times in succession, there doesn't appear to be a
> > + * practical avenue of attack even if using refcount_add() operations with
> > + * larger increments.
> 
> On top of that, the number of threads that can actually be running at
> a given time is capped. See include/linux/threads.h, where it is
> capped to pow(2, 22):
> 
>     /*
>      * A maximum of 4 million PIDs should be enough for a while.
>      * [NOTE: PID/TIDs are limited to 2^29 ~= 500+ million, see futex.h.]
>      */
>     #define PID_MAX_LIMIT (CONFIG_BASE_SMALL ? PAGE_SIZE * 8 : \
>             (sizeof(long) > 4 ? 4 * 1024 * 1024 : PID_MAX_DEFAULT))
> 
> And in the futex UAPI header, we have this, baking a TID limit into
> the userspace API (note that this is pow(2,30), not pow(2,29) as the
> comment in threads.h claims - I'm not sure where that difference comes
> from):
> 
>     /*
>      * The rest of the robust-futex field is for the TID:
>      */
>     #define FUTEX_TID_MASK 0x3fffffff
> 
> So AFAICS, with the current PID_MAX_LIMIT, if you assume that all
> participating refcount operations are non-batched (delta 1) and the
> attacker can't cause the threads to oops in the middle of the refcount
> operation (maybe that would be possible if you managed to find
> something like a NULL pointer dereference in perf software event code
> and had perf paranoia at <=1 , or something like that? - I'm not
> sure), then even in a theoretical scenario where an attacker spawns
> the maximum number of tasks possible and manages to get all of them to
> sequentially preempt while being in the middle of increment operations
> in several nested contexts (I'm not sure whether that can even happen
> - you're not going to take typical sleeping exceptions like page
> faults in the middle of a refcount op), the attacker will stay
> comfortably inside the saturated range. Even if the PID_MAX_LIMIT is
> at some point raised to the maximum permitted by the futex UAPI, this
> still holds as long as you assume no nesting. Hm, should I send a
> patch to add something like this to the explanatory comment?

Sure, I'd be happy to improve the document by adding this -- please send
out a patch for review. It's probably also worth mentioning the batching
use-cases, although I struggle to reason about the window between the
{under,over}flow occuring and saturation. The idea of oopsing during that
period is interesting, although at least that's not silent (and I think
Android usually (always?) sets panic_on_oops to 1.

> Of course, if someone uses refcount batching with sufficiently large
> values, those guarantees go out of the window - if we wanted to be
> perfectionist about this, we could make the batched operations do slow
> cmpxchg stuff while letting the much more performance-critical
> single-reference case continue to use the fast saturation scheme.
> OTOH, the networking folks would probably hate that, since they're
> using the batched ops for ->sk_wmem_alloc stuff, where they count
> bytes as references? So I guess maybe we should leave it as-is.

Agreed. If we hamper the performance here, then people will either look
to disable the checking or they will switch to atomic_t, which puts us
back to square one. Perfection is the enemy of the good and all that.

Having said that, I'd still be keen to learn about any practical attacks
on this in case there is something smart we can do to mitigate them
without cmpxchg(). For example, one silly approach might be to bound the
maximum increment and split larger ones up using a loop.

Will

Thread overview: 41+ messages
2019-11-21 11:58 [RESEND PATCH v4 00/10] Rework REFCOUNT_FULL using atomic_fetch_* operations Will Deacon
2019-11-21 11:58 ` [RESEND PATCH v4 01/10] lib/refcount: Define constants for saturation and max refcount values Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2019-11-21 11:58 ` [RESEND PATCH v4 02/10] lib/refcount: Ensure integer operands are treated as signed Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2019-11-21 11:58 ` [RESEND PATCH v4 03/10] lib/refcount: Remove unused refcount_*_checked() variants Will Deacon
2019-11-21 14:55   ` David Sterba
2019-11-21 17:11     ` Kees Cook
2019-11-21 17:17       ` David Sterba
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2019-11-21 11:58 ` [RESEND PATCH v4 04/10] lib/refcount: Move bulk of REFCOUNT_FULL implementation into header Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: Move the bulk of the REFCOUNT_FULL implementation into the <linux/refcount.h> header tip-bot2 for Will Deacon
2019-11-21 11:58 ` [RESEND PATCH v4 05/10] lib/refcount: Improve performance of generic REFCOUNT_FULL code Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2020-02-26  4:10   ` [RESEND PATCH v4 05/10] lib/refcount: " Jann Horn
2020-02-28 10:43     ` Will Deacon [this message]
2020-03-02 13:06       ` Jann Horn
2019-11-21 11:58 ` [RESEND PATCH v4 06/10] lib/refcount: Move saturation warnings out of line Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2019-11-21 11:58 ` [RESEND PATCH v4 07/10] lib/refcount: Consolidate REFCOUNT_{MAX,SATURATED} definitions Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2019-11-21 11:59 ` [RESEND PATCH v4 08/10] refcount: Consolidate implementations of refcount_t Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2019-12-01 15:49   ` [refcount] d2d337b185: WARNING:at_lib/refcount.c:#refcount_warn_saturate kernel test robot
2019-12-02  9:43     ` Ard Biesheuvel
2019-12-03  8:01       ` [LKP] " Rong Chen
2019-12-03  8:09         ` [LKP] " Will Deacon
2019-12-03  8:42           ` [LKP] " Rong Chen
2019-11-21 11:59 ` [RESEND PATCH v4 09/10] lib/refcount: Remove unused 'refcount_error_report()' function Will Deacon
2019-11-25  8:19   ` [tip: locking/core] locking/refcount: " tip-bot2 for Will Deacon
2019-11-21 11:59 ` [RESEND PATCH v4 10/10] drivers/lkdtm: Remove references to CONFIG_REFCOUNT_FULL Will Deacon
2019-11-25  8:19   ` [tip: locking/core] lkdtm: " tip-bot2 for Will Deacon
2019-11-21 12:07 ` [RESEND PATCH v4 00/10] Rework REFCOUNT_FULL using atomic_fetch_* operations Ard Biesheuvel
2019-11-22  3:44 ` Hanjun Guo