From: Benjamin Segall <bsegall@google.com>
To: Mathias Krause <minipli@grsecurity.net>
Cc: "Vincent Guittot" <vincent.guittot@linaro.org>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Juri Lelli" <juri.lelli@redhat.com>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Dietmar Eggemann" <dietmar.eggemann@arm.com>,
	"Steven Rostedt" <rostedt@goodmis.org>,
	"Mel Gorman" <mgorman@suse.de>,
	"Daniel Bristot de Oliveira" <bristot@redhat.com>,
	"Valentin Schneider" <Valentin.Schneider@arm.com>,
	linux-kernel@vger.kernel.org, "Odin Ugedal" <odin@uged.al>,
	"Kevin Tanguy" <kevin.tanguy@corp.ovh.com>,
	"Brad Spengler" <spender@grsecurity.net>
Subject: Re: [PATCH] sched/fair: Prevent dead task groups from regaining cfs_rq's
Date: Thu, 04 Nov 2021 13:46:33 -0700
Message-ID: <xm26a6ij6446.fsf@google.com>
In-Reply-To: <a6a3c6c9-d5ea-59b6-8871-0f72bff38833@grsecurity.net> (Mathias Krause's message of "Thu, 4 Nov 2021 16:13:17 +0100")

Mathias Krause <minipli@grsecurity.net> writes:

> Am 04.11.21 um 09:50 schrieb Vincent Guittot:
>> On Wed, 3 Nov 2021 at 23:04, Benjamin Segall <bsegall@google.com> wrote:
>>>
>>> Mathias Krause <minipli@grsecurity.net> writes:
>>>
>>>> Kevin is reporting crashes which point to a use-after-free of a cfs_rq
>>>> in update_blocked_averages(). Initial debugging revealed that we have
>>>> live cfs_rq's (on_list=1) in an about-to-be-kfree()'d task group in
>>>> free_fair_sched_group(). However, it was unclear how that could happen.
>>>> [...]
>>>> Fixes: a7b359fc6a37 ("sched/fair: Correctly insert cfs_rq's to list on unthrottle")
>>>> Cc: Odin Ugedal <odin@uged.al>
>>>> Cc: Michal Koutný <mkoutny@suse.com>
>>>> Reported-by: Kevin Tanguy <kevin.tanguy@corp.ovh.com>
>>>> Suggested-by: Brad Spengler <spender@grsecurity.net>
>>>> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
>>>> ---
>>>>  kernel/sched/core.c | 18 +++++++++++++++---
>>>>  1 file changed, 15 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>>> index 978460f891a1..60125a6c9d1b 100644
>>>> --- a/kernel/sched/core.c
>>>> +++ b/kernel/sched/core.c
>>>> @@ -9506,13 +9506,25 @@ void sched_offline_group(struct task_group *tg)
>>>>  {
>>>>       unsigned long flags;
>>>>
>>>> -     /* End participation in shares distribution: */
>>>> -     unregister_fair_sched_group(tg);
>>>> -
>>>> +     /*
>>>> +      * Unlink first, to prevent walk_tg_tree_from() from finding us (via
>>>> +      * sched_cfs_period_timer()).
>>>> +      */
>>>>       spin_lock_irqsave(&task_group_lock, flags);
>>>>       list_del_rcu(&tg->list);
>>>>       list_del_rcu(&tg->siblings);
>>>>       spin_unlock_irqrestore(&task_group_lock, flags);
>>>> +
>>>> +     /*
>>>> +      * Wait for all pending users of this task group to leave their RCU
>>>> +      * critical section to ensure no new user will see our dying task
>>>> +      * group any more. Specifically ensure that tg_unthrottle_up() won't
>>>> +      * add decayed cfs_rq's to it.
>>>> +      */
>>>> +     synchronize_rcu();
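
[ Aside, for readers less familiar with the pattern the hunk above sets up:
  it is the usual RCU unpublish -> wait -> reclaim ordering.  A minimal,
  self-contained sketch of that ordering with made-up names (my_group,
  my_groups, ...) -- not the scheduler code itself:

    #include <linux/list.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct my_group {
        struct list_head list;    /* linked into an RCU-read list */
    };

    static LIST_HEAD(my_groups);
    static DEFINE_SPINLOCK(my_groups_lock);

    /* Readers walk the list under rcu_read_lock() only. */
    static void my_walk(void (*fn)(struct my_group *))
    {
        struct my_group *g;

        rcu_read_lock();
        list_for_each_entry_rcu(g, &my_groups, list)
            fn(g);
        rcu_read_unlock();
    }

    /* Writer: unpublish first, wait for readers, only then tear down. */
    static void my_group_destroy(struct my_group *g)
    {
        spin_lock(&my_groups_lock);
        list_del_rcu(&g->list);    /* 1. no new reader can find it */
        spin_unlock(&my_groups_lock);

        synchronize_rcu();         /* 2. pre-existing readers are done */

        kfree(g);                  /* 3. safe to tear down / free */
    }

  The patch applies step 2 between unlinking the tg from the tree (step 1)
  and unregistering / freeing its cfs_rq's (step 3). ]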
>>>
>>> I was going to suggest that we could just clear all of avg.load_sum/etc.,
>>> but that breaks the speculative on_list read. Currently the final avg
>>> update just races, but that's not good enough if we want to rely on it
>>> to prevent the UAF. synchronize_rcu() doesn't look so bad if the
>>> alternative is taking every rq lock anyway.
>>>
>>> I do wonder if we can move the relevant part of
>>> unregister_fair_sched_group into sched_free_group_rcu. After all,
>>> for_each_leaf_cfs_rq_safe is not _rcu and update_blocked_averages does
>>> in fact hold the rq lock (though print_cfs_stats thinks it is _rcu and
>>> should be updated).
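
[ Context for "speculative on_list read": unregister_fair_sched_group()
  peeks at cfs_rq->on_list without holding the rq lock to decide whether it
  needs to take that lock at all.  Roughly -- paraphrased from memory of
  kernels of that era, not a verbatim copy, and the lock helpers differ
  between versions:

    static void sketch_unregister(struct task_group *tg)
    {
        unsigned long flags;
        struct rq *rq;
        int cpu;

        for_each_possible_cpu(cpu) {
            /*
             * Lockless peek: skip CPUs whose cfs_rq never made it onto
             * the leaf list.  Only safe while nothing can re-add the
             * cfs_rq behind our back -- which is what tg_unthrottle_up()
             * racing with the teardown breaks.
             */
            if (!tg->cfs_rq[cpu]->on_list)
                continue;

            rq = cpu_rq(cpu);
            raw_spin_rq_lock_irqsave(rq, flags);
            list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
            raw_spin_rq_unlock_irqrestore(rq, flags);
        }
    }

  This unlocked check is what a clear-the-averages approach would
  additionally have to keep safe. ]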
>> 
>> I was wondering the same thing.
>> We would have to move unregister_fair_sched_group() completely into
>> sched_free_group_rcu(), and probably into cpu_cgroup_css_free() too.
>
> Well, the point is that print_cfs_stats() pretty much relies on the list
> being stable, i.e. safe to traverse. It doesn't take any locks while
> walking it (besides the RCU lock), so if we were to modify it
> concurrently, bad things would happen.


Yeah, my idea was that print_cfs_stats() is just debug info, so it wouldn't
be a disaster to hold the lock, but I forgot that we probably can't hold
it over all that writing.
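
For reference, the walk in question is roughly the following (from memory
of kernels around v5.15, so treat it as a sketch rather than gospel) -- the
whole loop runs under rcu_read_lock() only, with print_cfs_rq() producing a
fair amount of output per entry:

    void print_cfs_stats(struct seq_file *m, int cpu)
    {
        struct cfs_rq *cfs_rq, *pos;

        rcu_read_lock();
        for_each_leaf_cfs_rq_safe(cpu_rq(cpu), cfs_rq, pos)
            print_cfs_rq(m, cpu, cfs_rq);
        rcu_read_unlock();
    }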

