From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S932094AbaKLBZH (ORCPT );
        Tue, 11 Nov 2014 20:25:07 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:58797 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S932075AbaKLBZC (ORCPT );
        Tue, 11 Nov 2014 20:25:02 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Kirill Tkhai,
        "Peter Zijlstra (Intel)", "Paul E. McKenney", Linus Torvalds,
        Ingo Molnar
Subject: [PATCH 3.17 068/319] sched: Use dl_bw_of() under RCU read lock
Date: Wed, 12 Nov 2014 10:13:26 +0900
Message-Id: <20141112011003.870141157@linuxfoundation.org>
X-Mailer: git-send-email 2.1.3
In-Reply-To: <20141112010952.553519040@linuxfoundation.org>
References: <20141112010952.553519040@linuxfoundation.org>
User-Agent: quilt/0.63-1
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

3.17-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kirill Tkhai

commit 66339c31bc3978d5fff9c4b4cb590a861def4db2 upstream.

dl_bw_of() dereferences rq->rd, which requires the RCU read lock to be held.
The probability of a use-after-free here isn't zero.

Also add a lockdep assert into dl_bw_cpus().

Signed-off-by: Kirill Tkhai
Signed-off-by: Peter Zijlstra (Intel)
Cc: Paul E. McKenney
Cc: Linus Torvalds
Link: http://lkml.kernel.org/r/20140922183624.11015.71558.stgit@localhost
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman

---
 kernel/sched/core.c |   10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1977,6 +1977,8 @@ unsigned long to_ratio(u64 period, u64 r
 #ifdef CONFIG_SMP
 inline struct dl_bw *dl_bw_of(int i)
 {
+	rcu_lockdep_assert(rcu_read_lock_sched_held(),
+			   "sched RCU must be held");
 	return &cpu_rq(i)->rd->dl_bw;
 }
 
@@ -1985,6 +1987,8 @@ static inline int dl_bw_cpus(int i)
 	struct root_domain *rd = cpu_rq(i)->rd;
 	int cpus = 0;
 
+	rcu_lockdep_assert(rcu_read_lock_sched_held(),
+			   "sched RCU must be held");
 	for_each_cpu_and(i, rd->span, cpu_active_mask)
 		cpus++;
 
@@ -7580,6 +7584,8 @@ static int sched_dl_global_constraints(v
 	int cpu, ret = 0;
 	unsigned long flags;
 
+	rcu_read_lock();
+
 	/*
 	 * Here we want to check the bandwidth not being set to some
 	 * value smaller than the currently allocated bandwidth in
@@ -7601,6 +7607,8 @@ static int sched_dl_global_constraints(v
 		break;
 	}
 
+	rcu_read_unlock();
+
 	return ret;
 }
 
@@ -7616,6 +7624,7 @@ static void sched_dl_do_global(void)
 	if (global_rt_runtime() != RUNTIME_INF)
 		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
 
+	rcu_read_lock();
 	/*
	 * FIXME: As above...
	 */
@@ -7626,6 +7635,7 @@ static void sched_dl_do_global(void)
 		dl_b->bw = new_bw;
 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
 	}
+	rcu_read_unlock();
 }
 
 static int sched_rt_global_validate(void)
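
For context on the calling convention the new assertions enforce, here is a
minimal sketch. It is not part of the patch: example_read_dl_bw() is a
hypothetical caller, shown only to illustrate that dl_bw_of() must be invoked
inside an RCU(-sched) read-side critical section so that cpu_rq(i)->rd, and
the dl_bw embedded in it, cannot be freed while the returned pointer is in use.

/*
 * Illustrative sketch only (not from the patch): a hypothetical caller
 * demonstrating the locking rule checked by the new rcu_lockdep_assert().
 */
static u64 example_read_dl_bw(int cpu)
{
	struct dl_bw *dl_b;
	u64 bw;

	rcu_read_lock_sched();		/* satisfies rcu_read_lock_sched_held() */
	dl_b = dl_bw_of(cpu);		/* dereferences cpu_rq(cpu)->rd */
	bw = dl_b->bw;			/* rd, and thus dl_b, stays valid here */
	rcu_read_unlock_sched();

	return bw;
}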