Message-Id: <20150601140840.168679775@infradead.org>
User-Agent: quilt/0.61-1
Date: Mon, 01 Jun 2015 15:58:24 +0200
From: Peter Zijlstra
To: umgwanakikbuti@gmail.com, mingo@elte.hu
Cc: ktkhai@parallels.com, rostedt@goodmis.org, juri.lelli@gmail.com,
 pang.xunlei@linaro.org, oleg@redhat.com, linux-kernel@vger.kernel.org,
 "Peter Zijlstra"
Subject: [RFC][PATCH 6/7] sched,dl: Remove return value from pull_dl_task()
References: <20150601135818.506080835@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-sched-post_schedule-4.patch

In order to be able to use pull_dl_task() from a callback, we need to
do away with the return value. Since the return value indicates whether
we should reschedule, do that inside the function instead. Since not
all callers currently reschedule, this can increase the number of
reschedules due to deadline balancing. Too many reschedules are not a
correctness issue; too few are.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/deadline.c |   20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -298,9 +298,8 @@ static inline bool need_pull_dl_task(str
 	return false;
 }
 
-static inline int pull_dl_task(struct rq *rq)
+static inline void pull_dl_task(struct rq *rq)
 {
-	return 0;
 }
 
 static inline void set_post_schedule(struct rq *rq)
@@ -1041,7 +1040,7 @@ static void check_preempt_equal_dl(struc
 	resched_curr(rq);
 }
 
-static int pull_dl_task(struct rq *this_rq);
+static void pull_dl_task(struct rq *this_rq);
 
 #endif /* CONFIG_SMP */
 
@@ -1472,15 +1471,16 @@ static void push_dl_tasks(struct rq *rq)
 		;
 }
 
-static int pull_dl_task(struct rq *this_rq)
+static void pull_dl_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
 	struct task_struct *p;
+	bool resched = false;
 	struct rq *src_rq;
 	u64 dmin = LONG_MAX;
 
 	if (likely(!dl_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
 	 * Match the barrier from dl_set_overloaded; this guarantees that if we
@@ -1535,7 +1535,7 @@ static int pull_dl_task(struct rq *this_
 				   src_rq->curr->dl.deadline))
 			goto skip;
 
-		ret = 1;
+		resched = true;
 
 		deactivate_task(src_rq, p, 0);
 		set_task_cpu(p, this_cpu);
@@ -1548,7 +1548,8 @@ static int pull_dl_task(struct rq *this_
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_curr(this_rq);
 }
 
 /*
@@ -1704,8 +1705,7 @@ static void switched_from_dl(struct rq *
 	if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
 		return;
 
-	if (pull_dl_task(rq))
-		resched_curr(rq);
+	pull_dl_task(rq);
 }
 
 /*
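
The refactoring pattern is easy to see outside the kernel. Below is a
minimal standalone C sketch of the same transformation; fake_rq,
resched_curr() and the two pull functions are hypothetical stand-ins
for the scheduler structures, not the kernel's actual API:

#include <stdbool.h>
#include <stdio.h>

struct fake_rq {
	int cpu;
	int nr_pullable;	/* tasks that could be pulled over */
};

/* Hypothetical stand-in for the kernel's resched_curr(). */
static void resched_curr(struct fake_rq *rq)
{
	printf("cpu%d: reschedule\n", rq->cpu);
}

/* Before: every caller must check the result and reschedule. */
static int pull_task_old(struct fake_rq *rq)
{
	int ret = 0;

	while (rq->nr_pullable > 0) {
		rq->nr_pullable--;
		ret = 1;		/* like 'ret = 1' in the old pull_dl_task() */
	}
	return ret;
}

/*
 * After: the reschedule decision lives inside the function, so it
 * can be invoked as a fire-and-forget callback with no return value.
 */
static void pull_task_new(struct fake_rq *rq)
{
	bool resched = false;

	while (rq->nr_pullable > 0) {
		rq->nr_pullable--;
		resched = true;		/* like 'resched = true' above */
	}
	if (resched)
		resched_curr(rq);
}

int main(void)
{
	struct fake_rq rq = { .cpu = 0, .nr_pullable = 2 };

	if (pull_task_old(&rq))		/* easy for a caller to forget */
		resched_curr(&rq);

	rq.nr_pullable = 2;
	pull_task_new(&rq);		/* nothing for the caller to forget */
	return 0;
}

As the changelog notes, the cost of the new shape is that callers which
previously ignored the return value may now trigger a reschedule they
did not before; that errs on the safe side.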