From: Matt Fleming <matt@codeblueprint.co.uk>
To: Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@kernel.org>
Cc: linux-kernel@vger.kernel.org,
Byungchul Park <byungchul.park@lge.com>,
Frederic Weisbecker <fweisbec@gmail.com>,
Luca Abeni <luca.abeni@unitn.it>,
"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
Rik van Riel <riel@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Wanpeng Li <wanpeng.li@hotmail.com>,
Yuyang Du <yuyang.du@intel.com>,
Mel Gorman <mgorman@techsingularity.net>,
Mike Galbraith <umgwanakikbuti@gmail.com>,
Matt Fleming <matt@codeblueprint.co.uk>
Subject: [RFC][PATCH 4/5] sched/fair: Push rq lock pin/unpin into idle_balance()
Date: Thu, 12 May 2016 20:49:52 +0100 [thread overview]
Message-ID: <1463082593-27777-5-git-send-email-matt@codeblueprint.co.uk> (raw)
In-Reply-To: <1463082593-27777-1-git-send-email-matt@codeblueprint.co.uk>

Future patches will emit warnings if rq_clock() is called before
update_rq_clock() inside a rq_pin_lock()/rq_unpin_lock() pair.

Since there is only one caller of idle_balance() we can push the
unpin/repin there.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Yuyang Du <yuyang.du@intel.com>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
---
kernel/sched/fair.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7f776a99bde0..217e3a9d78db 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3095,7 +3095,7 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
return cfs_rq->avg.load_avg;
}
-static int idle_balance(struct rq *this_rq);
+static int idle_balance(struct rq *this_rq, struct rq_flags *rf);
#else /* CONFIG_SMP */
@@ -3118,7 +3118,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
static inline void
detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
-static inline int idle_balance(struct rq *rq)
+static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
{
return 0;
}
@@ -5701,15 +5701,8 @@ simple:
return p;
idle:
- /*
- * This is OK, because current is on_cpu, which avoids it being picked
- * for load-balance and preemption/IRQs are still disabled avoiding
- * further scheduler activity on it and we're being very careful to
- * re-start the picking loop.
- */
- rq_unpin_lock(rq, rf);
- new_tasks = idle_balance(rq);
- rq_repin_lock(rq, rf);
+ new_tasks = idle_balance(rq, rf);
+
/*
* Because idle_balance() releases (and re-acquires) rq->lock, it is
* possible for any higher priority task to appear. In that case we
@@ -7641,7 +7634,7 @@ update_next_balance(struct sched_domain *sd, int cpu_busy, unsigned long *next_b
* idle_balance is called by schedule() if this_cpu is about to become
* idle. Attempts to pull tasks from other CPUs.
*/
-static int idle_balance(struct rq *this_rq)
+static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
{
unsigned long next_balance = jiffies + HZ;
int this_cpu = this_rq->cpu;
@@ -7655,6 +7648,14 @@ static int idle_balance(struct rq *this_rq)
*/
this_rq->idle_stamp = rq_clock(this_rq);
+ /*
+ * This is OK, because current is on_cpu, which avoids it being picked
+ * for load-balance and preemption/IRQs are still disabled avoiding
+ * further scheduler activity on it and we're being very careful to
+ * re-start the picking loop.
+ */
+ rq_unpin_lock(this_rq, rf);
+
if (this_rq->avg_idle < sysctl_sched_migration_cost ||
!this_rq->rd->overload) {
rcu_read_lock();
@@ -7732,6 +7733,8 @@ out:
if (pulled_task)
this_rq->idle_stamp = 0;
+ rq_repin_lock(this_rq, rf);
+
return pulled_task;
}
--
2.7.3
Thread overview: 13+ messages
2016-05-12 19:49 [RFC][PATCH 0/5] sched: Diagnostic checks for missing rq clock updates Matt Fleming
2016-05-12 19:49 ` [RFC][PATCH 1/5] sched/fair: Update the rq clock before detaching tasks Matt Fleming
2016-05-12 19:49 ` [RFC][PATCH 2/5] sched: Add wrappers for lockdep_(un)pin_lock() Matt Fleming
2016-05-12 19:49 ` [RFC][PATCH 3/5] sched/core: Reset RQCF_ACT_SKIP before unpinning rq->lock Matt Fleming
2016-05-12 19:49 ` Matt Fleming [this message]
2016-05-12 19:49 ` [RFC][PATCH 5/5] sched/core: Add debug code to catch missing update_rq_clock() Matt Fleming
2016-05-15 2:14 ` Yuyang Du
2016-05-16 9:46 ` Matt Fleming
2016-05-16 20:11 ` Yuyang Du
2016-05-17 12:24 ` Matt Fleming
2016-05-17 19:01 ` Yuyang Du
2016-05-18 8:41 ` Matt Fleming
2016-05-18 22:51 ` Yuyang Du