linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/1] sched: unthrottle rt runqueues in __disable_runtime()
@ 2012-08-09 22:34 Peter Boonstoppel
  2012-08-16  8:21 ` Peter Zijlstra
  2012-09-04 18:45 ` [tip:sched/core] sched: Unthrottle " tip-bot for Peter Boonstoppel
From: Peter Boonstoppel @ 2012-08-09 22:34 UTC
  To: linux-kernel, Peter Zijlstra, Ingo Molnar

migrate_tasks() uses _pick_next_task_rt() to get tasks from the
real-time runqueues to be migrated. When an rt_rq is throttled,
_pick_next_task_rt() won't return anything, in which case
migrate_tasks() can't move all threads over and gets stuck in an
infinite loop.

Instead, unthrottle the rt runqueues before migrating tasks.

Additionally, move unthrottle_offline_cfs_rqs() to rq_offline_fair().

Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
---
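Note for reviewers: a toy userspace model may make the failure mode
concrete. A throttled runqueue hides its queued tasks from the picker,
so a drain loop that trusts the picker never terminates unless the
queue is unthrottled first. Illustrative sketch only: toy_rt_rq,
pick_next(), drain() and the bail-out counter are invented for the
example and are not kernel code.

#include <stdio.h>
#include <stdbool.h>

struct toy_rt_rq {
	int nr_running;		/* tasks still queued on the dead CPU */
	bool throttled;		/* models rt_rq->rt_throttled */
};

/* Models _pick_next_task_rt(): a throttled queue yields nothing,
 * even though nr_running is non-zero. */
static bool pick_next(struct toy_rt_rq *rt_rq)
{
	return rt_rq->nr_running > 0 && !rt_rq->throttled;
}

/* Models the migrate_tasks() drain loop; the bail-out counter exists
 * only so the demo terminates -- the real loop spins forever. */
static void drain(struct toy_rt_rq *rt_rq, bool unthrottle_first)
{
	int spins = 0;

	if (unthrottle_first)
		rt_rq->throttled = false;	/* patched behaviour */

	while (rt_rq->nr_running) {
		if (!pick_next(rt_rq)) {
			if (++spins > 3) {
				printf("stuck: %d tasks unreachable\n",
				       rt_rq->nr_running);
				return;
			}
			continue;
		}
		rt_rq->nr_running--;		/* "migrated" one task */
	}
	printf("all tasks migrated\n");
}

int main(void)
{
	struct toy_rt_rq rq = { .nr_running = 3, .throttled = true };

	drain(&rq, false);	/* unpatched order: gets stuck */

	rq.nr_running = 3;
	drain(&rq, true);	/* patched order: queue fully drains */
	return 0;
}
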
 kernel/sched/core.c  |    3 ---
 kernel/sched/fair.c  |    7 +++++--
 kernel/sched/rt.c    |    1 +
 kernel/sched/sched.h |    1 -
 4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2df035a..2e7ecff 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5221,9 +5221,6 @@ static void migrate_tasks(unsigned int dead_cpu)
 	 */
 	rq->stop = NULL;
 
-	/* Ensure any throttled groups are reachable by pick_next_task */
-	unthrottle_offline_cfs_rqs(rq);
-
 	for ( ; ; ) {
 		/*
 		 * There's this thread running, bail when that's the only
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3704ad3..dc8341b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2057,7 +2057,7 @@ static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	hrtimer_cancel(&cfs_b->slack_timer);
 }
 
-void unthrottle_offline_cfs_rqs(struct rq *rq)
+static void unthrottle_offline_cfs_rqs(struct rq *rq)
 {
 	struct cfs_rq *cfs_rq;
 
@@ -2111,7 +2111,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 	return NULL;
 }
 static inline void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
-void unthrottle_offline_cfs_rqs(struct rq *rq) {}
+static inline void unthrottle_offline_cfs_rqs(struct rq *rq) {}
 
 #endif /* CONFIG_CFS_BANDWIDTH */
 
@@ -5086,6 +5086,9 @@ static void rq_online_fair(struct rq *rq)
 static void rq_offline_fair(struct rq *rq)
 {
 	update_sysctl();
+
+	/* Ensure any throttled groups are reachable by pick_next_task */
+	unthrottle_offline_cfs_rqs(rq);
 }
 
 #endif /* CONFIG_SMP */
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 573e1ca..b9a94fb 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -691,6 +691,7 @@ balanced:
 		 * runtime - in which case borrowing doesn't make sense.
 		 */
 		rt_rq->rt_runtime = RUNTIME_INF;
+		rt_rq->rt_throttled = 0;
 		raw_spin_unlock(&rt_rq->rt_runtime_lock);
 		raw_spin_unlock(&rt_b->rt_runtime_lock);
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4134d37..5d9aabe 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1167,7 +1167,6 @@ extern void print_rt_stats(struct seq_file *m, int cpu);
 
 extern void init_cfs_rq(struct cfs_rq *cfs_rq);
 extern void init_rt_rq(struct rt_rq *rt_rq, struct rq *rq);
-extern void unthrottle_offline_cfs_rqs(struct rq *rq);
 
 extern void account_cfs_bandwidth_used(int enabled, int was_enabled);
 
-- 
1.7.0.4


* Re: [PATCH 1/1] sched: unthrottle rt runqueues in __disable_runtime()
  2012-08-09 22:34 [PATCH 1/1] sched: unthrottle rt runqueues in __disable_runtime() Peter Boonstoppel
@ 2012-08-16  8:21 ` Peter Zijlstra
  2012-09-04 18:45 ` [tip:sched/core] sched: Unthrottle " tip-bot for Peter Boonstoppel
From: Peter Zijlstra @ 2012-08-16  8:21 UTC
  To: Peter Boonstoppel; +Cc: linux-kernel, Ingo Molnar

On Thu, 2012-08-09 at 15:34 -0700, Peter Boonstoppel wrote:
> migrate_tasks() uses _pick_next_task_rt() to get tasks from the
> real-time runqueues to be migrated. When an rt_rq is throttled,
> _pick_next_task_rt() won't return anything, in which case
> migrate_tasks() can't move all threads over and gets stuck in an
> infinite loop.
> 
> Instead, unthrottle the rt runqueues before migrating tasks.
> 
> Additionally, move unthrottle_offline_cfs_rqs() to rq_offline_fair().

Thanks!


* [tip:sched/core] sched: Unthrottle rt runqueues in __disable_runtime()
  2012-08-09 22:34 [PATCH 1/1] sched: unthrottle rt runqueues in __disable_runtime() Peter Boonstoppel
  2012-08-16  8:21 ` Peter Zijlstra
@ 2012-09-04 18:45 ` tip-bot for Peter Boonstoppel
From: tip-bot for Peter Boonstoppel @ 2012-09-04 18:45 UTC
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, pjt, pboonstoppel, tglx

Commit-ID:  a4c96ae319b8047f62dedbe1eac79e321c185749
Gitweb:     http://git.kernel.org/tip/a4c96ae319b8047f62dedbe1eac79e321c185749
Author:     Peter Boonstoppel <pboonstoppel@nvidia.com>
AuthorDate: Thu, 9 Aug 2012 15:34:47 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 4 Sep 2012 14:30:30 +0200

sched: Unthrottle rt runqueues in __disable_runtime()

migrate_tasks() uses _pick_next_task_rt() to get tasks from the
real-time runqueues to be migrated. When an rt_rq is throttled,
_pick_next_task_rt() won't return anything, in which case
migrate_tasks() can't move all threads over and gets stuck in an
infinite loop.

Instead, unthrottle the rt runqueues before migrating tasks.

Additionally, move unthrottle_offline_cfs_rqs() to rq_offline_fair().

Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/5FBF8E85CA34454794F0F7ECBA79798F379D3648B7@HQMAIL04.nvidia.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
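For context on the fair.c hunks: the CPU-offline path reaches the
now-static helper through the per-class rq_offline callback, so moving
the call into rq_offline_fair() keeps it on the hotplug path while
letting fair.c stop exporting the symbol. A simplified paraphrase of
set_rq_offline() from kernel/sched/core.c of this era (abridged, not a
verbatim copy):

static void set_rq_offline(struct rq *rq)
{
	if (rq->online) {
		const struct sched_class *class;

		for_each_class(class) {
			if (class->rq_offline)
				/* fair_sched_class -> rq_offline_fair() */
				class->rq_offline(rq);
		}

		cpumask_clear_cpu(rq->cpu, rq->rd->online);
		rq->online = 0;
	}
}

This is also why the extern declaration can drop out of sched.h: core.c
no longer calls unthrottle_offline_cfs_rqs() directly.
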
 kernel/sched/core.c  |    3 ---
 kernel/sched/fair.c  |    7 +++++--
 kernel/sched/rt.c    |    1 +
 kernel/sched/sched.h |    1 -
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 207a81c..a4ea245 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5342,9 +5342,6 @@ static void migrate_tasks(unsigned int dead_cpu)
 	 */
 	rq->stop = NULL;
 
-	/* Ensure any throttled groups are reachable by pick_next_task */
-	unthrottle_offline_cfs_rqs(rq);
-
 	for ( ; ; ) {
 		/*
 		 * There's this thread running, bail when that's the only
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c219bf8..86ad83c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2052,7 +2052,7 @@ static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	hrtimer_cancel(&cfs_b->slack_timer);
 }
 
-void unthrottle_offline_cfs_rqs(struct rq *rq)
+static void unthrottle_offline_cfs_rqs(struct rq *rq)
 {
 	struct cfs_rq *cfs_rq;
 
@@ -2106,7 +2106,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 	return NULL;
 }
 static inline void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
-void unthrottle_offline_cfs_rqs(struct rq *rq) {}
+static inline void unthrottle_offline_cfs_rqs(struct rq *rq) {}
 
 #endif /* CONFIG_CFS_BANDWIDTH */
 
@@ -4956,6 +4956,9 @@ static void rq_online_fair(struct rq *rq)
 static void rq_offline_fair(struct rq *rq)
 {
 	update_sysctl();
+
+	/* Ensure any throttled groups are reachable by pick_next_task */
+	unthrottle_offline_cfs_rqs(rq);
 }
 
 #endif /* CONFIG_SMP */
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 944cb68..e0b7ba9 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -691,6 +691,7 @@ balanced:
 		 * runtime - in which case borrowing doesn't make sense.
 		 */
 		rt_rq->rt_runtime = RUNTIME_INF;
+		rt_rq->rt_throttled = 0;
 		raw_spin_unlock(&rt_rq->rt_runtime_lock);
 		raw_spin_unlock(&rt_b->rt_runtime_lock);
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f6714d0..0848fa3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1144,7 +1144,6 @@ extern void print_rt_stats(struct seq_file *m, int cpu);
 
 extern void init_cfs_rq(struct cfs_rq *cfs_rq);
 extern void init_rt_rq(struct rt_rq *rt_rq, struct rq *rq);
-extern void unthrottle_offline_cfs_rqs(struct rq *rq);
 
 extern void account_cfs_bandwidth_used(int enabled, int was_enabled);
 


* [PATCH 1/1] sched: unthrottle rt runqueues in __disable_runtime()
  2012-05-25 14:49 ` Peter Zijlstra
@ 2012-06-22 17:49   ` Peter Boonstoppel
From: Peter Boonstoppel @ 2012-06-22 17:49 UTC
  To: Peter Zijlstra
  Cc: mingo, pjt, tglx, seto.hidetoshi, linux-kernel, Peter De Schrijver

migrate_tasks() uses _pick_next_task_rt() to get tasks from the
real-time runqueues to be migrated. When an rt_rq is throttled,
_pick_next_task_rt() won't return anything, in which case
migrate_tasks() can't move all threads over and gets stuck in an
infinite loop.

Instead, unthrottle the rt runqueues before migrating tasks.

Additionally, move unthrottle_offline_cfs_rqs() to rq_offline_fair().

Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
---
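Note for reviewers: the throttle test that starves the loop sits at the
top of the picker. A simplified paraphrase of _pick_next_task_rt() from
kernel/sched/rt.c of this era; pick_highest_queued() is an invented
stand-in for the real pick_next_rt_entity() descent:

static struct task_struct *_pick_next_task_rt(struct rq *rq)
{
	struct rt_rq *rt_rq = &rq->rt;

	if (!rt_rq->rt_nr_running)
		return NULL;

	if (rt_rq_throttled(rt_rq))
		return NULL;	/* tasks stay queued but are unpickable */

	/* elided: descend the group hierarchy and return the
	 * highest-priority queued task */
	return pick_highest_queued(rt_rq);
}

Clearing rt_rq->rt_throttled in __disable_runtime() (alongside setting
rt_runtime to RUNTIME_INF) makes the second test pass again, so
migrate_tasks() can drain the queue.
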
 kernel/sched/core.c  |    3 ---
 kernel/sched/fair.c  |    7 +++++--
 kernel/sched/rt.c    |    1 +
 kernel/sched/sched.h |    1 -
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2df035a..2e7ecff 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5221,9 +5221,6 @@ static void migrate_tasks(unsigned int dead_cpu)
 	 */
 	rq->stop = NULL;
 
-	/* Ensure any throttled groups are reachable by pick_next_task */
-	unthrottle_offline_cfs_rqs(rq);
-
 	for ( ; ; ) {
 		/*
 		 * There's this thread running, bail when that's the only
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3704ad3..dc8341b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2057,7 +2057,7 @@ static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	hrtimer_cancel(&cfs_b->slack_timer);
 }
 
-void unthrottle_offline_cfs_rqs(struct rq *rq)
+static void unthrottle_offline_cfs_rqs(struct rq *rq)
 {
 	struct cfs_rq *cfs_rq;
 
@@ -2111,7 +2111,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 	return NULL;
 }
 static inline void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
-void unthrottle_offline_cfs_rqs(struct rq *rq) {}
+static inline void unthrottle_offline_cfs_rqs(struct rq *rq) {}
 
 #endif /* CONFIG_CFS_BANDWIDTH */
 
@@ -5086,6 +5086,9 @@ static void rq_online_fair(struct rq *rq)
 static void rq_offline_fair(struct rq *rq)
 {
 	update_sysctl();
+
+	/* Ensure any throttled groups are reachable by pick_next_task */
+	unthrottle_offline_cfs_rqs(rq);
 }
 
 #endif /* CONFIG_SMP */
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 573e1ca..b9a94fb 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -691,6 +691,7 @@ balanced:
 		 * runtime - in which case borrowing doesn't make sense.
 		 */
 		rt_rq->rt_runtime = RUNTIME_INF;
+		rt_rq->rt_throttled = 0;
 		raw_spin_unlock(&rt_rq->rt_runtime_lock);
 		raw_spin_unlock(&rt_b->rt_runtime_lock);
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4134d37..5d9aabe 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1167,7 +1167,6 @@ extern void print_rt_stats(struct seq_file *m, int cpu);
 
 extern void init_cfs_rq(struct cfs_rq *cfs_rq);
 extern void init_rt_rq(struct rt_rq *rt_rq, struct rq *rq);
-extern void unthrottle_offline_cfs_rqs(struct rq *rq);
 
 extern void account_cfs_bandwidth_used(int enabled, int was_enabled);
 
-- 
1.7.0.4


Thread overview:
2012-08-09 22:34 [PATCH 1/1] sched: unthrottle rt runqueues in __disable_runtime() Peter Boonstoppel
2012-08-16  8:21 ` Peter Zijlstra
2012-09-04 18:45 ` [tip:sched/core] sched: Unthrottle " tip-bot for Peter Boonstoppel
  -- earlier messages from the preceding thread on this subject --
2012-05-18 18:56 [PATCH] sched: unthrottle rt_rq in migrate_tasks() Peter Boonstoppel
2012-05-25 14:49 ` Peter Zijlstra
2012-06-22 17:49   ` [PATCH 1/1] sched: unthrottle rt runqueues in __disable_runtime() Peter Boonstoppel
