linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched: Make nr_uninterruptible count a signed value
@ 2012-05-08 21:39 Diwakar Tundlam
  2012-05-08 21:56 ` Peter Zijlstra
  0 siblings, 1 reply; 14+ messages in thread
From: Diwakar Tundlam @ 2012-05-08 21:39 UTC (permalink / raw)
  To: 'Peter Zijlstra'
  Cc: 'Ingo Molnar', 'David Rientjes',
	'linux-kernel@vger.kernel.org',
	Peter De Schrijver

Declare nr_uninterruptible as a signed long to avoid the garbage
values seen in cat /proc/sched_debug when a task is moved to the run
queue of a newly onlined core. The per-CPU field is only one part of
a global counter: a task may increment it on one CPU and decrement it
on another after migrating, so an individual runqueue's count can
legitimately go negative, and only the total sum over all CPUs
matters.

Signed-off-by: Diwakar Tundlam <dtundlam@nvidia.com>
---
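A minimal userspace sketch of the pattern in question, using
hypothetical names (this is not the scheduler code itself): each
per-CPU slot holds a signed count that may legitimately go negative,
and only the lockless sum over all slots is meaningful, clamped at
zero because racy reads can transiently undershoot.

#include <stdio.h>

#define NR_CPUS 4

/* Each slot may go negative: a task can be counted up on one CPU
 * and counted down on another CPU after migrating. */
static long per_cpu_uninterruptible[NR_CPUS] = { 3, -2, 1, -1 };

static unsigned long total_uninterruptible(void)
{
	long sum = 0;
	int i;

	for (i = 0; i < NR_CPUS; i++)
		sum += per_cpu_uninterruptible[i];

	/* Lockless readers may see a transiently negative sum;
	 * clamp it rather than returning a huge unsigned value. */
	if (sum < 0)
		sum = 0;

	return (unsigned long)sum;
}

int main(void)
{
	printf("total = %lu\n", total_uninterruptible()); /* total = 1 */
	return 0;
}
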
 kernel/sched/core.c  |    7 ++++---
 kernel/sched/sched.h |    2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8d5eef6..7a64b5b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2114,7 +2114,8 @@ unsigned long nr_running(void)
 
 unsigned long nr_uninterruptible(void)
 {
-	unsigned long i, sum = 0;
+	unsigned long i;
+	long sum = 0;
 
 	for_each_possible_cpu(i)
 		sum += cpu_rq(i)->nr_uninterruptible;
@@ -2123,7 +2124,7 @@ unsigned long nr_uninterruptible(void)
 	 * Since we read the counters lockless, it might be slightly
 	 * inaccurate. Do not allow it to go below zero though:
 	 */
-	if (unlikely((long)sum < 0))
+	if (unlikely(sum < 0))
 		sum = 0;
 
 	return sum;
@@ -2174,7 +2175,7 @@ static long calc_load_fold_active(struct rq *this_rq)
 	long nr_active, delta = 0;
 
 	nr_active = this_rq->nr_running;
-	nr_active += (long) this_rq->nr_uninterruptible;
+	nr_active += this_rq->nr_uninterruptible;
 
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fb3acba..2668b07 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -385,7 +385,7 @@ struct rq {
 	 * one CPU and if it got migrated afterwards it may decrease
 	 * it on another CPU. Always updated under the runqueue lock:
 	 */
-	unsigned long nr_uninterruptible;
+	long nr_uninterruptible;
 
 	struct task_struct *curr, *idle, *stop;
 	unsigned long next_balance;
-- 
1.7.4.1
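
The calc_load_fold_active() hunk above is the other consumer of the
counter: the fold reports how far the runqueue's "active" count has
moved since the last sample and advances the baseline. A hedged
standalone sketch of that bookkeeping, with hypothetical names:

#include <stdio.h>

/* Hypothetical stand-in for the struct rq fields involved. */
struct runqueue_sample {
	unsigned long nr_running;
	long nr_uninterruptible;	/* signed, as in the patch */
	long calc_load_active;		/* baseline from the last fold */
};

/* Return the change in active tasks since the previous fold and
 * remember the new baseline, mirroring calc_load_fold_active(). */
static long fold_active(struct runqueue_sample *rq)
{
	long nr_active, delta = 0;

	nr_active = rq->nr_running;
	nr_active += rq->nr_uninterruptible;	/* no cast needed now */

	if (nr_active != rq->calc_load_active) {
		delta = nr_active - rq->calc_load_active;
		rq->calc_load_active = nr_active;
	}
	return delta;
}

int main(void)
{
	struct runqueue_sample rq = {
		.nr_running = 2, .nr_uninterruptible = -1,
		.calc_load_active = 0,
	};

	printf("delta = %ld\n", fold_active(&rq));	/* delta = 1 */
	printf("delta = %ld\n", fold_active(&rq));	/* delta = 0 */
	return 0;
}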

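The "garbage values" the changelog mentions are what a logically
negative counter looks like when interpreted and printed as unsigned.
A tiny demonstration (assuming a 64-bit long; the exact wrapped value
differs on 32-bit):

#include <stdio.h>

int main(void)
{
	long counter = -3;	/* e.g. after cross-CPU decrements */

	printf("signed:   %ld\n", counter);		   /* -3 */
	printf("unsigned: %lu\n", (unsigned long)counter); /* 18446744073709551613 */
	return 0;
}
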


Thread overview: 14+ messages
2012-05-08 21:39 [PATCH] sched: Make nr_uninterruptible count a signed value Diwakar Tundlam
2012-05-08 21:56 ` Peter Zijlstra
2012-05-08 22:14   ` Diwakar Tundlam
2012-05-08 22:27     ` Peter Zijlstra
2012-05-08 22:29       ` Peter Zijlstra
2012-05-08 22:46         ` Diwakar Tundlam
2012-05-09  7:49           ` Michael Wang
2012-05-09 18:55             ` Diwakar Tundlam
2012-05-10  4:46               ` Michael Wang
2012-05-09  8:11           ` Peter Zijlstra
2012-05-09 19:04             ` Diwakar Tundlam
2012-05-10  3:41             ` Michael Wang
2012-05-10  9:46               ` Peter Zijlstra
2012-05-11  2:19                 ` Michael Wang
