* [PATCH 0/2] sched: misc. fix and cleanup for sched-devel
From: Gregory Haskins @ 2007-12-06 15:27 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, rostedt, ghaskins

Hi Ingo,
   Here are a few more small patches for consideration in sched-devel.

The second patch should be acked by Steven before it is accepted, to make sure I
haven't misunderstood anything here; but I believe that logic is now defunct,
since he moved away from the overlapped-cpuset work some time ago.

Regards,
-Greg

* [PATCH 1/2] SCHED - Only adjust overload state when changing
From: Gregory Haskins @ 2007-12-06 15:27 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, rostedt, ghaskins

The overload set/clear operations were idempotent when this logic was first
implemented, but that is no longer true since the atomic counter was added, and
the logic was never updated to account for that change: calling set or clear
redundantly skews the count.  So only adjust the overload state when it is
actually changing, to avoid getting out of sync.
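
To illustrate why the guard matters, here is a minimal user-space sketch; it is
not the kernel code, and the names fake_rq, update_overload() and
overload_count are made up for the example.  Without the guard, repeated
updates while the runqueue stays overloaded keep bumping the atomic counter,
so it drifts away from the number of overloaded runqueues; with the guard it
only moves on a real state change:

/*
 * Minimal user-space sketch only; not the kernel implementation.  The names
 * (fake_rq, update_overload, overload_count) are assumptions for the example.
 * Build with: cc -std=c11 overload.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int overload_count;       /* "how many runqueues are overloaded" */

struct fake_rq {
        int  nr_migratory;
        int  nr_running;
        bool overloaded;                /* per-rq cached state used as the guard */
};

static void update_overload(struct fake_rq *rq, bool guarded)
{
        bool should_overload = rq->nr_migratory && rq->nr_running > 1;

        if (should_overload) {
                if (!guarded || !rq->overloaded) {
                        atomic_fetch_add(&overload_count, 1);  /* like rt_set_overload() */
                        rq->overloaded = true;
                }
        } else {
                if (!guarded || rq->overloaded) {
                        atomic_fetch_sub(&overload_count, 1);  /* like rt_clear_overload() */
                        rq->overloaded = false;
                }
        }
}

int main(void)
{
        struct fake_rq rq = { .nr_migratory = 1, .nr_running = 2 };
        int i;

        /* Unguarded (the old behaviour): three updates while overloaded. */
        for (i = 0; i < 3; i++)
                update_overload(&rq, false);
        printf("unguarded count = %d\n", atomic_load(&overload_count));  /* 3 */

        /* Guarded (the patched behaviour): counter only moves on a change. */
        atomic_store(&overload_count, 0);
        rq.overloaded = false;
        for (i = 0; i < 3; i++)
                update_overload(&rq, true);
        printf("guarded count   = %d\n", atomic_load(&overload_count));  /* 1 */
        return 0;
}

The diff below applies exactly that kind of guard to update_rt_migration().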

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
---

 kernel/sched_rt.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 4cbde83..53cd9e8 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -34,9 +34,11 @@ static inline void rt_clear_overload(struct rq *rq)
 static void update_rt_migration(struct rq *rq)
 {
 	if (rq->rt.rt_nr_migratory && (rq->rt.rt_nr_running > 1)) {
-		rt_set_overload(rq);
-		rq->rt.overloaded = 1;
-	} else {
+		if (!rq->rt.overloaded) {
+			rt_set_overload(rq);
+			rq->rt.overloaded = 1;
+		}
+	} else if (rq->rt.overloaded) {
 		rt_clear_overload(rq);
 		rq->rt.overloaded = 0;
 	}


* [PATCH 2/2] SCHED - Clean up some old cpuset logic
From: Gregory Haskins @ 2007-12-06 15:28 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, rostedt, ghaskins

Early prototypes included support for overlapping-cpuset based rto logic.
That support is no longer used, so clean it up.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
---

 kernel/sched_rt.c |   32 --------------------------------
 1 files changed, 0 insertions(+), 32 deletions(-)

diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 53cd9e8..65cbb78 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -586,37 +586,6 @@ static int pull_rt_task(struct rq *this_rq)
 			continue;
 
 		src_rq = cpu_rq(cpu);
-		if (unlikely(src_rq->rt.rt_nr_running <= 1)) {
-			/*
-			 * It is possible that overlapping cpusets
-			 * will miss clearing a non overloaded runqueue.
-			 * Clear it now.
-			 */
-			if (double_lock_balance(this_rq, src_rq)) {
-				/* unlocked our runqueue lock */
-				struct task_struct *old_next = next;
-
-				next = pick_next_task_rt(this_rq);
-				if (next != old_next)
-					ret = 1;
-			}
-			if (likely(src_rq->rt.rt_nr_running <= 1)) {
-				/*
-				 * Small chance that this_rq->curr changed
-				 * but it's really harmless here.
-				 */
-				rt_clear_overload(this_rq);
-			} else {
-				/*
-				 * Heh, the src_rq is now overloaded, since
-				 * we already have the src_rq lock, go straight
-				 * to pulling tasks from it.
-				 */
-				goto try_pulling;
-			}
-			spin_unlock(&src_rq->lock);
-			continue;
-		}
 
 		/*
 		 * We can potentially drop this_rq's lock in
@@ -641,7 +610,6 @@ static int pull_rt_task(struct rq *this_rq)
 			continue;
 		}
 
- try_pulling:
 		p = pick_next_highest_task_rt(src_rq, this_cpu);
 
 		/*
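
One subtlety in the block removed above: it had to cope with
double_lock_balance() possibly dropping this_rq's lock in order to take both
runqueue locks in a fixed order, which is why it re-ran pick_next_task_rt()
whenever the helper reported that the lock had been dropped.  Below is a
minimal user-space sketch of that pattern using pthreads; it is not the
kernel's double_lock_balance(), and the struct and helper names are
simplified stand-ins for illustration:

/*
 * User-space sketch of the "double lock in a fixed order, report if we had
 * to drop our own lock" pattern; not the kernel implementation.
 * Build with: cc -std=c11 -pthread dlock.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct fake_rq {
        pthread_mutex_t lock;
};

/*
 * Take both locks, ordering by address to avoid ABBA deadlock.  Returns 1 if
 * this_rq->lock had to be dropped and re-acquired, meaning anything the
 * caller read under it may be stale and must be revalidated; 0 otherwise.
 * The caller must already hold this_rq->lock.
 */
static int double_lock(struct fake_rq *this_rq, struct fake_rq *src_rq)
{
        int dropped = 0;

        if (pthread_mutex_trylock(&src_rq->lock) != 0) {
                if ((uintptr_t)src_rq < (uintptr_t)this_rq) {
                        /* Wrong order: drop ours, then take both in order. */
                        pthread_mutex_unlock(&this_rq->lock);
                        pthread_mutex_lock(&src_rq->lock);
                        pthread_mutex_lock(&this_rq->lock);
                        dropped = 1;
                } else {
                        pthread_mutex_lock(&src_rq->lock);
                }
        }
        return dropped;
}

int main(void)
{
        struct fake_rq a = { .lock = PTHREAD_MUTEX_INITIALIZER };
        struct fake_rq b = { .lock = PTHREAD_MUTEX_INITIALIZER };

        pthread_mutex_lock(&a.lock);

        /* Single-threaded demo, so the trylock succeeds and nothing drops. */
        if (double_lock(&a, &b))
                printf("a.lock was dropped: revalidate state read under it\n");
        else
                printf("both locks taken without dropping a.lock\n");

        pthread_mutex_unlock(&b.lock);
        pthread_mutex_unlock(&a.lock);
        return 0;
}

With the overlapping-cpuset scheme gone there is no stale overload state to
repair from this path, so the whole branch (and the try_pulling label it
jumped to) can simply go away.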

