* [git pull request] scheduler updates
@ 2007-07-11 19:38 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-11 19:38 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Mike Galbraith, Andrew Morton


Linus, please pull the latest sched.git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

It includes 5 small fixes from the CFS merge fallout: Mike noticed a
typo in the prio_to_wmult[] lookup table (the visible effects of this
bug were minor); the scheduler is now allowed to default to a
granularity larger than 10 msecs, which should help larger boxes
(without changing any of the tunings on smaller boxes); and there are
also show_task()/show_tasks() output fixes and some small cleanups.

Thanks,

	Ingo

----------------------->
Mike Galbraith (1):
      sched: fix prio_to_wmult[] for nice 1

Ingo Molnar (4):
      sched: allow larger granularity
      sched: remove stale version info from kernel/sched_debug.c
      sched: fix show_task()/show_tasks() output
      sched: small topology.h cleanup

 include/linux/topology.h |    2 +-
 kernel/sched.c           |   30 ++++++++++++------------------
 kernel/sched_debug.c     |    2 +-
 3 files changed, 14 insertions(+), 20 deletions(-)


* Re: [git pull request] scheduler updates
  2007-08-24 18:09 ` Linus Torvalds
  2007-08-24 19:37   ` Ingo Molnar
@ 2007-08-31  1:58   ` Roman Zippel
  1 sibling, 0 replies; 20+ messages in thread
From: Roman Zippel @ 2007-08-31  1:58 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Ingo Molnar, Andrew Morton, linux-kernel

Hi,

On Friday 24 August 2007, Linus Torvalds wrote:

> Why the hell can't you just make the code sane and do what the comment
> *says* it does, and just admit that HZ has nothing what-so-ever to do with
> that thing, and then you do
>
> 	unsigned int sysctl_sched_granularity __read_mostly = 3000000ULL;
>
> and be done with it. Instead of this *insane* expectation that HZ is
> always 1000, and any other value means that you want bigger granularity,
> which is not true and makes no sense.

I'd actually like to base this on the cpu frequency - or the number of
cycles, to be precise - e.g. with 10^7 cycles this would be 100ms at
100MHz and 10ms at 1GHz.
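
A minimal sketch of that idea (a hypothetical helper, not code from this
thread): derive the granularity from a fixed cycle budget using the
detected CPU frequency, so faster CPUs get a proportionally shorter
slice in wall-clock time:

	#define SCHED_GRANULARITY_CYCLES	10000000ULL	/* 10^7 cycles */

	static unsigned long long sched_granularity_ns(unsigned long cpu_khz)
	{
		/* cycles / kHz = msecs; scale by 10^6 to get nsecs */
		return SCHED_GRANULARITY_CYCLES * 1000000ULL / cpu_khz;
	}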

bye, Roman


* Re: [git pull request] scheduler updates
  2007-08-28 14:46   ` Ingo Molnar
@ 2007-08-28 14:55     ` Mike Galbraith
  0 siblings, 0 replies; 20+ messages in thread
From: Mike Galbraith @ 2007-08-28 14:55 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Linus Torvalds, Andrew Morton, linux-kernel, Peter Zijlstra

On Tue, 2007-08-28 at 16:46 +0200, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
> 
> > On Tue, 2007-08-28 at 13:32 +0200, Ingo Molnar wrote:
> > > Linus, please pull the latest scheduler git tree from:
> > > 
> > >   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git
> > > 
> > > no big changes - 5 small fixes and 1 small cleanup:
> > 
> > FWIW, I spent a few hours testing these patches with various loads, 
> > and all was peachy here.  No multimedia or interactivity aberrations 
> > noted.
> 
> great! Btw., there's another refinement Peter and I are working on (see
> the patch below): to place new tasks into the existing 'scheduling flow'
> in a more seamless way. In practice this should mean fewer firefox spikes
> during a kbuild workload. If you have some time to try it, could you add
> the patch below to your tree too, and see what happens during fork-happy
> workloads? It does not seem to be overly urgent to apply at the moment,
> but it is a nice touch i think.

Sure, I'll give it a try.  (i was just adding likely post-24 merge
candidates to give them some runtime anyway - one more for the queue.)

	-Mike



* Re: [git pull request] scheduler updates
  2007-08-28 14:11 ` Mike Galbraith
@ 2007-08-28 14:46   ` Ingo Molnar
  2007-08-28 14:55     ` Mike Galbraith
  0 siblings, 1 reply; 20+ messages in thread
From: Ingo Molnar @ 2007-08-28 14:46 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Linus Torvalds, Andrew Morton, linux-kernel, Peter Zijlstra


* Mike Galbraith <efault@gmx.de> wrote:

> On Tue, 2007-08-28 at 13:32 +0200, Ingo Molnar wrote:
> > Linus, please pull the latest scheduler git tree from:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git
> > 
> > no big changes - 5 small fixes and 1 small cleanup:
> 
> FWIW, I spent a few hours testing these patches with various loads, 
> and all was peachy here.  No multimedia or interactivity aberrations 
> noted.

great! Btw., there's another refinement Peter and I are working on (see
the patch below): to place new tasks into the existing 'scheduling flow'
in a more seamless way. In practice this should mean fewer firefox spikes
during a kbuild workload. If you have some time to try it, could you add
the patch below to your tree too, and see what happens during fork-happy
workloads? It does not seem to be overly urgent to apply at the moment,
but it is a nice touch i think.

	Ingo

------------------------>
Subject: sched: place new tasks in the middle of the task pool
From: Peter Zijlstra <a.p.zijlstra@chello.nl>

Place new tasks in the middle of the wait_runtime average. This smoothes 
out latency spikes caused by freshly started tasks, without being unfair 
to those tasks. Basically new tasks start right into the 'flow' of 
wait_runtime that exists in the system at that moment.

[ mingo@elte.hu: changed it to use cfs_rq->wait_runtime ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched.c      |    1 
 kernel/sched_fair.c |   59 +++++++++++++++++++++++++++++-----------------------
 2 files changed, 33 insertions(+), 27 deletions(-)

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -858,7 +858,6 @@ static void dec_nr_running(struct task_s
 
 static void set_load_weight(struct task_struct *p)
 {
-	task_rq(p)->cfs.wait_runtime -= p->se.wait_runtime;
 	p->se.wait_runtime = 0;
 
 	if (task_has_rt_policy(p)) {
Index: linux/kernel/sched_fair.c
===================================================================
--- linux.orig/kernel/sched_fair.c
+++ linux/kernel/sched_fair.c
@@ -86,8 +86,8 @@ unsigned int sysctl_sched_features __rea
 		SCHED_FEAT_SLEEPER_AVG		*0 |
 		SCHED_FEAT_SLEEPER_LOAD_AVG	*1 |
 		SCHED_FEAT_PRECISE_CPU_LOAD	*1 |
-		SCHED_FEAT_START_DEBIT		*1 |
-		SCHED_FEAT_SKIP_INITIAL		*0;
+		SCHED_FEAT_START_DEBIT		*0 |
+		SCHED_FEAT_SKIP_INITIAL		*1;
 
 extern struct sched_class fair_sched_class;
 
@@ -194,6 +194,8 @@ __enqueue_entity(struct cfs_rq *cfs_rq, 
 	update_load_add(&cfs_rq->load, se->load.weight);
 	cfs_rq->nr_running++;
 	se->on_rq = 1;
+
+	cfs_rq->wait_runtime += se->wait_runtime;
 }
 
 static inline void
@@ -205,6 +207,8 @@ __dequeue_entity(struct cfs_rq *cfs_rq, 
 	update_load_sub(&cfs_rq->load, se->load.weight);
 	cfs_rq->nr_running--;
 	se->on_rq = 0;
+
+	cfs_rq->wait_runtime -= se->wait_runtime;
 }
 
 static inline struct rb_node *first_fair(struct cfs_rq *cfs_rq)
@@ -326,9 +330,9 @@ __add_wait_runtime(struct cfs_rq *cfs_rq
 static void
 add_wait_runtime(struct cfs_rq *cfs_rq, struct sched_entity *se, long delta)
 {
-	schedstat_add(cfs_rq, wait_runtime, -se->wait_runtime);
+	cfs_rq->wait_runtime -= se->wait_runtime;
 	__add_wait_runtime(cfs_rq, se, delta);
-	schedstat_add(cfs_rq, wait_runtime, se->wait_runtime);
+	cfs_rq->wait_runtime += se->wait_runtime;
 }
 
 /*
@@ -574,7 +578,6 @@ static void __enqueue_sleeper(struct cfs
 
 	prev_runtime = se->wait_runtime;
 	__add_wait_runtime(cfs_rq, se, delta_fair);
-	schedstat_add(cfs_rq, wait_runtime, se->wait_runtime);
 	delta_fair = se->wait_runtime - prev_runtime;
 
 	/*
@@ -662,7 +665,6 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 			if (tsk->state & TASK_UNINTERRUPTIBLE)
 				se->block_start = rq_of(cfs_rq)->clock;
 		}
-		cfs_rq->wait_runtime -= se->wait_runtime;
 #endif
 	}
 	__dequeue_entity(cfs_rq, se);
@@ -671,7 +673,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 /*
  * Preempt the current task with a newly woken task if needed:
  */
-static int
+static void
 __check_preempt_curr_fair(struct cfs_rq *cfs_rq, struct sched_entity *se,
 			  struct sched_entity *curr, unsigned long granularity)
 {
@@ -684,9 +686,8 @@ __check_preempt_curr_fair(struct cfs_rq 
 	 */
 	if (__delta > niced_granularity(curr, granularity)) {
 		resched_task(rq_of(cfs_rq)->curr);
-		return 1;
+		curr->prev_sum_exec_runtime = curr->sum_exec_runtime;
 	}
-	return 0;
 }
 
 static inline void
@@ -762,8 +763,7 @@ static void entity_tick(struct cfs_rq *c
 	if (delta_exec > ideal_runtime)
 		gran = 0;
 
-	if (__check_preempt_curr_fair(cfs_rq, next, curr, gran))
-		curr->prev_sum_exec_runtime = curr->sum_exec_runtime;
+	__check_preempt_curr_fair(cfs_rq, next, curr, gran);
 }
 
 /**************************************************
@@ -1087,6 +1087,8 @@ static void task_tick_fair(struct rq *rq
 	}
 }
 
+#define swap(a,b) do { __typeof__(a) tmp = (a); (a) = (b); (b)=tmp; } while (0)
+
 /*
  * Share the fairness runtime between parent and child, thus the
  * total amount of pressure for CPU stays equal - new tasks
@@ -1102,14 +1104,27 @@ static void task_new_fair(struct rq *rq,
 	sched_info_queued(p);
 
 	update_curr(cfs_rq);
-	update_stats_enqueue(cfs_rq, se);
+	if ((long)cfs_rq->wait_runtime < 0)
+		se->wait_runtime = (long)cfs_rq->wait_runtime /
+				(long)cfs_rq->nr_running;
 	/*
-	 * Child runs first: we let it run before the parent
-	 * until it reschedules once. We set up the key so that
-	 * it will preempt the parent:
+	 * The statistical average of wait_runtime is about
+	 * -granularity/2, so initialize the task with that:
 	 */
-	se->fair_key = curr->fair_key -
-		niced_granularity(curr, sched_granularity(cfs_rq)) - 1;
+	if (sysctl_sched_features & SCHED_FEAT_START_DEBIT) {
+		__add_wait_runtime(cfs_rq, se,
+			-niced_granularity(se, sched_granularity(cfs_rq))/2);
+	}
+
+	update_stats_enqueue(cfs_rq, se);
+
+	if (sysctl_sched_child_runs_first && (se->fair_key > curr->fair_key)) {
+		dequeue_entity(cfs_rq, curr, 0);
+		swap(se->wait_runtime, curr->wait_runtime);
+		update_stats_enqueue(cfs_rq, se);
+		enqueue_entity(cfs_rq, curr, 0);
+	}
+
 	/*
 	 * The first wait is dominated by the child-runs-first logic,
 	 * so do not credit it with that waiting time yet:
@@ -1117,16 +1132,8 @@ static void task_new_fair(struct rq *rq,
 	if (sysctl_sched_features & SCHED_FEAT_SKIP_INITIAL)
 		se->wait_start_fair = 0;
 
-	/*
-	 * The statistical average of wait_runtime is about
-	 * -granularity/2, so initialize the task with that:
-	 */
-	if (sysctl_sched_features & SCHED_FEAT_START_DEBIT) {
-		se->wait_runtime = -(sched_granularity(cfs_rq) / 2);
-		schedstat_add(cfs_rq, wait_runtime, se->wait_runtime);
-	}
-
 	__enqueue_entity(cfs_rq, se);
+	__check_preempt_curr_fair(cfs_rq, __pick_next_entity(cfs_rq), curr, 0);
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED


* Re: [git pull request] scheduler updates
  2007-08-28 11:32 Ingo Molnar
@ 2007-08-28 14:11 ` Mike Galbraith
  2007-08-28 14:46   ` Ingo Molnar
  0 siblings, 1 reply; 20+ messages in thread
From: Mike Galbraith @ 2007-08-28 14:11 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Linus Torvalds, Andrew Morton, linux-kernel, Peter Zijlstra

On Tue, 2007-08-28 at 13:32 +0200, Ingo Molnar wrote:
> Linus, please pull the latest scheduler git tree from:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git
> 
> no big changes - 5 small fixes and 1 small cleanup:

FWIW, I spent a few hours testing these patches with various loads, and
all was peachy here.  No multimedia or interactivity aberrations noted.

	-Mike



* [git pull request] scheduler updates
@ 2007-08-28 11:32 Ingo Molnar
  2007-08-28 14:11 ` Mike Galbraith
  0 siblings, 1 reply; 20+ messages in thread
From: Ingo Molnar @ 2007-08-28 11:32 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, linux-kernel, Peter Zijlstra, Mike Galbraith


Linus, please pull the latest scheduler git tree from:

  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

no big changes - 5 small fixes and 1 small cleanup:

- the only bug with a human-noticeable effect is a bonus-limit one-liner
  bug found and fixed by Mike, who did interactivity testing of -rc4 and
  found a relatively minor but noticeable Amarok song-switch-latency
  increase under high load. (This bug was a side-effect of the recent
  adaptive-latency patch - mea culpa.)

- there's a fix for a new_task_fair() bug found by Ting Yang: Ting did
  a comprehensive review of the latest CFS code and found this problem,
  which caused a random jitter of 1 jiffy in the key value of newly
  started tasks. I saw no immediate effects from this fix (this amount
  of jitter is noise in most cases and the effect averages out over
  longer time), but it's worth having the fix in .23 nevertheless.

- then there's a converge-to-ideal-latency change that fixes a
  pre-existing property of CFS. This is not a bug per se, but it is
  still worth fixing for .23 - the before/after chew-max output in the
  changelog shows the clear benefits in consistency of scheduling. It
  affects the preemption slowpath only and should be human-unnoticeable.
  [ We would not have this fix if it wasn't for the de-HZ-ification
    change of the tunables, so i'm glad we got rid of the HZ uglies in
    one go - they just hid this real problem. ]

- Peter noticed a bug in the SCHED_FEAT_SKIP_INITIAL code - but this
  is off by default so it's a NOP on the default kernel.

- a small schedstat fix [NOP for defconfig]. This bug was there since
  the first CFS commit.

- a small task_new_fair() cleanup [NOP].

	Ingo

------------------>
Ingo Molnar (4):
      sched: make the scheduler converge to the ideal latency
      sched: fix wait_start_fair condition in update_stats_wait_end()
      sched: small schedstat fix
      sched: clean up task_new_fair()

Mike Galbraith (1):
      sched: fix sleeper bonus limit

Ting Yang (1):
      sched: call update_curr() in task_tick_fair()

 include/linux/sched.h |    1 +
 kernel/sched.c        |    1 +
 kernel/sched_fair.c   |   46 +++++++++++++++++++++++++++++++++++-----------
 3 files changed, 37 insertions(+), 11 deletions(-)



* Re: [git pull request] scheduler updates
  2007-08-25 17:23     ` Ingo Molnar
  2007-08-25 20:43       ` Ingo Molnar
@ 2007-08-25 21:20       ` Peter Zijlstra
  1 sibling, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2007-08-25 21:20 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Linus Torvalds, Andrew Morton, linux-kernel

On Sat, 2007-08-25 at 19:23 +0200, Ingo Molnar wrote:

> Peter and I tested this all day with various workloads, and extreme-load
> behavior has improved all over the place
 
Adaptive granularity makes a large difference for me on my somewhat
ancient laptop (1200 MHz). When browsing the interweb using firefox (or
trying to) while doing a (non-niced) kbuild -j5, the difference in
interactivity is significant.

[ kbuild -j5 was quite unbearable on 2.6.22 - so CFS is a clear win in
any case ]

The reduced latency is clearly noticeable in a much smoother scroll
behaviour. Whereas both still present a usable browsing experience, the
clear reduction in latency spikes makes it much more pleasant.




* Re: [git pull request] scheduler updates
  2007-08-25 17:23     ` Ingo Molnar
@ 2007-08-25 20:43       ` Ingo Molnar
  2007-08-25 21:20       ` Peter Zijlstra
  1 sibling, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-25 20:43 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel, Peter Zijlstra


* Ingo Molnar <mingo@elte.hu> wrote:

>    git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git
> 
> Find the shortlog further below. There are 3 commits in it: adaptive 
> granularity, a subsequent cleanup, and a lockdep sysctl bug Peter 
> noticed while hacking on this. (the bug was introduced with the 
> initial CFS commits but nobody noticed because the lockdep sysctls are 
> rarely used.)

hm, a small (and mostly harmless) buglet sneaked into it: the wakeup
granularity and the runtime limit are now dependent on sched_latency -
while they should be dependent on min_granularity and latency. To pick
up that fix, please pull from:

    git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

(ontop of your very latest git tree)

the effect of this bug was too-high wakeup latency that could cause
audio skipping on small-audio-buffer setups. (It didn't happen on mine -
they have large enough buffers.)

	Ingo

------------------>
Ingo Molnar (1):
      sched: s/sched_latency/sched_min_granularity

 sched.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


* Re: [git pull request] scheduler updates
  2007-08-24 19:37   ` Ingo Molnar
@ 2007-08-25 17:23     ` Ingo Molnar
  2007-08-25 20:43       ` Ingo Molnar
  2007-08-25 21:20       ` Peter Zijlstra
  0 siblings, 2 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-25 17:23 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel, Peter Zijlstra


* Ingo Molnar <mingo@elte.hu> wrote:

> We'll see - people can still readily tweak these values under 
> CONFIG_SCHED_DEBUG=y and give us feedback, and if there's enough 
> demand for very-finegrained scheduling (throughput be damned), we 
> could introduce yet another config option or enable the runtime 
> tunable unconditionally. (Or maybe the best option is Peter Zijlstra's 
> adaptive granularity idea, that gives the best of both worlds.)

hm, glxgears smoothness regresses with the de-HZ-ification change: with
an increasing background load the turning of the wheels quickly becomes
visually ugly - with small stutters instead of smooth rotation.

The reason for that is that the 20 msec granularity on my testbox (which 
is a dual-core box, so the default 10msec turns into 20msec) turns into 
40, 60, 80, 100 msec 'observed latency' for glxgears as load increases 
to 2x, 3x, 4x etc - and a 100 msec pause in rotation is easily 
perceivable to the human eye (and brain). Before that the delay curve 
with increasing load was 4msec/8msec/12msec etc.

Due to the removal of the HZ dependency we have now upset the
granularity picture anyway, so i believe we should do the adaptive
granularity thing right now. That will aim for a 40msec task-observable
latency, in a load-independent manner. (!) (This is an approach we
couldn't even dream of with the previous, fixed-timeslice scheduler.)

The code is simple (and it is all in the slowpath); in essence it boils
down to this new code:

 +static long
 +sched_granularity(struct cfs_rq *cfs_rq)
 +{
 +       unsigned int gran = sysctl_sched_latency;
 +       unsigned int nr = cfs_rq->nr_running;
 +
 +       if (nr > 1) {
 +               gran = gran/nr - gran/nr/nr;
 +               gran = max(gran, sysctl_sched_granularity);
 +       }
 +
 +       return gran;
 +}
 
IMO it is a good compromise between long slicing and short slicing:
there are two values - one is the "CPU-bound task latency the scheduler
aims for", the other is a minimum granularity (so as not to do too many
context switches).
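
To make the formula concrete, here is a small worked example (assuming
sysctl_sched_latency = 20 msec and sysctl_sched_granularity = 2 msec -
illustrative values, not necessarily the defaults):

	nr_running == 1:   gran = 20 msec                    (nr <= 1, untouched)
	nr_running == 2:   gran = 20/2  - 20/2/2   = 5 msec
	nr_running == 4:   gran = 20/4  - 20/4/4   = 4 msec
	nr_running == 10:  gran = 20/10 - 20/10/10 = 2 msec   (hits the 2 msec floor)

so the per-task preemption granularity shrinks as the load goes up, but
never drops below the configured minimum - which keeps the latency an
individual task observes roughly bounded by sysctl_sched_latency instead
of growing linearly with nr_running.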

Peter and I tested this all day with various workloads, and extreme-load
behavior has improved all over the place - while the server benchmarks
(which want less preemption) are still fine too. The glxgears stutters
are all gone.

If you do not disagree with this (it's pretty late in the game, with
more than 1 month of the kernel cycle already spent), please pull the
latest scheduler tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

Find the shortlog further below. There are 3 commits in it: adaptive 
granularity, a subsequent cleanup, and a lockdep sysctl bug Peter 
noticed while hacking on this. (the bug was introduced with the initial 
CFS commits but nobody noticed because the lockdep sysctls are rarely 
used.)

The linecount increase is mostly due to the comments added to explain 
the "gran = lat/nr - lat/nr/nr" magic formula and due to the extra 
parameter.

Tested on 32-bit and 64-bit x86, and with a few make randconfig build 
tests too.

	Ingo

------------------>
Ingo Molnar (1):
      sched: cleanup, sched_granularity -> sched_min_granularity

Peter Zijlstra (2):
      sched: fix CONFIG_SCHED_DEBUG dependency of lockdep sysctls
      sched: adaptive scheduler granularity

 include/linux/sched.h |    3 +
 kernel/sched.c        |   16 ++++++----
 kernel/sched_fair.c   |   77 ++++++++++++++++++++++++++++++++++++++++++--------
 kernel/sysctl.c       |   33 ++++++++++++++-------
 4 files changed, 99 insertions(+), 30 deletions(-)


* Re: [git pull request] scheduler updates
  2007-08-24 18:09 ` Linus Torvalds
@ 2007-08-24 19:37   ` Ingo Molnar
  2007-08-25 17:23     ` Ingo Molnar
  2007-08-31  1:58   ` Roman Zippel
  1 sibling, 1 reply; 20+ messages in thread
From: Ingo Molnar @ 2007-08-24 19:37 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Fri, 24 Aug 2007, Ingo Molnar wrote:
> > 
> > Then there's also a change/tweak that increases the default 
> > granularity: it's still well below human perception so should not be 
> > noticeable, but servers win a bit from less preemption of CPU-bound 
> > tasks. (this is also the first step towards eliminating HZ from the 
> > granularity default calculation.)
> 
> Your explanation makes NO sense.
> 
> It doesn't eliminate HZ at all. It's still there, and it's still 
> totally bogus.

fair enough, and i fixed that.

( i called the previous patch the "first step" because i was too chicken
  to pick a single granularity default :-/ )

for the current queue i went for settings close to those of HZ=250 -
that's the most common HZ variant that was tested previously. That means
10 msec on a 1-way box, 20 msec on a 2-way box, 30 msec on a 4-way box,
etc. (up to a 100 msec ceiling.)
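
A sketch of how such a CPU-count-scaled default could be computed (an
assumed helper for illustration only - not necessarily the code that was
merged; ilog2() and min() are the usual kernel helpers):

	static unsigned int sched_default_granularity_ns(unsigned int ncpus)
	{
		unsigned int factor = 1 + ilog2(ncpus);	/* 1, 2, 3, ... */

		/* 10 msec per step, capped at a 100 msec ceiling: */
		return min(factor, 10u) * 10000000u;
	}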

> Please just *remove* that thing. It has no possible value! You claim 
> that the preemption granularity is in "ns", and that it defaults to "3 
> msec", but it does no such thing at all, even with your patch. It 
> does:
> 
> 	unsigned int sysctl_sched_granularity __read_mostly = 3000000000ULL/HZ;
> 
> which is just total and utter CRAP!

ok, i've removed that and all the other HZ hacks too. I've uploaded a 
new queue with that fixed (and all other patches unchanged):

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

booted it on a number of boxes (32-bit and 64-bit). The question will be 
desktop behavior. I typically run my test-desktops with HZ=100 to make 
sure that desktop workloads are fine with large granularity values and 
coarse timers too, so i'm reasonably positive about this default.

We'll see - people can still readily tweak these values under 
CONFIG_SCHED_DEBUG=y and give us feedback, and if there's enough demand 
for very-finegrained scheduling (throughput be damned), we could 
introduce yet another config option or enable the runtime tunable 
unconditionally. (Or maybe the best option is Peter Zijlstra's adaptive 
granularity idea, that gives the best of both worlds.)

	Ingo

------------------>
Bruce Ashfield (1):
      sched: CONFIG_SCHED_GROUP_FAIR=y fixlet

Dmitry Adamushko (1):
      sched: optimize task_tick_rt() a bit

Ingo Molnar (3):
      sched: remove HZ dependency from the granularity default
      sched: tidy up and simplify the bonus balance
      sched: fix startup penalty calculation

Peter Zijlstra (2):
      sched: simplify bonus calculation #1
      sched: simplify bonus calculation #2

Sven-Thorsten Dietrich (1):
      sched: simplify can_migrate_task()

 sched.c      |    8 +-------
 sched_fair.c |   35 +++++++++++++++++++----------------
 sched_rt.c   |   11 ++++++++---
 3 files changed, 28 insertions(+), 26 deletions(-)


* Re: [git pull request] scheduler updates
  2007-08-24 14:12 Ingo Molnar
@ 2007-08-24 18:09 ` Linus Torvalds
  2007-08-24 19:37   ` Ingo Molnar
  2007-08-31  1:58   ` Roman Zippel
  0 siblings, 2 replies; 20+ messages in thread
From: Linus Torvalds @ 2007-08-24 18:09 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Andrew Morton, linux-kernel



On Fri, 24 Aug 2007, Ingo Molnar wrote:
> 
> Then there's also a change/tweak that increases the default granularity: 
> it's still well below human perception so should not be noticeable, but 
> servers win a bit from less preemption of CPU-bound tasks. (this is also 
> the first step towards eliminating HZ from the granularity default 
> calculation.)

Your explanation makes NO sense.

It doesn't eliminate HZ at all. It's still there, and it's still totally 
bogus.

Please just *remove* that thing. It has no possible value! You claim that 
the preemption granularity is in "ns", and that it defaults to "3 msec", 
but it does no such thing at all, even with your patch. It does:

	unsigned int sysctl_sched_granularity __read_mostly = 3000000000ULL/HZ;

which is just total and utter CRAP!

Why the hell can't you just make the code sane and do what the comment 
*says* it does, and just admit that HZ has nothing what-so-ever to do with 
that thing, and then you do

	unsigned int sysctl_sched_granularity __read_mostly = 3000000ULL;

and be done with it. Instead of this *insane* expectation that HZ is 
always 1000, and any other value means that you want bigger granularity, 
which is not true and makes no sense.

So dammit, stop writing these totally bogus "explanations". If you have a 
reason why the granularity needs to be HZ-dependent, *document* that 
reason, and make the code actually match the comment, instead of 
continually documenting things that SIMPLY ARE NOT TRUE.

Ingo, I'm not going to pull this kind of antics and crap.

		Linus


* [git pull request] scheduler updates
@ 2007-08-24 14:12 Ingo Molnar
  2007-08-24 18:09 ` Linus Torvalds
  0 siblings, 1 reply; 20+ messages in thread
From: Ingo Molnar @ 2007-08-24 14:12 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

It includes 8 commits, 3 of which are important: the most important 
change is a bugfix to the new task startup penalty code. This could 
explain the task-startup unpredictability problem reported by Al Boldi.

Then there's also a change/tweak that increases the default granularity: 
it's still well below human perception so should not be noticeable, but 
servers win a bit from less preemption of CPU-bound tasks. (this is also 
the first step towards eliminating HZ from the granularity default 
calculation.)

Plus a bonus-balance inconsistency has been fixed: the previous logic
slightly inflated sleeper wait-runtime, without a counter-balance on
runners. (I found no noticeable or measurable impact, other than a ~5%
improvement in hackbench performance [due to less preemption
scheduling] and a slightly nicer-looking /proc/sched_debug output when
there are lots of sleepers.)

Five other, low-impact changes: a group-scheduling fixlet from Bruce 
Ashfield, two nice simplifications from Peter Zijlstra to the 
bonus-balance code (which eliminate a 64-bit multiplication and shrink 
the code), a QOI improvement from Dmitry Adamushko to RR RT task 
preemption [not strictly required for .23 but this has been in my tree 
for some time already with no ill effects and the code is obviously 
correct] and a dead code elimination fix from Sven-Thorsten Dietrich.

Test-built and test-booted on x86-32 and x86-64, and it passed a few 
dozen "make randconfig" builds as well.

	Ingo

------------------>
Bruce Ashfield (1):
      sched: CONFIG_SCHED_GROUP_FAIR=y fixlet

Dmitry Adamushko (1):
      sched: optimize task_tick_rt() a bit

Ingo Molnar (3):
      sched: increase default granularity a bit
      sched: tidy up and simplify the bonus balance
      sched: fix startup penalty calculation

Peter Zijlstra (2):
      sched: simplify bonus calculation #1
      sched: simplify bonus calculation #2

Sven-Thorsten Dietrich (1):
      sched: simplify can_migrate_task()

 sched.c      |    6 ------
 sched_fair.c |   26 +++++++++++++++-----------
 sched_rt.c   |   11 ++++++++---
 3 files changed, 23 insertions(+), 20 deletions(-)



* [git pull request] scheduler updates
@ 2007-08-23 16:07 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-23 16:07 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

It includes six fixes: an s390 task-accounting fix from Christian
Borntraeger, sysctl directory permission fixes from Eric W. Biederman,
an SMT/MC balancing fix from Suresh Siddha (we under-balanced), and
another fix from Suresh for a debugging-tweak side effect. Plus there's
a sched_clock() quality fix for CPUs that stop the TSC in idle (acked
by Len Brown) and a reniced-tasks fixlet.

The SMT/MC balancing fix has the highest risk - but since it causes
slightly more balancing (instead of less balancing, which would be the
riskier direction) it should be pretty safe. Key workloads still seem
fine. Tested on 32-bit and 64-bit x86, and it has passed 200+ make
randconfig build tests.

	Ingo

---------------->
Christian Borntraeger (1):
      sched: accounting regression since rc1

Eric W. Biederman (1):
      sched: fix sysctl directory permissions

Ingo Molnar (2):
      sched: sched_clock_idle_[sleep|wakeup]_event()
      sched: tweak the sched_runtime_limit tunable

Suresh Siddha (2):
      sched: fix broken SMT/MC optimizations
      sched: skip updating rq's next_balance under null SD

 arch/i386/kernel/tsc.c        |    1 
 drivers/acpi/processor_idle.c |   32 +++++++++++++++----
 fs/proc/array.c               |   44 +++++++++++++++++----------
 include/linux/sched.h         |    5 +--
 kernel/sched.c                |   68 +++++++++++++++++++++++++++++++-----------
 kernel/sched_debug.c          |    3 +
 6 files changed, 110 insertions(+), 43 deletions(-)


* [git pull request] scheduler updates
@ 2007-08-12 16:32 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-12 16:32 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

Linus, please pull the latest scheduler git tree from:

  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

three bugfixes:

- a nice fix from eagle-eye Oleg for a subtle typo in the balancing
  code; the effect of this bug was more aggressive idle balancing. This
  bug was introduced by one of the original CFS commits.

- a round of global->static fixes from Adrian Bunk - this change,
  besides the cleanup effect, chops 100 bytes off sched.o.

- Peter Zijlstra noticed a sleeper-bonus bug. I kept this patch under
  observation and testing this past week and saw no ill effects so far. 
  It could fix two suspected regressions. (It could improve Kasper
  Sandberg's workload and it could improve the sleeper/runner
  problem/bug Roman Zippel was seeing.)

test-built and test-booted on x86-32 and x86-64, and I did a dozen
randconfig builds for good measure (which uncovered two new build errors
in latest -git).

Thanks,

	Ingo

--------------->
Adrian Bunk (1):
      sched: make global code static

Ingo Molnar (1):
      sched: fix sleeper bonus

Oleg Nesterov (1):
      sched: run_rebalance_domains: s/SCHED_IDLE/CPU_IDLE/

 include/linux/cpu.h |    2 --
 kernel/sched.c      |   48 ++++++++++++++++++++++++------------------------
 kernel/sched_fair.c |   12 ++++++------
 3 files changed, 30 insertions(+), 32 deletions(-)


* [git pull request] scheduler updates
@ 2007-08-10 21:22 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-10 21:22 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

this includes a regression fix and two minor fixes. The regression was
noticed today by Arjan on the F8-Test1 kernel (which uses .23-rc2): if
his laptop boots from battery then cpu_khz gets mis-detected and
subsequently sched_clock() runs too fast - causing interactivity
problems. This was a pre-existing sched_clock() regression, and those
sched_clock() problems are being addressed by Andi's cpufreq sched-clock
patchset, but meanwhile i've fixed the regression by making the
rq->clock logic more robust against this type of sched_clock() anomaly
(it was already robust against time warps). Arjan tested the fix and it
solved the problem. There's also a small
kernel-address-information-leak fix for the SCHED_DEBUG case noticed by
Arjan and a fix for the FAIR_GROUP_SCHED branch (not enabled upstream,
but still working if enabled manually).
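
The gist of the robustness change, as a simplified sketch (the field
names and the clamping limit are assumptions for illustration, not the
exact code that was merged):

	static void __update_rq_clock(struct rq *rq)
	{
		u64 now = sched_clock();
		s64 delta = now - rq->prev_clock_raw;
		u64 clock = rq->clock;

		if (unlikely(delta < 0)) {
			/* time warp: never let rq->clock go backwards */
			clock++;
		} else if (unlikely(delta > 2*TICK_NSEC)) {
			/* too-fast or jumping clock: advance one tick only */
			clock += TICK_NSEC;
		} else {
			clock += delta;
		}

		rq->prev_clock_raw = now;
		rq->clock = clock;
	}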

	Ingo

---------------->
Ingo Molnar (3):
      sched: improve rq-clock overflow logic
      sched: fix typo in the FAIR_GROUP_SCHED branch
      sched debug: dont print kernel address in /proc/sched_debug

 sched.c       |   15 +++++++++++++--
 sched_debug.c |    2 +-
 sched_fair.c  |    7 +++----
 3 files changed, 17 insertions(+), 7 deletions(-)


* [git pull request] scheduler updates
@ 2007-08-08 20:30 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-08 20:30 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

The high commit count is scary, but these are all low-risk items: the
main reason for the count is the safe and gradual elimination of a
widely used 64-bit function argument - the 64-bit "now" timestamp. About
40 of those commits are identity transformations that prepare the real
change in a safe way, and the rest is obvious and safe as well. Besides
the obvious and nice cleanup factor, these changes are necessary for 3
reasons: firstly, they address the "there's too much 64-bit stuff in the
scheduler" observation. Secondly, it's not directly visible, but these
changes also act as a correctness fix for an obscure (and minor) but
not-too-pretty-to-fix accounting bug: idle_balance() had its own
internal notion of 'now', separate from that of schedule(). Thirdly,
this debloats sched.o quite significantly:

on 32-bit (smp, nondebug), it's almost 1k less code:

   text    data     bss     dec     hex filename
  34869    3066      20   37955    9443 sched.o.before
  33972    3066      24   37062    90c6 sched.o.after

but even on 64-bit platforms it's noticeable:

   text    data     bss     dec     hex filename
  28652    4162      24   32838    8046 sched.o.before
  28064    4162      24   32250    7dfa sched.o.after

and that's a speedup as well, because these parameters were passed all 
around the fastpath.

It was safest to do it this way (considering that we are post -rc2
already); squashed into a single commit these changes would have been
much less obvious to validate and apply. (It's of course all fully
bisectable and every step builds and boots fine.)
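
As a simplified before/after illustration of what those identity
transformations did to each scheduler function (prototypes only, using
update_curr() from the shortlog below as the example):

	/* before: every caller had to thread a 'u64 now' timestamp through */
	static void update_curr(struct cfs_rq *cfs_rq, u64 now);

	/*
	 * after: the time is read from rq->clock, which is updated once per
	 * scheduler entry point via update_rq_clock(rq) - one argument less
	 * on the fastpath:
	 */
	static void update_curr(struct cfs_rq *cfs_rq);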

besides this elimination of the 64-bit timestamp parameter passing 
between (almost all) scheduler functions, there are 8 other fixes that 
are not identity transformations:

 - Peter Williams reviewed the smpnice load-balancer and noticed a few 
   leftover items that are unnecessary now (i have re-tested 
   load-balancing behavior and it's all still fine)

 - binary sysctl cleanup from Alexey Dobriyan

 - two small accounting fixes

 - reniced tasks fixes: a key calculation fix (i re-checked key nice
   workloads and this has no real impact [other than improving them 
   slightly] - the other side of the branch fixed up the effects of this 
   - otherwise we'd have noticed this sooner), and two rounding 
   precision improvements that act against error accumulation.

 - sleeper_bonus should be batched by sched_granularity and not by 
   stat_granularity. (this has almost no effect in practice, but a 
   speedup that pushes the only 64-bit division in CFS into a slowpath.)

Then there are also two non-code documentation updates, plus minor
cleanups and uninlining.

Nevertheless, to be safe i have also done over 200 'make randconfig; 
make -j bzImage' build tests:

   #define UTS_VERSION "#231 SMP Wed Aug 8 21:34:24 CEST 2007"

all of which passed fine. Booted (and extensively tested) on x86-32 and
x86-64 as well, both UP and SMP - on UP, 2-way and up to 8-way systems.

	Ingo

------------------>

Alexey Dobriyan (1):
      sched: remove binary sysctls from kernel.sched_domain

Josh Triplett (1):
      sched: mark print_cfs_stats static

Peter Williams (2):
      sched: simplify move_tasks()
      sched: fix bug in balance_tasks()

Thomas Voegtle (1):
      sched: mention CONFIG_SCHED_DEBUG in documentation

Ulrich Drepper (1):
      sched: clean up sched_getaffinity()

Ingo Molnar (55):
      sched: batch sleeper bonus
      sched: reorder update_cpu_load(rq) with the ->task_tick() call
      sched: uninline rq_clock()
      sched: schedule() speedup
      sched: clean up delta_mine
      sched: delta_exec accounting fix
      sched: document nice levels
      sched: add [__]update_rq_clock(rq)
      sched: eliminate rq_clock() use
      sched: remove rq_clock()
      sched: eliminate __rq_clock() use
      sched: remove __rq_clock()
      sched: remove 'now' use from assignments
      sched: remove the 'u64 now' parameter from print_cfs_rq()
      sched: remove the 'u64 now' parameter from update_curr()
      sched: remove the 'u64 now' parameter from update_stats_wait_start()
      sched: remove the 'u64 now' parameter from update_stats_enqueue()
      sched: remove the 'u64 now' parameter from __update_stats_wait_end()
      sched: remove the 'u64 now' parameter from update_stats_wait_end()
      sched: remove the 'u64 now' parameter from update_stats_curr_start()
      sched: remove the 'u64 now' parameter from update_stats_dequeue()
      sched: remove the 'u64 now' parameter from update_stats_curr_end()
      sched: remove the 'u64 now' parameter from __enqueue_sleeper()
      sched: remove the 'u64 now' parameter from enqueue_sleeper()
      sched: remove the 'u64 now' parameter from enqueue_entity()
      sched: remove the 'u64 now' parameter from dequeue_entity()
      sched: remove the 'u64 now' parameter from set_next_entity()
      sched: remove the 'u64 now' parameter from pick_next_entity()
      sched: remove the 'u64 now' parameter from put_prev_entity()
      sched: remove the 'u64 now' parameter from update_curr_rt()
      sched: remove the 'u64 now' parameter from ->enqueue_task()
      sched: remove the 'u64 now' parameter from ->dequeue_task()
      sched: remove the 'u64 now' parameter from ->pick_next_task()
      sched: remove the 'u64 now' parameter from pick_next_task()
      sched: remove the 'u64 now' parameter from ->put_prev_task()
      sched: remove the 'u64 now' parameter from ->task_new()
      sched: remove the 'u64 now' parameter from update_curr_load()
      sched: remove the 'u64 now' parameter from inc_load()
      sched: remove the 'u64 now' parameter from dec_load()
      sched: remove the 'u64 now' parameter from inc_nr_running()
      sched: remove the 'u64 now' parameter from dec_nr_running()
      sched: remove the 'u64 now' parameter from enqueue_task()
      sched: remove the 'u64 now' parameter from dequeue_task()
      sched: remove the 'u64 now' parameter from deactivate_task()
      sched: remove the 'u64 now' local variables
      sched debug: remove the 'u64 now' parameter from print_task()/_rq()
      sched: move the __update_rq_clock() call to scheduler_tick()
      sched: remove __update_rq_clock() call from entity_tick()
      sched: clean up set_curr_task_fair()
      sched: optimize activate_task()
      sched: optimize update_rq_clock() calls in the load-balancer
      sched: make the multiplication table more accurate
      sched: round a bit better
      sched: fix update_stats_enqueue() reniced codepath
      sched: refine negative nice level granularity

 Documentation/sched-design-CFS.txt  |    2 
 Documentation/sched-nice-design.txt |  108 +++++++++++
 include/linux/sched.h               |   20 --
 kernel/sched.c                      |  339 ++++++++++++++++++------------------
 kernel/sched_debug.c                |   16 -
 kernel/sched_fair.c                 |  212 ++++++++++------------
 kernel/sched_idletask.c             |   10 -
 kernel/sched_rt.c                   |   48 +----
 8 files changed, 421 insertions(+), 334 deletions(-)


* [git pull request] scheduler updates
@ 2007-08-02 16:08 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-02 16:08 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

these are all low-risk sched.o and task_struct debloating patches:

   text    data     bss     dec     hex filename
  37033    3066      20   40119    9cb7 sched.o.debug.before
  34840    3066      20   37926    9426 sched.o.debug.after

   text    data     bss     dec     hex filename
  28997    2726      16   31739    7bfb sched.o.before
  27991    2726      16   30733    780d sched.o.after

1006 bytes of code off in the nondebug case (this also speeds things up)
and 2193 bytes of code off in the debug case. The size of sched.o is now
1k smaller than it was before CFS on SMP, and within 1k of its old size
on UP. (Further reduction is possible; there is another patch that
shaves off another 500 bytes, but it needs some more testing.)

There's also a nice smpnice cleanup/simplification from Peter Williams.

built and booted on x86-32 and x86-64, built allnoconfig and 
allyesconfig, and for good measure it also passed 38 iterations of 'make 
randconfig; make -j vmlinux' builds without any failure.

Thanks!

	Ingo

------------------->

Ingo Molnar (10):
      sched: remove cache_hot_time
      sched: calc_delta_mine(): use fixed limit
      sched: uninline calc_delta_mine()
      sched: uninline inc/dec_nr_running()
      sched: ->task_new cleanup
      sched: move load-calculation functions
      sched: add schedstat_set() API
      sched: use schedstat_set() API
      sched: reduce debug code
      sched: reduce task_struct size

Peter Williams (1):
      sched: tidy up left over smpnice code

 include/linux/sched.h    |   24 +++--
 include/linux/topology.h |    1 
 kernel/sched.c           |  193 +++++++++++++++++++++++------------------------
 kernel/sched_debug.c     |   22 +++--
 kernel/sched_fair.c      |   21 +----
 kernel/sched_rt.c        |   14 ---
 kernel/sched_stats.h     |    2 
 7 files changed, 134 insertions(+), 143 deletions(-)


* [git pull request] scheduler updates
@ 2007-07-26 12:08 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-26 12:08 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

There are 8 commits in this tree - only one modifies scheduling behavior
(and even that one only slightly): a fix for a (minor)
SMP-fairness-balancing problem.

There is one update/fix to the (still unused upstream) cpu_clock() API.
[ This API will replace all the current (and buggy) in-tree uses of
  sched_clock(). ]
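
A usage sketch of that API (the workload being timed is hypothetical;
cpu_clock() returns a nanosecond timestamp for the given CPU):

	static void time_something(void)
	{
		int cpu = raw_smp_processor_id();
		u64 t0, t1;

		t0 = cpu_clock(cpu);
		do_some_work();		/* hypothetical work being timed */
		t1 = cpu_clock(cpu);
		printk(KERN_DEBUG "took %llu ns\n",
			(unsigned long long)(t1 - t0));
	}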

There are also two small facilities added: preempt-notifiers (disabled
and not selectable by the user, hence a NOP), needed by future KVM and
other virtualization work, whose developers would like to see this
offered by the upstream kernel. There's also the new
above_background_load() inline function (unused at the moment). The
presence of these two facilities causes no change at all to the kernel
image:

    text    data     bss     dec     hex filename
 5573413  679332 3842048 10094793         9a08c9 vmlinux.before
 5573413  679332 3842048 10094793         9a08c9 vmlinux.after

so i thought this would be fine for a post-rc1 merge too.
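
For reference, a minimal sketch of how a (hypothetical) user of the
preempt-notifier facility would hook in - the callback signatures follow
the new include/linux/preempt.h interface, the rest is illustrative:

	static void my_sched_in(struct preempt_notifier *pn, int cpu)
	{
		/* our task was just scheduled back in on 'cpu' */
	}

	static void my_sched_out(struct preempt_notifier *pn,
				 struct task_struct *next)
	{
		/* our task is about to be preempted in favour of 'next' */
	}

	static struct preempt_ops my_preempt_ops = {
		.sched_in	= my_sched_in,
		.sched_out	= my_sched_out,
	};

	static struct preempt_notifier my_notifier;

	static void my_register(void)
	{
		preempt_notifier_init(&my_notifier, &my_preempt_ops);
		preempt_notifier_register(&my_notifier); /* registers 'current' */
	}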

There are also two small cleanup patches, a documentation update, and a
debugging enhancement/helper: i've merged Nick's long-pending
sysctl-domain-tree debug patch, which has meanwhile been in -mm for 3
years. (It depends on CONFIG_SCHED_DEBUG and has no effect on scheduling
by default even when enabled.)

It passes allyesconfig, allnoconfig and distro builds, and boots and
works fine on 32-bit and 64-bit x86 as well (and is expected to work
fine on every architecture).

	Ingo

-------------------->
Avi Kivity (1):
      sched: arch preempt notifier mechanism

Con Kolivas (1):
      sched: add above_background_load() function

Ingo Molnar (2):
      sched: increase SCHED_LOAD_SCALE_FUZZ
      sched: make cpu_clock() not use the rq clock

Joachim Deguara (1):
      sched: update Documentation/sched-stats.txt

Josh Triplett (1):
      sched: mark sysrq_sched_debug_show() static

Nick Piggin (1):
      sched: debug feature - make the sched-domains tree runtime-tweakable

Satoru Takeuchi (1):
      sched: remove unused rq->load_balance_class

 Documentation/sched-stats.txt |  195 ++++++++++++++++++++--------------------
 include/linux/preempt.h       |   44 +++++++++
 include/linux/sched.h         |   23 ++++
 kernel/Kconfig.preempt        |    3 
 kernel/sched.c                |  204 ++++++++++++++++++++++++++++++++++++++++--
 kernel/sched_debug.c          |    2 
 6 files changed, 365 insertions(+), 106 deletions(-)


* [git pull request] scheduler updates
@ 2007-07-19 16:50 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-19 16:50 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

4 small changes only. It includes a cleanup (Ralf Baechle noticed that
sched_cacheflush() is now unused), a new kernel-internal API for future
use (cpu_clock(cpu)), and two SMP balancer fixes from Suresh Siddha. The
balancer fixes are the only functional bits. Tested on 32-bit and 64-bit
x86, and build-tested on allyesconfig and allnoconfig. I re-checked a
few SMP balancing scenarios due to the balancer fixes and kept those
changes in my tree for a few days, and they are working fine here.

Thanks,

	Ingo

--------------->
Ingo Molnar (1):
      sched: implement cpu_clock(cpu) high-speed time source

Ralf Baechle (1):
      sched: sched_cacheflush is now unused

Suresh Siddha (2):
      sched: fix newly idle load balance in case of SMT
      sched: fix the all pinned logic in load_balance_newidle()

 arch/ia64/kernel/setup.c     |    9 ---------
 include/asm-alpha/system.h   |   10 ----------
 include/asm-arm/system.h     |   10 ----------
 include/asm-arm26/system.h   |   10 ----------
 include/asm-i386/system.h    |    9 ---------
 include/asm-ia64/system.h    |    1 -
 include/asm-m32r/system.h    |   10 ----------
 include/asm-mips/system.h    |   10 ----------
 include/asm-parisc/system.h  |   11 -----------
 include/asm-powerpc/system.h |   10 ----------
 include/asm-ppc/system.h     |   10 ----------
 include/asm-s390/system.h    |   10 ----------
 include/asm-sh/system.h      |   10 ----------
 include/asm-sparc/system.h   |   10 ----------
 include/asm-sparc64/system.h |   10 ----------
 include/asm-x86_64/system.h  |    9 ---------
 include/linux/sched.h        |    7 +++++++
 kernel/sched.c               |   31 ++++++++++++++++++++++++++-----
 18 files changed, 33 insertions(+), 154 deletions(-)


* [git pull request] scheduler updates
@ 2007-07-16  7:53 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-16  7:53 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

this includes low-risk changes that improve comments, remove dead code 
and fix whitespace/style problems.

Thanks!

	Ingo

--------------->
Ingo Molnar (5):
      sched: remove dead code from task_stime()
      sched: improve weight-array comments
      sched: document prio_to_wmult[]
      sched: prettify prio_to_wmult[]
      sched: fix up fs/proc/array.c whitespace problems

 fs/proc/array.c |   53 ++++++++++++++++++++++++++---------------------------
 kernel/sched.c  |   27 ++++++++++++++++++---------
 2 files changed, 44 insertions(+), 36 deletions(-)

