* [PATCH] O6int for interactivity
@ 2003-07-16 14:30 Con Kolivas
  2003-07-16 15:22 ` Felipe Alfaro Solana
                   ` (8 more replies)
  0 siblings, 9 replies; 44+ messages in thread
From: Con Kolivas @ 2003-07-16 14:30 UTC (permalink / raw)
  To: linux kernel mailing list
  Cc: Andrew Morton, Felipe Alfaro Solana, Zwane Mwaikambo

The O*int patches try to improve the interactivity of the 2.5/6 scheduler for 
desktops. It appears possible to do this without moving to nanosecond 
resolution.

This one makes a massive difference... Please test this to death.

Changes:
The big change is in the way sleep_avg is incremented. Any amount of sleep 
will now raise a task by at least one priority with each wakeup. This makes a 
massive difference to startup time, converts tasks to the interactive state 
extremely rapidly, and also lets them recover rapidly from the non-interactive 
state (preventing X from stalling for many seconds after thrashing around 
under high load).

The sleep buffer was dropped to just 10ms. This has the effect of causing mild 
round-robinning of very interactive tasks if they run for more than 10ms. The 
requeuing was changed from unlikely() to an ordinary if branch, as it will be 
hit much more often now.

MAX_BONUS as a #define was made easier to understand.

Idle tasks were made slightly less interactive to prevent cpu hogs from 
becoming interactive on their very first wakeup.
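
To see the effect of the new update rule outside the kernel, here is a
minimal user-space sketch of the formula from the patch below (a toy model,
not kernel code: MAX_BONUS of 10 assumes the usual PRIO_BONUS_RATIO of 25,
and the surrounding activate_task() conditions are omitted):

/*
 * Toy model of the O6int sleep_avg update.  sleep_avg and runtime are in
 * jiffies; MAX_BONUS is the number of priority bonus steps.  The "+ 1"
 * adds one bonus quantum (runtime / MAX_BONUS) per wakeup, which is where
 * "at least one priority with each wakeup" comes from.
 */
#include <stdio.h>

#define MAX_BONUS	10	/* assumed: 40 * 25 / 100 */

static unsigned long bump_sleep_avg(unsigned long sleep_avg,
				    unsigned long sleep_time,
				    unsigned long runtime)
{
	sleep_avg += sleep_time;
	return (sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
}

int main(void)
{
	unsigned long sleep_avg = 0, runtime = 1000;	/* ~1s at HZ=1000 */
	int wakeup;

	/* even 1-jiffy sleeps climb roughly one bonus step per wakeup */
	for (wakeup = 1; wakeup <= 5; wakeup++) {
		sleep_avg = bump_sleep_avg(sleep_avg, 1, runtime);
		printf("wakeup %d: sleep_avg=%lu approx bonus=%lu\n", wakeup,
		       sleep_avg, sleep_avg * MAX_BONUS / runtime);
	}
	return 0;
}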

Con

This patch-O6int-0307170012 applies on top of 2.6.0-test1-mm1 and can be found 
here:
http://kernel.kolivas.org/2.5

and here:

--- linux-2.6.0-test1-mm1/kernel/sched.c	2003-07-16 20:27:32.000000000 +1000
+++ linux-2.6.0-testck1/kernel/sched.c	2003-07-17 00:13:24.000000000 +1000
@@ -76,9 +76,9 @@
 #define MIN_SLEEP_AVG		(HZ)
 #define MAX_SLEEP_AVG		(10*HZ)
 #define STARVATION_LIMIT	(10*HZ)
-#define SLEEP_BUFFER		(HZ/20)
+#define SLEEP_BUFFER		(HZ/100)
 #define NODE_THRESHOLD		125
-#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO / 100)
+#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)
 
 /*
  * If a task is 'interactive' then we reinsert it in the active
@@ -399,7 +399,7 @@ static inline void activate_task(task_t 
 		 */
 		if (sleep_time > MIN_SLEEP_AVG){
 			p->avg_start = jiffies - MIN_SLEEP_AVG;
-			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 1) /
+			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 2) /
 				MAX_BONUS;
 		} else {
 			/*
@@ -413,14 +413,10 @@ static inline void activate_task(task_t 
 			p->sleep_avg += sleep_time;
 
 			/*
-			 * Give a bonus to tasks that wake early on to prevent
-			 * the problem of the denominator in the bonus equation
-			 * from continually getting larger.
+			 * Processes that sleep get pushed to a higher priority
+			 * each time they sleep
 			 */
-			if ((runtime - MIN_SLEEP_AVG) < MAX_SLEEP_AVG)
-				p->sleep_avg += (runtime - p->sleep_avg) *
-					(MAX_SLEEP_AVG + MIN_SLEEP_AVG - runtime) *
-					(MAX_BONUS - INTERACTIVE_DELTA) / MAX_BONUS / MAX_SLEEP_AVG;
+			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
 
 			/*
 			 * Keep a small buffer of SLEEP_BUFFER sleep_avg to
@@ -1311,7 +1307,7 @@ void scheduler_tick(int user_ticks, int 
 			enqueue_task(p, rq->expired);
 		} else
 			enqueue_task(p, rq->active);
-	} else if (unlikely(p->prio < effective_prio(p))){
+	} else if (p->prio < effective_prio(p)){
 		/*
 		 * Tasks that have lowered their priority are put to the end
 		 * of the active array with their remaining timeslice



* Re: [PATCH] O6int for interactivity
  2003-07-16 14:30 [PATCH] O6int for interactivity Con Kolivas
@ 2003-07-16 15:22 ` Felipe Alfaro Solana
  2003-07-16 19:55   ` Marc-Christian Petersen
  2003-07-16 17:08 ` Valdis.Kletnieks
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 44+ messages in thread
From: Felipe Alfaro Solana @ 2003-07-16 15:22 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux kernel mailing list, Andrew Morton, Zwane Mwaikambo

On Wed, 2003-07-16 at 16:30, Con Kolivas wrote:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for 
> desktops. It appears possible to do this without moving to nanosecond 
> resolution.
> 
> This one makes a massive difference... Please test this to death.

Oh, my god... This is nearly perfect! :-)
On 2.6.0-test1-mm1 with o6int.patch, I can't reproduce XMMS initial
starvation anymore and X feels smoother under heavy load.
Nice... ;-)



* Re: [PATCH] O6int for interactivity
  2003-07-16 14:30 [PATCH] O6int for interactivity Con Kolivas
  2003-07-16 15:22 ` Felipe Alfaro Solana
@ 2003-07-16 17:08 ` Valdis.Kletnieks
  2003-07-16 21:59 ` Wiktor Wodecki
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Valdis.Kletnieks @ 2003-07-16 17:08 UTC (permalink / raw)
  To: Con Kolivas
  Cc: linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo


On Thu, 17 Jul 2003 00:30:25 +1000, Con Kolivas said:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for 
> desktops. It appears possible to do this without moving to nanosecond 
> resolution.
> 
> This one makes a massive difference... Please test this to death.

This one looks *awesome* here - the base -mm1 version (which was -O5int if I
remember right) was still subject to very tiny stutters (they sound like "clicks")
in xmms (everybody's favorite tester ;) under some conditions (changing folders
in the Exmh mail client was usually good for a click).  -O6int has stuttered
exactly once in the past hour, and that was with Exmh going, a large grep
running, a sudden influx of fetchmail/sendmail/procmail (probably 30-50 fork/
exec pairs/sec there), launching Mozilla (oink ;), and something else big all at the
same time (in other words, under as extreme a load as this laptop ever sees in
actual production usage).

/Valdis



* Re: [PATCH] O6int for interactivity
  2003-07-16 15:22 ` Felipe Alfaro Solana
@ 2003-07-16 19:55   ` Marc-Christian Petersen
  0 siblings, 0 replies; 44+ messages in thread
From: Marc-Christian Petersen @ 2003-07-16 19:55 UTC (permalink / raw)
  To: Con Kolivas, Felipe Alfaro Solana
  Cc: linux kernel mailing list, Andrew Morton, Zwane Mwaikambo

On Wednesday 16 July 2003 17:22, Felipe Alfaro Solana wrote:

Hi Con,

> > This one makes a massive difference... Please test this to death.
> Oh, my god... This is nearly perfect! :-)
> On 2.6.0-test1-mm1 with o6int.patch, I can't reproduce XMMS initial
> starvation anymore and X feels smoother under heavy load.
> Nice... ;-)
hmm, I really wonder why I don't see any difference on my box.

1. "make -j2 bzImage modules" slows down my box a lot.

2. kmail is as slow as a dog during make -j2

3. xterm needs ~5 seconds to open up during make -j2

4. xmms does not skip

5. I've tried Felipe's suggestions, which are:
	#define PRIO_BONUS_RATIO        45
	#define INTERACTIVE_DELTA       4
	#define MAX_SLEEP_AVG           (HZ)
	#define STARVATION_LIMIT        (HZ)
   At least with these changes kmail is much, much faster, but still not
   as fast as without the compilation. xterm still needs ~5 seconds to open up.

6. Xterm: "ls -lsa" in a directory with ~1200 files

	2.6.0-test1-mm1 + O6int:
	------------------------
	real    0m12.468s
	user    0m0.170s
	sys     0m0.057s

	2.4.20-wolk4.4: O(1) from latest -aa tree
	-----------------------------------------
	real    0m0.689s
	user    0m0.031s
	sys     0m0.011s

7. playing an mpeg with mplayer while "make -j2 bzImage modules" runs makes the
   movie skip some frames every ~10 seconds.

8. I've also tried min_timeslice == max_timeslice (10) without much difference :-(
   I remember that this helped a lot in earlier 2.5 kernels.


I have to say that the XMMS issue is really less important to me. I want a 
kernel where I can "make -j<huge number> bzImage modules" and not notice the 
compilation without renicing all the gcc instances.

Machine:
--------
Celeron 1.3GHz
512MB RAM
2x IDE (UDMA100) 60/40 GB
1GB SWAP, 512MB on each disk (same priority)
ext3fs (data=ordered)
anticipatory I/O scheduler
XFree 4.3
WindowMaker 0.82-CVS


Is my box the only one on earth that doesn't like the scheduler fixups? ;)

ciao, Marc



* Re: [PATCH] O6int for interactivity
  2003-07-16 14:30 [PATCH] O6int for interactivity Con Kolivas
  2003-07-16 15:22 ` Felipe Alfaro Solana
  2003-07-16 17:08 ` Valdis.Kletnieks
@ 2003-07-16 21:59 ` Wiktor Wodecki
  2003-07-16 22:30   ` Con Kolivas
  2003-07-16 22:12 ` Davide Libenzi
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 44+ messages in thread
From: Wiktor Wodecki @ 2003-07-16 21:59 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux-kernel


Hello,

I have been away from the computer for a couple of hours, and now
everything is very choppy. For example, I'm transferring some huge data
over nfs to another linux box (on sun, tho) where there is not enough
space available. It happens that in the middle of the copying the
process 'hangs'. I cannot interrupt it with ctrl-[cz]; it takes a couple
of seconds (~20). I cannot tell for sure that the problem is with the O6
patch, since I haven't tried it on another kernel yet.

On Thu, Jul 17, 2003 at 12:30:25AM +1000, Con Kolivas wrote:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for 
> desktops. It appears possible to do this without moving to nanosecond 
> resolution.
> 
> This one makes a massive difference... Please test this to death.
> 
> Changes:
> The big change is in the way sleep_avg is incremented. Any amount of sleep 
> will now raise you by at least one priority with each wakeup. This causes 
> massive differences to startup time, extremely rapid conversion to interactive 
> state, and recovery from non-interactive state rapidly as well (prevents X 
> stalling after thrashing around under high loads for many seconds).
> 
> The sleep buffer was dropped to just 10ms. This has the effect of causing mild 
> round robinning of very interactive tasks if they run for more than 10ms. The 
> requeuing was changed from (unlikely()) to an ordinary if.. branch as this 
> will be hit much more now.
> 
> MAX_BONUS as a #define was made easier to understand
> 
> Idle tasks were made slightly less interactive to prevent cpu hogs from 
> becoming interactive on their very first wakeup.
> 
> Con
> 
> This patch-O6int-0307170012 applies on top of 2.6.0-test1-mm1 and can be found 
> here:
> http://kernel.kolivas.org/2.5
> 
> and here:
> 
> --- linux-2.6.0-test1-mm1/kernel/sched.c	2003-07-16 20:27:32.000000000 +1000
> +++ linux-2.6.0-testck1/kernel/sched.c	2003-07-17 00:13:24.000000000 +1000
> @@ -76,9 +76,9 @@
>  #define MIN_SLEEP_AVG		(HZ)
>  #define MAX_SLEEP_AVG		(10*HZ)
>  #define STARVATION_LIMIT	(10*HZ)
> -#define SLEEP_BUFFER		(HZ/20)
> +#define SLEEP_BUFFER		(HZ/100)
>  #define NODE_THRESHOLD		125
> -#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO / 100)
> +#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)
>  
>  /*
>   * If a task is 'interactive' then we reinsert it in the active
> @@ -399,7 +399,7 @@ static inline void activate_task(task_t 
>  		 */
>  		if (sleep_time > MIN_SLEEP_AVG){
>  			p->avg_start = jiffies - MIN_SLEEP_AVG;
> -			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 1) /
> +			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 2) /
>  				MAX_BONUS;
>  		} else {
>  			/*
> @@ -413,14 +413,10 @@ static inline void activate_task(task_t 
>  			p->sleep_avg += sleep_time;
>  
>  			/*
> -			 * Give a bonus to tasks that wake early on to prevent
> -			 * the problem of the denominator in the bonus equation
> -			 * from continually getting larger.
> +			 * Processes that sleep get pushed to a higher priority
> +			 * each time they sleep
>  			 */
> -			if ((runtime - MIN_SLEEP_AVG) < MAX_SLEEP_AVG)
> -				p->sleep_avg += (runtime - p->sleep_avg) *
> -					(MAX_SLEEP_AVG + MIN_SLEEP_AVG - runtime) *
> -					(MAX_BONUS - INTERACTIVE_DELTA) / MAX_BONUS / MAX_SLEEP_AVG;
> +			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
>  
>  			/*
>  			 * Keep a small buffer of SLEEP_BUFFER sleep_avg to
> @@ -1311,7 +1307,7 @@ void scheduler_tick(int user_ticks, int 
>  			enqueue_task(p, rq->expired);
>  		} else
>  			enqueue_task(p, rq->active);
> -	} else if (unlikely(p->prio < effective_prio(p))){
> +	} else if (p->prio < effective_prio(p)){
>  		/*
>  		 * Tasks that have lowered their priority are put to the end
>  		 * of the active array with their remaining timeslice
> 

-- 
Regards,

Wiktor Wodecki



* Re: [PATCH] O6int for interactivity
  2003-07-16 14:30 [PATCH] O6int for interactivity Con Kolivas
                   ` (2 preceding siblings ...)
  2003-07-16 21:59 ` Wiktor Wodecki
@ 2003-07-16 22:12 ` Davide Libenzi
  2003-07-17  0:33   ` Con Kolivas
  2003-07-17  3:05 ` Wes Janzen
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 44+ messages in thread
From: Davide Libenzi @ 2003-07-16 22:12 UTC (permalink / raw)
  To: Con Kolivas
  Cc: linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

On Thu, 17 Jul 2003, Con Kolivas wrote:

> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
> desktops. It appears possible to do this without moving to nanosecond
> resolution.
>
> This one makes a massive difference... Please test this to death.
>
> Changes:
> The big change is in the way sleep_avg is incremented. Any amount of sleep
> will now raise you by at least one priority with each wakeup. This causes
> massive differences to startup time, extremely rapid conversion to interactive
> state, and recovery from non-interactive state rapidly as well (prevents X
> stalling after thrashing around under high loads for many seconds).
>
> The sleep buffer was dropped to just 10ms. This has the effect of causing mild
> round robinning of very interactive tasks if they run for more than 10ms. The
> requeuing was changed from (unlikely()) to an ordinary if.. branch as this
> will be hit much more now.

Con, I'll make a few notes on the code and a final comment.



> -#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO / 100)
> +#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)

Why did you bolt in the 40 value ? It really comes from (MAX_USER_PRIO - MAX_RT_PRIO)
and you will have another place to change if the number of slots ever changes.
If you want to make it clearer, stick a comment on it.



> +			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;

I don't have the full code so I cannot see what "runtime" is, but if
"runtime" is the time the task ran, this is :

p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;

(in any case a non-decreasing function of "runtime" )
Are you sure you want to reward tasks that actually ran more ?
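
As a quick numeric check of that approximation (a user-space sketch, not
kernel code; the MAX_BONUS value of 10 is an assumption): algebraically
(s*MAX_BONUS/r + 1)*r/MAX_BONUS equals s + r/MAX_BONUS, so the two
expressions differ only by integer truncation.

#include <stdio.h>

#define MAX_BONUS	10	/* assumed value */

int main(void)
{
	unsigned long s, r;

	for (r = 100; r <= 10000; r *= 10)
		for (s = 0; s < r; s += r / 4) {
			unsigned long patched = (s * MAX_BONUS / r + 1) * r / MAX_BONUS;
			unsigned long approx  = s + r / MAX_BONUS;

			printf("s=%5lu r=%5lu patch=%5lu s+r/MAX_BONUS=%5lu\n",
			       s, r, patched, approx);
		}
	return 0;
}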


Con, you cannot follow the XMMS thingy, otherwise you'll end up bolting in
the XMMS sleep->burn pattern and you'll probably break make -j plus RealPlay,
for example. Multimedia players are really tricky since they require strict
timings and force you to create a special super-interactive treatment
inside the code. Interactivity on my box running moderately high loads is
very good for my desktop use. Maybe audio will skip here (didn't try), but
I believe that following the fix-XMMS thingy is really bad. I believe we
should try to make the desktop feel interactive within human tolerances
and not within the strict timings of MM apps. If the audio skips when
dragging an X window around like crazy using the filled mode on a slow CPU,
we shouldn't be much worried about it, for example. If audio skips when
hitting the refresh button of Mozilla, then yes, it should be fixed. And the
more super-interactive patterns you add, the more exploitable the scheduler
will be. I recommend that, after making changes, you get this:

http://www.xmailserver.org/linux-patches/irman2.c

and run it with different -n (number of tasks) and -b (CPU burn ms time).
At the same time, try to build a kernel, for example. Then you will realize
that interactivity is not the biggest problem the scheduler has right now.
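
For readers who have not seen such testers, the general sleep-then-burn
pattern being exercised looks roughly like this (a rough sketch only, not
irman2 itself; see the URL above for the real tool, and the 2ms/8ms timings
are purely illustrative):

#include <sys/time.h>
#include <unistd.h>

/* spin on the CPU for roughly ms milliseconds */
static void burn_ms(long ms)
{
	struct timeval start, now;

	gettimeofday(&start, NULL);
	do {
		gettimeofday(&now, NULL);
	} while ((now.tv_sec - start.tv_sec) * 1000 +
		 (now.tv_usec - start.tv_usec) / 1000 < ms);
}

int main(void)
{
	/* nap briefly so the scheduler scores us interactive, then burn */
	for (;;) {
		usleep(2000);
		burn_ms(8);
	}
}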




- Davide



* Re: [PATCH] O6int for interactivity
  2003-07-16 21:59 ` Wiktor Wodecki
@ 2003-07-16 22:30   ` Con Kolivas
  0 siblings, 0 replies; 44+ messages in thread
From: Con Kolivas @ 2003-07-16 22:30 UTC (permalink / raw)
  To: Wiktor Wodecki, Wiktor Wodecki; +Cc: linux-kernel

On Thu, 17 Jul 2003 07:59, Wiktor Wodecki wrote:
> Hello,
>
> I have been gone from the computer for a couple of hours, and now
> everything is very chopping. For example I'm transfering some huge data

Small bug I noticed after sleeping on the patch. I'll post a fix later today.

Con

> over nfs to another linux box (on sun, tho) where there is not enough
> space available. It happens that in the middle of the copying the
> process 'hangs'. I cannot interrupt it with ctrl-[cz], it takes a couple
> of seconds (~20). I cannot tell for sure that the problem is with the O6
> patch, since I haven't done it on an other kernel, yet.
>
> On Thu, Jul 17, 2003 at 12:30:25AM +1000, Con Kolivas wrote:
> > O*int patches trying to improve the interactivity of the 2.5/6 scheduler
> > for desktops. It appears possible to do this without moving to nanosecond
> > resolution.
> >
> > This one makes a massive difference... Please test this to death.
> >
> > Changes:
> > The big change is in the way sleep_avg is incremented. Any amount of
> > sleep will now raise you by at least one priority with each wakeup. This
> > causes massive differences to startup time, extremely rapid conversion to
> > interactive state, and recovery from non-interactive state rapidly as
> > well (prevents X stalling after thrashing around under high loads for
> > many seconds).
> >
> > The sleep buffer was dropped to just 10ms. This has the effect of causing
> > mild round robinning of very interactive tasks if they run for more than
> > 10ms. The requeuing was changed from (unlikely()) to an ordinary if..
> > branch as this will be hit much more now.
> >
> > MAX_BONUS as a #define was made easier to understand
> >
> > Idle tasks were made slightly less interactive to prevent cpu hogs from
> > becoming interactive on their very first wakeup.
> >
> > Con
> >
> > This patch-O6int-0307170012 applies on top of 2.6.0-test1-mm1 and can be
> > found here:
> > http://kernel.kolivas.org/2.5
> >
> > and here:
> >
> > --- linux-2.6.0-test1-mm1/kernel/sched.c	2003-07-16 20:27:32.000000000
> > +1000 +++ linux-2.6.0-testck1/kernel/sched.c	2003-07-17
> > 00:13:24.000000000 +1000 @@ -76,9 +76,9 @@
> >  #define MIN_SLEEP_AVG		(HZ)
> >  #define MAX_SLEEP_AVG		(10*HZ)
> >  #define STARVATION_LIMIT	(10*HZ)
> > -#define SLEEP_BUFFER		(HZ/20)
> > +#define SLEEP_BUFFER		(HZ/100)
> >  #define NODE_THRESHOLD		125
> > -#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO /
> > 100) +#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)
> >
> >  /*
> >   * If a task is 'interactive' then we reinsert it in the active
> > @@ -399,7 +399,7 @@ static inline void activate_task(task_t
> >  		 */
> >  		if (sleep_time > MIN_SLEEP_AVG){
> >  			p->avg_start = jiffies - MIN_SLEEP_AVG;
> > -			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 1) /
> > +			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 2) /
> >  				MAX_BONUS;
> >  		} else {
> >  			/*
> > @@ -413,14 +413,10 @@ static inline void activate_task(task_t
> >  			p->sleep_avg += sleep_time;
> >
> >  			/*
> > -			 * Give a bonus to tasks that wake early on to prevent
> > -			 * the problem of the denominator in the bonus equation
> > -			 * from continually getting larger.
> > +			 * Processes that sleep get pushed to a higher priority
> > +			 * each time they sleep
> >  			 */
> > -			if ((runtime - MIN_SLEEP_AVG) < MAX_SLEEP_AVG)
> > -				p->sleep_avg += (runtime - p->sleep_avg) *
> > -					(MAX_SLEEP_AVG + MIN_SLEEP_AVG - runtime) *
> > -					(MAX_BONUS - INTERACTIVE_DELTA) / MAX_BONUS / MAX_SLEEP_AVG;
> > +			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime /
> > MAX_BONUS;
> >
> >  			/*
> >  			 * Keep a small buffer of SLEEP_BUFFER sleep_avg to
> > @@ -1311,7 +1307,7 @@ void scheduler_tick(int user_ticks, int
> >  			enqueue_task(p, rq->expired);
> >  		} else
> >  			enqueue_task(p, rq->active);
> > -	} else if (unlikely(p->prio < effective_prio(p))){
> > +	} else if (p->prio < effective_prio(p)){
> >  		/*
> >  		 * Tasks that have lowered their priority are put to the end
> >  		 * of the active array with their remaining timeslice
> >



* Re: [PATCH] O6int for interactivity
  2003-07-16 22:12 ` Davide Libenzi
@ 2003-07-17  0:33   ` Con Kolivas
  2003-07-17  0:35     ` Davide Libenzi
  2003-07-17  0:48     ` Wade
  0 siblings, 2 replies; 44+ messages in thread
From: Con Kolivas @ 2003-07-17  0:33 UTC (permalink / raw)
  To: Davide Libenzi
  Cc: linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

Quoting Davide Libenzi <davidel@xmailserver.org>:

> On Thu, 17 Jul 2003, Con Kolivas wrote:
> 
> > O*int patches trying to improve the interactivity of the 2.5/6 scheduler
> for
> > desktops. It appears possible to do this without moving to nanosecond
> > resolution.
> >
> > This one makes a massive difference... Please test this to death.
> >
> > Changes:
> > The big change is in the way sleep_avg is incremented. Any amount of sleep
> > will now raise you by at least one priority with each wakeup. This causes
> > massive differences to startup time, extremely rapid conversion to
> interactive
> > state, and recovery from non-interactive state rapidly as well (prevents X
> > stalling after thrashing around under high loads for many seconds).
> >
> > The sleep buffer was dropped to just 10ms. This has the effect of causing
> mild
> > round robinning of very interactive tasks if they run for more than 10ms.
> The
> > requeuing was changed from (unlikely()) to an ordinary if.. branch as this
> > will be hit much more now.
> 
> Con, I'll make a few notes on the code and a final comment.
> 
> 
> 
> > -#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) * 
PRIO_BONUS_RATIO /
> 100)
> > +#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)
> 
> Why did you bolt in the 40 value ? It really comes from (MAX_USER_PRIO -
> MAX_RT_PRIO)
> and you will have another place to change if the number of slots will
> change. If you want to clarify better, stick a comment.

Granted, will revert. I agree; if you don't understand it, you shouldn't be 
fiddling with it.

> 
> 
> 
> > +			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) 
* runtime /
> MAX_BONUS;
> 
> I don't have the full code so I cannot see what "runtime" is, but if
> "runtime" is the time the task ran, this is :
> 
> p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
> 
> (in any case a non-decreasing function of "runtime" )
> Are you sure you want to reward tasks that actually ran more ?


That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG. Fix will 
be posted very soon.
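
For what it's worth, a sketch of what that limit could look like, in the same
toy user-space form as the sketch earlier in the thread (this is an assumption
drawn from the sentence above, not the actual O6.1 patch):

#define MAX_BONUS	10		/* assumed: 40 * 25 / 100 */
#define MAX_SLEEP_AVG	(10 * 1000)	/* 10*HZ, assuming HZ=1000 */

/* clamp runtime before applying the O6int formula */
static unsigned long bump_sleep_avg_clamped(unsigned long sleep_avg,
					    unsigned long sleep_time,
					    unsigned long runtime)
{
	if (runtime > MAX_SLEEP_AVG)
		runtime = MAX_SLEEP_AVG;
	sleep_avg += sleep_time;
	return (sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
}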

> 
> 
> Con, you cannot follow the XMMS thingy otherwise you'll end up bolting in
> the XMMS sleep->burn pattern and you'll probably break the make-j+RealPlay
> for example. MultiMedia players are really tricky since they require strict
> timings and forces you to create a special super-interactive treatment
> inside the code. Interactivity in my box running moderate high loads is
> very good for my desktop use. Maybe audio will skip here (didn't try) but
> I believe that following the fix-XMMS thingy is really bad. I believe we
> should try to make the desktop to feel interactive with human tollerances
> and not with strict timings like MM apps. If the audio skips when dragging
> like crazy a X window using the filled mode on a slow CPU, we shouldn't be
> much worried about it for example. If audio skip when hitting the refresh
> button of Mozilla, then yes it should be fixed. And the more you add super
> interactive patterns, the more the scheduler will be exploitable. I
> recommend you after doing changes to get this :
> 
> http://www.xmailserver.org/linux-patches/irman2.c
> 
> and run it with different -n (number of tasks) and -b (CPU burn ms time).
> At the same time try to build a kernel for example. Then you will realize
> that interactivity is not the bigger problem that the scheduler has right
> now.

Please don't assume I'm writing an xmms scheduler. I've done a lot more testing 
than just xmms.

Con


* Re: [PATCH] O6int for interactivity
  2003-07-17  0:33   ` Con Kolivas
@ 2003-07-17  0:35     ` Davide Libenzi
  2003-07-17  1:12       ` Con Kolivas
  2003-07-17  0:48     ` Wade
  1 sibling, 1 reply; 44+ messages in thread
From: Davide Libenzi @ 2003-07-17  0:35 UTC (permalink / raw)
  To: Con Kolivas
  Cc: linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

On Thu, 17 Jul 2003, Con Kolivas wrote:

> > > +			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1)
> * runtime /
> > MAX_BONUS;
> >
> > I don't have the full code so I cannot see what "runtime" is, but if
> > "runtime" is the time the task ran, this is :
> >
> > p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
> >
> > (in any case a non-decreasing function of "runtime" )
> > Are you sure you want to reward tasks that actually ran more ?
>
>
> That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG. Fix will
> be posted very soon.

Con, it is not the limit. You're making sleep_avg a non-decreasing
function of "runtime". Basically you are rewarding tasks that did burn
more CPU (if runtime is what the name suggests). Are you sure this is what
you want ?


> > Con, you cannot follow the XMMS thingy otherwise you'll end up bolting in
> > the XMMS sleep->burn pattern and you'll probably break the make-j+RealPlay
> > for example. MultiMedia players are really tricky since they require strict
> > timings and forces you to create a special super-interactive treatment
> > inside the code. Interactivity in my box running moderate high loads is
> > very good for my desktop use. Maybe audio will skip here (didn't try) but
> > I believe that following the fix-XMMS thingy is really bad. I believe we
> > should try to make the desktop to feel interactive with human tollerances
> > and not with strict timings like MM apps. If the audio skips when dragging
> > like crazy a X window using the filled mode on a slow CPU, we shouldn't be
> > much worried about it for example. If audio skip when hitting the refresh
> > button of Mozilla, then yes it should be fixed. And the more you add super
> > interactive patterns, the more the scheduler will be exploitable. I
> > recommend you after doing changes to get this :
> >
> > http://www.xmailserver.org/linux-patches/irman2.c
> >
> > and run it with different -n (number of tasks) and -b (CPU burn ms time).
> > At the same time try to build a kernel for example. Then you will realize
> > that interactivity is not the bigger problem that the scheduler has right
> > now.
>
> Please don't assume I'm writing an xmms scheduler. I've done a lot more testing
> than xmms.

Ok, I'm feeling better already ;)



- Davide



* Re: [PATCH] O6int for interactivity
  2003-07-17  0:33   ` Con Kolivas
  2003-07-17  0:35     ` Davide Libenzi
@ 2003-07-17  0:48     ` Wade
  2003-07-17  1:15       ` Con Kolivas
  1 sibling, 1 reply; 44+ messages in thread
From: Wade @ 2003-07-17  0:48 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux-kernel

Con Kolivas wrote:
> Quoting Davide Libenzi <davidel@xmailserver.org>:
> 
> 
>>On Thu, 17 Jul 2003, Con Kolivas wrote:
>>
>>
>>>O*int patches trying to improve the interactivity of the 2.5/6 scheduler
>>
>>for
>>
>>>desktops. It appears possible to do this without moving to nanosecond
>>>resolution.
>>>
>>>This one makes a massive difference... Please test this to death.
>>>
>>>Changes:
>>>The big change is in the way sleep_avg is incremented. Any amount of sleep
>>>will now raise you by at least one priority with each wakeup. This causes
>>>massive differences to startup time, extremely rapid conversion to
>>
>>interactive
>>
>>>state, and recovery from non-interactive state rapidly as well (prevents X
>>>stalling after thrashing around under high loads for many seconds).
>>>
>>>The sleep buffer was dropped to just 10ms. This has the effect of causing
>>
>>mild
>>
>>>round robinning of very interactive tasks if they run for more than 10ms.
>>
>>The
>>
>>>requeuing was changed from (unlikely()) to an ordinary if.. branch as this
>>>will be hit much more now.
>>
>>Con, I'll make a few notes on the code and a final comment.
>>
>>
>>
>>
>>>-#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) * 
> 
> PRIO_BONUS_RATIO /
> 
>>100)
>>
>>>+#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)
>>
>>Why did you bolt in the 40 value ? It really comes from (MAX_USER_PRIO -
>>MAX_RT_PRIO)
>>and you will have another place to change if the number of slots will
>>change. If you want to clarify better, stick a comment.
> 
> 
> Granted. Will revert. If you don't understand it you shouldn't be fiddling with 
> it I agree.
> 
> 
>>
>>
>>>+			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) 
> 
> * runtime /
> 
>>MAX_BONUS;
>>
>>I don't have the full code so I cannot see what "runtime" is, but if
>>"runtime" is the time the task ran, this is :
>>
>>p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
>>
>>(in any case a non-decreasing function of "runtime" )
>>Are you sure you want to reward tasks that actually ran more ?
> 
> 
> 
> That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG. Fix will 
> be posted very soon.

Should any harm come from running O6int without the phantom patch
mentioned?




* Re: [PATCH] O6int for interactivity
  2003-07-17  0:35     ` Davide Libenzi
@ 2003-07-17  1:12       ` Con Kolivas
  0 siblings, 0 replies; 44+ messages in thread
From: Con Kolivas @ 2003-07-17  1:12 UTC (permalink / raw)
  To: Davide Libenzi
  Cc: linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

On Thu, 17 Jul 2003 10:35, Davide Libenzi wrote:
> On Thu, 17 Jul 2003, Con Kolivas wrote:
> > > > +			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1)
> >
> > * runtime /
> >
> > > MAX_BONUS;
> > >
> > > I don't have the full code so I cannot see what "runtime" is, but if
> > > "runtime" is the time the task ran, this is :
> > >
> > > p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
> > >
> > > (in any case a non-decreasing function of "runtime" )
> > > Are you sure you want to reward tasks that actually ran more ?
> >
> > That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG.
> > Fix will be posted very soon.
>
> Con, it is not the limit. You're making sleep_avg a non-decreasing
> function of "runtime". Basically you are rewarding tasks that did burn
> more CPU (if runtime is what the name suggests). Are you sure this is what
> you want ?

It's not CPU runtime; it is the time since the process started.

>
> > > Con, you cannot follow the XMMS thingy otherwise you'll end up bolting
> > > in the XMMS sleep->burn pattern and you'll probably break the
> > > make-j+RealPlay for example. MultiMedia players are really tricky since
> > > they require strict timings and forces you to create a special
> > > super-interactive treatment inside the code. Interactivity in my box
> > > running moderate high loads is very good for my desktop use. Maybe
> > > audio will skip here (didn't try) but I believe that following the
> > > fix-XMMS thingy is really bad. I believe we should try to make the
> > > desktop to feel interactive with human tollerances and not with strict
> > > timings like MM apps. If the audio skips when dragging like crazy a X
> > > window using the filled mode on a slow CPU, we shouldn't be much
> > > worried about it for example. If audio skip when hitting the refresh
> > > button of Mozilla, then yes it should be fixed. And the more you add
> > > super interactive patterns, the more the scheduler will be exploitable.
> > > I recommend you after doing changes to get this :
> > >
> > > http://www.xmailserver.org/linux-patches/irman2.c
> > >
> > > and run it with different -n (number of tasks) and -b (CPU burn ms
> > > time). At the same time try to build a kernel for example. Then you
> > > will realize that interactivity is not the bigger problem that the
> > > scheduler has right now.
> >
> > Please don't assume I'm writing an xmms scheduler. I've done a lot more
> > testing than xmms.
>
> Ok, I'm feeling better already ;)

Me too :)

Con



* Re: [PATCH] O6int for interactivity
  2003-07-17  0:48     ` Wade
@ 2003-07-17  1:15       ` Con Kolivas
  2003-07-17  1:27         ` Eugene Teo
  0 siblings, 1 reply; 44+ messages in thread
From: Con Kolivas @ 2003-07-17  1:15 UTC (permalink / raw)
  To: Wade; +Cc: linux-kernel

On Thu, 17 Jul 2003 10:48, Wade wrote:
> Con Kolivas wrote:
> > Quoting Davide Libenzi <davidel@xmailserver.org>:
> >>On Thu, 17 Jul 2003, Con Kolivas wrote:
> >>>O*int patches trying to improve the interactivity of the 2.5/6 scheduler
> >>
> >>for
> >>
> >>>desktops. It appears possible to do this without moving to nanosecond
> >>>resolution.
> >>>
> >>>This one makes a massive difference... Please test this to death.
> >>>
> >>>Changes:
> >>>The big change is in the way sleep_avg is incremented. Any amount of
> >>> sleep will now raise you by at least one priority with each wakeup.
> >>> This causes massive differences to startup time, extremely rapid
> >>> conversion to
> >>
> >>interactive
> >>
> >>>state, and recovery from non-interactive state rapidly as well (prevents
> >>> X stalling after thrashing around under high loads for many seconds).
> >>>
> >>>The sleep buffer was dropped to just 10ms. This has the effect of
> >>> causing
> >>
> >>mild
> >>
> >>>round robinning of very interactive tasks if they run for more than
> >>> 10ms.
> >>
> >>The
> >>
> >>>requeuing was changed from (unlikely()) to an ordinary if.. branch as
> >>> this will be hit much more now.
> >>
> >>Con, I'll make a few notes on the code and a final comment.
> >>
> >>>-#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) *
> >
> > PRIO_BONUS_RATIO /
> >
> >>100)
> >>
> >>>+#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)
> >>
> >>Why did you bolt in the 40 value ? It really comes from (MAX_USER_PRIO -
> >>MAX_RT_PRIO)
> >>and you will have another place to change if the number of slots will
> >>change. If you want to clarify better, stick a comment.
> >
> > Granted. Will revert. If you don't understand it you shouldn't be
> > fiddling with it I agree.
> >
> >>>+			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1)
> >
> > * runtime /
> >
> >>MAX_BONUS;
> >>
> >>I don't have the full code so I cannot see what "runtime" is, but if
> >>"runtime" is the time the task ran, this is :
> >>
> >>p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
> >>
> >>(in any case a non-decreasing function of "runtime" )
> >>Are you sure you want to reward tasks that actually ran more ?
> >
> > That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG.
> > Fix will be posted very soon.
>
> Should any harm come from running 06int without the phantom patch
> mentioned?

No harm, but applications that have been running for a while (hours?) will 
eventually not run quite as smoothly. I promise the fix is only one 10-minute 
kernel compile away :)

Con



* Re: [PATCH] O6int for interactivity
  2003-07-17  1:15       ` Con Kolivas
@ 2003-07-17  1:27         ` Eugene Teo
  0 siblings, 0 replies; 44+ messages in thread
From: Eugene Teo @ 2003-07-17  1:27 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Wade, linux-kernel

<quote sender="Con Kolivas">
> On Thu, 17 Jul 2003 10:48, Wade wrote:
> > Con Kolivas wrote:
> > > Quoting Davide Libenzi <davidel@xmailserver.org>:
> > >>On Thu, 17 Jul 2003, Con Kolivas wrote:
> > Should any harm come from running 06int without the phantom patch
> > mentioned?
> 
> No harm, but applications that have been running for a while (?hours) will 
> eventually not run quite as smoothly. I promise it is only one 10 minute 
> kernel compile away :)

I am compiling my kernel with your patch. Is there any particular reason
why applications that have been running for a while will eventually not run
smoothly? OK, I shall find out myself :D

Eugene



* Re: [PATCH] O6int for interactivity
  2003-07-16 14:30 [PATCH] O6int for interactivity Con Kolivas
                   ` (3 preceding siblings ...)
  2003-07-16 22:12 ` Davide Libenzi
@ 2003-07-17  3:05 ` Wes Janzen
  2003-07-17  9:05 ` Alex Riesen
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Wes Janzen @ 2003-07-17  3:05 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux kernel mailing list

This is much better than 2.5.75-mm3 with your patch.  XMMS seems to work 
well, and applications in X are much more responsive, which is more 
important to me anyway.  Part of the sluggishness with 2.5.75-mm3 was 
that the radeon DRM code apparently had problems: X was using 20-30% of 
the CPU time, which probably contributed to the laggy feeling.  Still, 
even though 2.5.73-mm3 was crippled, this is also better than any of the 
other 2.5.7x patched kernels I've tried.

Windows are sometimes pretty slow to redraw under load, though.  If I 
have gimp scaling an image on workspace 2 and then switch back to 1, it 
can take up to 7 seconds to redraw Mozilla or a terminal.  This seems to 
get better the longer X has been running, though; once X has been up for 
~15 minutes I can't get it to occur anymore.

XMMS still skips when my xscreensaver changes to a different 
screensaver, but it's not as bad as before and I'm not expecting 
miracles.  I would rather a program start fast than take forever to 
start just to avoid stalling XMMS.

I'm running a K6-2 400 w/384MB.

I'll keep testing and let you know if I find any problems...

Thanks,

Wes

Con Kolivas wrote:

>O*int patches trying to improve the interactivity of the 2.5/6 scheduler for 
>desktops. It appears possible to do this without moving to nanosecond 
>resolution.
>
>This one makes a massive difference... Please test this to death.
>
>Changes:
>The big change is in the way sleep_avg is incremented. Any amount of sleep 
>will now raise you by at least one priority with each wakeup. This causes 
>massive differences to startup time, extremely rapid conversion to interactive 
>state, and recovery from non-interactive state rapidly as well (prevents X 
>stalling after thrashing around under high loads for many seconds).
>
>The sleep buffer was dropped to just 10ms. This has the effect of causing mild 
>round robinning of very interactive tasks if they run for more than 10ms. The 
>requeuing was changed from (unlikely()) to an ordinary if.. branch as this 
>will be hit much more now.
>
>MAX_BONUS as a #define was made easier to understand
>
>Idle tasks were made slightly less interactive to prevent cpu hogs from 
>becoming interactive on their very first wakeup.
>
>Con
>
>This patch-O6int-0307170012 applies on top of 2.6.0-test1-mm1 and can be found 
>here:
>http://kernel.kolivas.org/2.5
>
>and here:
>
>--- linux-2.6.0-test1-mm1/kernel/sched.c	2003-07-16 20:27:32.000000000 +1000
>+++ linux-2.6.0-testck1/kernel/sched.c	2003-07-17 00:13:24.000000000 +1000
>@@ -76,9 +76,9 @@
> #define MIN_SLEEP_AVG		(HZ)
> #define MAX_SLEEP_AVG		(10*HZ)
> #define STARVATION_LIMIT	(10*HZ)
>-#define SLEEP_BUFFER		(HZ/20)
>+#define SLEEP_BUFFER		(HZ/100)
> #define NODE_THRESHOLD		125
>-#define MAX_BONUS		((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO / 100)
>+#define MAX_BONUS		(40 * PRIO_BONUS_RATIO / 100)
> 
> /*
>  * If a task is 'interactive' then we reinsert it in the active
>@@ -399,7 +399,7 @@ static inline void activate_task(task_t 
> 		 */
> 		if (sleep_time > MIN_SLEEP_AVG){
> 			p->avg_start = jiffies - MIN_SLEEP_AVG;
>-			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 1) /
>+			p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 2) /
> 				MAX_BONUS;
> 		} else {
> 			/*
>@@ -413,14 +413,10 @@ static inline void activate_task(task_t 
> 			p->sleep_avg += sleep_time;
> 
> 			/*
>-			 * Give a bonus to tasks that wake early on to prevent
>-			 * the problem of the denominator in the bonus equation
>-			 * from continually getting larger.
>+			 * Processes that sleep get pushed to a higher priority
>+			 * each time they sleep
> 			 */
>-			if ((runtime - MIN_SLEEP_AVG) < MAX_SLEEP_AVG)
>-				p->sleep_avg += (runtime - p->sleep_avg) *
>-					(MAX_SLEEP_AVG + MIN_SLEEP_AVG - runtime) *
>-					(MAX_BONUS - INTERACTIVE_DELTA) / MAX_BONUS / MAX_SLEEP_AVG;
>+			p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
> 
> 			/*
> 			 * Keep a small buffer of SLEEP_BUFFER sleep_avg to
>@@ -1311,7 +1307,7 @@ void scheduler_tick(int user_ticks, int 
> 			enqueue_task(p, rq->expired);
> 		} else
> 			enqueue_task(p, rq->active);
>-	} else if (unlikely(p->prio < effective_prio(p))){
>+	} else if (p->prio < effective_prio(p)){
> 		/*
> 		 * Tasks that have lowered their priority are put to the end
> 		 * of the active array with their remaining timeslice
>
>
>  
>



* Re: [PATCH] O6int for interactivity
  2003-07-16 14:30 [PATCH] O6int for interactivity Con Kolivas
                   ` (4 preceding siblings ...)
  2003-07-17  3:05 ` Wes Janzen
@ 2003-07-17  9:05 ` Alex Riesen
  2003-07-17  9:14   ` Con Kolivas
       [not found] ` <Pine.LNX.4.55.0307161241280.4787@bigblue.dev.mcafeelabs.com>
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 44+ messages in thread
From: Alex Riesen @ 2003-07-17  9:05 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux kernel mailing list

Con Kolivas, Wed, Jul 16, 2003 16:30:25 +0200:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for 
> desktops. It appears possible to do this without moving to nanosecond 
> resolution.
> 
> This one makes a massive difference... Please test this to death.
> 

"tar ztf file.tar.gz" and "make something" somehow do not like each other.
Usually it is tar which becomes very slow. Directory listings are also
slow when the system is under load (about 3), which is most annoying.

UP P3-700, preempt. 2.6.0-test1-mm1 + O6-int.



* Re: [PATCH] O6int for interactivity
  2003-07-17  9:05 ` Alex Riesen
@ 2003-07-17  9:14   ` Con Kolivas
  2003-07-18  7:38     ` Alex Riesen
  0 siblings, 1 reply; 44+ messages in thread
From: Con Kolivas @ 2003-07-17  9:14 UTC (permalink / raw)
  To: alexander.riesen; +Cc: linux kernel mailing list

Quoting Alex Riesen <alexander.riesen@synopsys.COM>:

> Con Kolivas, Wed, Jul 16, 2003 16:30:25 +0200:
> > O*int patches trying to improve the interactivity of the 2.5/6 scheduler
> for 
> > desktops. It appears possible to do this without moving to nanosecond 
> > resolution.
> > 
> > This one makes a massive difference... Please test this to death.
> > 
> 
> tar ztf file.tar.gz and make something somehow do not like each other.
> Usually it is tar, which became very slow. And listings of directories
> are slow if system is under load (about 3), too (most annoying).
> 
> UP P3-700, preempt. 2.6.0-test1-mm1 + O6-int.

Thanks for testing. It is distinctly possible that O6.1 addresses this problem. 
Can you please test that? It applies on top of O6 and only requires a recompile 
of sched.o.

http://kernel.kolivas.org/2.5

Con


* Re: [PATCH] O6int for interactivity
       [not found] ` <Pine.LNX.4.55.0307161241280.4787@bigblue.dev.mcafeelabs.com>
@ 2003-07-18  5:38   ` Mike Galbraith
  2003-07-18  6:34     ` Nick Piggin
  2003-07-18 13:46     ` Davide Libenzi
  0 siblings, 2 replies; 44+ messages in thread
From: Mike Galbraith @ 2003-07-18  5:38 UTC (permalink / raw)
  To: Davide Libenzi
  Cc: Con Kolivas, linux kernel mailing list, Andrew Morton,
	Felipe Alfaro Solana, Zwane Mwaikambo


At 03:12 PM 7/16/2003 -0700, Davide Libenzi wrote:

>http://www.xmailserver.org/linux-patches/irman2.c
>
>and run it with different -n (number of tasks) and -b (CPU burn ms time).
>At the same time try to build a kernel for example. Then you will realize
>that interactivity is not the bigger problem that the scheduler has right
>now.

I added an irman2 load to contest.  Con's changes O6+O6.1 stomped it flat 
[1].  irman2 is modified to run for 30s at a time, but with default parameters.

         -Mike

[1] IMHO a little too flat.  It also made a worrisome impression on apache 
bench.

[-- Attachment #2: contest.txt --]
[-- Type: text/plain, Size: 3105 bytes --]

no_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	153	94.8	0.0	0.0	1.00
2.5.70               1	153	94.1	0.0	0.0	1.00
2.6.0-test1          1	153	94.1	0.0	0.0	1.00
2.6.0-test1-mm1      1	152	94.7	0.0	0.0	1.00
cacherun:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	146	98.6	0.0	0.0	0.95
2.5.70               1	146	98.6	0.0	0.0	0.95
2.6.0-test1          1	146	98.6	0.0	0.0	0.95
2.6.0-test1-mm1      1	146	98.6	0.0	0.0	0.96
process_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	331	43.8	90.0	55.3	2.16
2.5.70               1	199	72.4	27.0	25.5	1.30
2.6.0-test1          1	264	54.5	61.0	44.3	1.73
2.6.0-test1-mm1      1	323	44.9	88.0	54.2	2.12
ctar_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	190	77.9	0.0	0.0	1.24
2.5.70               1	186	80.1	0.0	0.0	1.22
2.6.0-test1          1	213	70.4	0.0	0.0	1.39
2.6.0-test1-mm1      1	207	72.5	0.0	0.0	1.36
xtar_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	196	75.0	0.0	3.1	1.28
2.5.70               1	195	75.9	0.0	3.1	1.27
2.6.0-test1          1	193	76.7	1.0	4.1	1.26
2.6.0-test1-mm1      1	195	75.9	1.0	4.1	1.28
io_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	437	34.6	69.1	15.1	2.86
2.5.70               1	401	37.7	72.3	17.4	2.62
2.6.0-test1          1	243	61.3	48.1	17.3	1.59
2.6.0-test1-mm1      1	336	44.9	64.5	17.3	2.21
io_other:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	387	38.8	69.3	17.3	2.53
2.5.70               1	422	36.0	75.3	17.1	2.76
2.6.0-test1          1	258	57.8	55.8	19.0	1.69
2.6.0-test1-mm1      1	330	45.5	63.2	17.2	2.17
read_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	221	67.0	9.4	3.6	1.44
2.5.70               1	220	67.3	9.4	3.6	1.44
2.6.0-test1          1	200	74.0	9.7	4.0	1.31
2.6.0-test1-mm1      1	190	78.4	9.2	4.2	1.25
list_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	203	71.4	99.0	20.2	1.33
2.5.70               1	205	70.7	104.0	20.5	1.34
2.6.0-test1          1	199	72.4	102.0	21.6	1.30
2.6.0-test1-mm1      1	193	75.1	91.0	19.7	1.27
mem_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	256	57.8	34.0	1.2	1.67
2.5.70               1	252	58.7	33.0	1.2	1.65
2.6.0-test1          1	309	48.2	75.0	2.3	2.02
2.6.0-test1-mm1      1	277	53.8	38.0	1.4	1.82
dbench_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	517	28.8	5.0	35.6	3.38
2.5.70               1	424	35.1	3.0	26.7	2.77
2.6.0-test1          1	347	42.7	4.0	39.5	2.27
2.6.0-test1-mm1      1	377	39.8	4.0	36.9	2.48
ab_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69               1	300	48.3	46.0	21.7	1.96
2.5.70               1	300	48.0	46.0	22.0	1.96
2.6.0-test1          1	281	51.6	50.0	25.6	1.84
2.6.0-test1-mm1      1	229	63.3	30.0	18.3	1.51
irman2_load:
Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.6.0-test1          1	999	14.5	31.0	0.0	6.53
2.6.0-test1-mm1      1	210	69.0	6.0	0.0	1.38


* Re: [PATCH] O6int for interactivity
  2003-07-18  5:38   ` Mike Galbraith
@ 2003-07-18  6:34     ` Nick Piggin
  2003-07-18 10:18       ` Mike Galbraith
  2003-07-18 13:46     ` Davide Libenzi
  1 sibling, 1 reply; 44+ messages in thread
From: Nick Piggin @ 2003-07-18  6:34 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Davide Libenzi, Con Kolivas, linux kernel mailing list,
	Andrew Morton, Felipe Alfaro Solana, Zwane Mwaikambo



Mike Galbraith wrote:

> At 03:12 PM 7/16/2003 -0700, Davide Libenzi wrote:
>
>> http://www.xmailserver.org/linux-patches/irman2.c
>>
>> and run it with different -n (number of tasks) and -b (CPU burn ms 
>> time).
>> At the same time try to build a kernel for example. Then you will 
>> realize
>> that interactivity is not the bigger problem that the scheduler has 
>> right
>> now.
>
>
> I added an irman2 load to contest.  Con's changes 06+06.1 stomped it 
> flat [1].  irman2 is modified to run for 30s at a time, but with 
> default parameters.
>
>         -Mike
>
> [1] imho a little too flat.  it also made a worrisome impression on 
> apache bench
>
>
>------------------------------------------------------------------------
>
>no_load:
>Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
>2.5.69               1	153	94.8	0.0	0.0	1.00
>2.5.70               1	153	94.1	0.0	0.0	1.00
>2.6.0-test1          1	153	94.1	0.0	0.0	1.00
>2.6.0-test1-mm1      1	152	94.7	0.0	0.0	1.00
>cacherun:
>Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
>2.5.69               1	146	98.6	0.0	0.0	0.95
>2.5.70               1	146	98.6	0.0	0.0	0.95
>2.6.0-test1          1	146	98.6	0.0	0.0	0.95
>2.6.0-test1-mm1      1	146	98.6	0.0	0.0	0.96
>process_load:
>Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
>2.5.69               1	331	43.8	90.0	55.3	2.16
>2.5.70               1	199	72.4	27.0	25.5	1.30
>2.6.0-test1          1	264	54.5	61.0	44.3	1.73
>2.6.0-test1-mm1      1	323	44.9	88.0	54.2	2.12
>ctar_load:
>Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
>2.5.69               1	190	77.9	0.0	0.0	1.24
>2.5.70               1	186	80.1	0.0	0.0	1.22
>2.6.0-test1          1	213	70.4	0.0	0.0	1.39
>2.6.0-test1-mm1      1	207	72.5	0.0	0.0	1.36
>xtar_load:
>Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
>2.5.69               1	196	75.0	0.0	3.1	1.28
>2.5.70               1	195	75.9	0.0	3.1	1.27
>2.6.0-test1          1	193	76.7	1.0	4.1	1.26
>2.6.0-test1-mm1      1	195	75.9	1.0	4.1	1.28
>io_load:
>Kernel          [runs]	Time	CPU%	Loads	LCPU%	Ratio
>2.5.69               1	437	34.6	69.1	15.1	2.86
>2.5.70               1	401	37.7	72.3	17.4	2.62
>2.6.0-test1          1	243	61.3	48.1	17.3	1.59
>2.6.0-test1-mm1      1	336	44.9	64.5	17.3	2.21
>

Looks like gcc is getting less priority after a read completes.
Keep an eye on this please.




* Re: [PATCH] O6int for interactivity
  2003-07-17  9:14   ` Con Kolivas
@ 2003-07-18  7:38     ` Alex Riesen
       [not found]       ` <Pine.LNX.4.44.0307251628500.26172-300000@localhost.localdomain>
  0 siblings, 1 reply; 44+ messages in thread
From: Alex Riesen @ 2003-07-18  7:38 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux-kernel

Con Kolivas, Thu, Jul 17, 2003 11:14:55 +0200:
> > > O*int patches trying to improve the interactivity of the 2.5/6
> > > scheduler for desktops. It appears possible to do this without
> > > moving to nanosecond resolution.
> > 
> > tar ztf file.tar.gz and make something somehow do not like each other.
> > Usually it is tar, which became very slow. And listings of directories
> > are slow if system is under load (about 3), too (most annoying).
> > 
> > UP P3-700, preempt. 2.6.0-test1-mm1 + O6-int.
> 
> Thanks for testing. It is distinctly possible that O6.1 addresses this
> problem.  Can you please test that? It applies on top of O6 and only
> requires a recompile of sched.o.

Still no good. xine drops frames by kernel's make -j2, xmms skips while
bk pull (locally). Updates (after switching desktops in metacity) get
delayed for seconds (mozilla window redraws with www.kernel.org on it,
for example).

Priorities of xine threads were around 15-16, with one of them
constantly at 16 (the one with most cpu). gcc/as processes were 20-21.

That said, it feels better than before, though. And the last changes in
the scheduler seem to reveal more races in applications (a found rxvt
not checking for errors reading from pty).

-alex


* Re: [PATCH] O6int for interactivity
  2003-07-18  6:34     ` Nick Piggin
@ 2003-07-18 10:18       ` Mike Galbraith
  2003-07-18 10:31         ` Wiktor Wodecki
  2003-07-18 14:24         ` Con Kolivas
  0 siblings, 2 replies; 44+ messages in thread
From: Mike Galbraith @ 2003-07-18 10:18 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Davide Libenzi, Con Kolivas, linux kernel mailing list,
	Andrew Morton, Felipe Alfaro Solana, Zwane Mwaikambo

At 04:34 PM 7/18/2003 +1000, Nick Piggin wrote:

>Mike Galbraith wrote:
>
>>no_load:
>>Kernel          [runs]  Time    CPU%    Loads   LCPU%   Ratio
>>2.5.69               1  153     94.8    0.0     0.0     1.00
>>2.5.70               1  153     94.1    0.0     0.0     1.00
>>2.6.0-test1          1  153     94.1    0.0     0.0     1.00
>>2.6.0-test1-mm1      1  152     94.7    0.0     0.0     1.00
>>cacherun:
>>Kernel          [runs]  Time    CPU%    Loads   LCPU%   Ratio
>>2.5.69               1  146     98.6    0.0     0.0     0.95
>>2.5.70               1  146     98.6    0.0     0.0     0.95
>>2.6.0-test1          1  146     98.6    0.0     0.0     0.95
>>2.6.0-test1-mm1      1  146     98.6    0.0     0.0     0.96
>>process_load:
>>Kernel          [runs]  Time    CPU%    Loads   LCPU%   Ratio
>>2.5.69               1  331     43.8    90.0    55.3    2.16
>>2.5.70               1  199     72.4    27.0    25.5    1.30
>>2.6.0-test1          1  264     54.5    61.0    44.3    1.73
>>2.6.0-test1-mm1      1  323     44.9    88.0    54.2    2.12
>>ctar_load:
>>Kernel          [runs]  Time    CPU%    Loads   LCPU%   Ratio
>>2.5.69               1  190     77.9    0.0     0.0     1.24
>>2.5.70               1  186     80.1    0.0     0.0     1.22
>>2.6.0-test1          1  213     70.4    0.0     0.0     1.39
>>2.6.0-test1-mm1      1  207     72.5    0.0     0.0     1.36
>>xtar_load:
>>Kernel          [runs]  Time    CPU%    Loads   LCPU%   Ratio
>>2.5.69               1  196     75.0    0.0     3.1     1.28
>>2.5.70               1  195     75.9    0.0     3.1     1.27
>>2.6.0-test1          1  193     76.7    1.0     4.1     1.26
>>2.6.0-test1-mm1      1  195     75.9    1.0     4.1     1.28
>>io_load:
>>Kernel          [runs]  Time    CPU%    Loads   LCPU%   Ratio
>>2.5.69               1  437     34.6    69.1    15.1    2.86
>>2.5.70               1  401     37.7    72.3    17.4    2.62
>>2.6.0-test1          1  243     61.3    48.1    17.3    1.59
>>2.6.0-test1-mm1      1  336     44.9    64.5    17.3    2.21
>
>Looks like gcc is getting less priority after a read completes.
>Keep an eye on this please.

That _might_ (add salt) be priorities of kernel threads dropping too low.

I'm also seeing occasional total stalls under heavy I/O in the order of 
10-12 seconds (even the disk stops).  I have no idea if that's something in 
mm or the scheduler changes though, as I've yet to do any isolation and/or 
tinkering.  All I know at this point is that I haven't seen it in stock yet.

         -Mike 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 10:18       ` Mike Galbraith
@ 2003-07-18 10:31         ` Wiktor Wodecki
  2003-07-18 10:43           ` Con Kolivas
  2003-07-18 15:46           ` Mike Galbraith
  2003-07-18 14:24         ` Con Kolivas
  1 sibling, 2 replies; 44+ messages in thread
From: Wiktor Wodecki @ 2003-07-18 10:31 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Nick Piggin, Davide Libenzi, Con Kolivas,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

[-- Attachment #1: Type: text/plain, Size: 731 bytes --]

On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> That _might_ (add salt) be priorities of kernel threads dropping too low.
> 
> I'm also seeing occasional total stalls under heavy I/O in the order of 
> 10-12 seconds (even the disk stops).  I have no idea if that's something in 
> mm or the scheduler changes though, as I've yet to do any isolation and/or 
> tinkering.  All I know at this point is that I haven't seen it in stock yet.

I've seen this too while doing a huge nfs transfer from a 2.6 machine to
a 2.4 machine (sparc32). Thought it'd be something with the nfs changes
that went in recently, but it might be the scheduler, tho. Ah, and it is
fully reproducible.

-- 
Regards,

Wiktor Wodecki

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 10:31         ` Wiktor Wodecki
@ 2003-07-18 10:43           ` Con Kolivas
  2003-07-18 11:34             ` Wiktor Wodecki
  2003-07-18 15:46           ` Mike Galbraith
  1 sibling, 1 reply; 44+ messages in thread
From: Con Kolivas @ 2003-07-18 10:43 UTC (permalink / raw)
  To: Wiktor Wodecki, Wiktor Wodecki, Mike Galbraith
  Cc: Nick Piggin, Davide Libenzi, linux kernel mailing list,
	Andrew Morton, Felipe Alfaro Solana, Zwane Mwaikambo

On Fri, 18 Jul 2003 20:31, Wiktor Wodecki wrote:
> On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> > That _might_ (add salt) be priorities of kernel threads dropping too low.
> >
> > I'm also seeing occasional total stalls under heavy I/O in the order of
> > 10-12 seconds (even the disk stops).  I have no idea if that's something
> > in mm or the scheduler changes though, as I've yet to do any isolation
> > and/or tinkering.  All I know at this point is that I haven't seen it in
> > stock yet.
>
> I've seen this too while doing a huge nfs transfer from a 2.6 machine to
> a 2.4 machine (sparc32). Thought it'd be something with the nfs changes
> which were recently, might be the scheduler, tho. Ah, and it is fully
> reproducable.

Well, I didn't want to post this yet because I'm not sure it's a good 
workaround, but it looks like a reasonable compromise, and since you have a 
testcase it will be interesting to see whether it addresses the problem. It's 
possible that a task is being requeued every millisecond, and this change is a 
little smarter: it allows cpu hogs to run for 100ms before being round 
robinned, but requeues interactive tasks sooner. Can you try this O7, which 
applies on top of O6.1, please:

available here:
http://kernel.kolivas.org/2.5

and here:

--- linux-2.6.0-test1-mm1/kernel/sched.c	2003-07-17 19:59:16.000000000 +1000
+++ linux-2.6.0-testck1/kernel/sched.c	2003-07-18 00:10:55.000000000 +1000
@@ -1310,10 +1310,12 @@ void scheduler_tick(int user_ticks, int 
 			enqueue_task(p, rq->expired);
 		} else
 			enqueue_task(p, rq->active);
-	} else if (p->prio < effective_prio(p)){
+	} else if (!((task_timeslice(p) - p->time_slice) %
+		 (MIN_TIMESLICE * (MAX_BONUS + 1 - p->sleep_avg * MAX_BONUS / MAX_SLEEP_AVG)))){
 		/*
-		 * Tasks that have lowered their priority are put to the end
-		 * of the active array with their remaining timeslice
+		 * Running tasks get requeued with their remaining timeslice
+		 * after a period proportional to how cpu intensive they are to
+		 * minimise the duration one interactive task can starve another
 		 */
 		dequeue_task(p, rq->active);
 		set_tsk_need_resched(p);
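
For anyone following the arithmetic, here is a rough sketch (not part of the 
patch) of the period the new modulus test works out to, assuming the stock 
2.6.0-test1 values: MIN_TIMESLICE of 10ms worth of ticks, PRIO_BONUS_RATIO of 
25 (so MAX_BONUS works out to 10) and MAX_SLEEP_AVG of 10*HZ:

/*
 * Illustrative helper only, not in any posted patch: the interval at
 * which the O7 test above requeues a still-running task.
 */
static inline unsigned int requeue_period(task_t *p)
{
	/*
	 * sleep_avg near MAX_SLEEP_AVG (fully interactive):
	 *	MIN_TIMESLICE * (10 + 1 - 10) =      MIN_TIMESLICE  (~10ms)
	 * sleep_avg near 0 (pure cpu hog):
	 *	MIN_TIMESLICE * (10 + 1 - 0)  = 11 * MIN_TIMESLICE  (~110ms,
	 *	roughly the "100ms" figure mentioned above)
	 */
	return MIN_TIMESLICE *
		(MAX_BONUS + 1 - p->sleep_avg * MAX_BONUS / MAX_SLEEP_AVG);
}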


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 10:43           ` Con Kolivas
@ 2003-07-18 11:34             ` Wiktor Wodecki
  2003-07-18 11:38               ` Nick Piggin
  0 siblings, 1 reply; 44+ messages in thread
From: Wiktor Wodecki @ 2003-07-18 11:34 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Mike Galbraith, Nick Piggin, Davide Libenzi,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

[-- Attachment #1: Type: text/plain, Size: 2812 bytes --]

On Fri, Jul 18, 2003 at 08:43:05PM +1000, Con Kolivas wrote:
> On Fri, 18 Jul 2003 20:31, Wiktor Wodecki wrote:
> > On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> > > That _might_ (add salt) be priorities of kernel threads dropping too low.
> > >
> > > I'm also seeing occasional total stalls under heavy I/O in the order of
> > > 10-12 seconds (even the disk stops).  I have no idea if that's something
> > > in mm or the scheduler changes though, as I've yet to do any isolation
> > > and/or tinkering.  All I know at this point is that I haven't seen it in
> > > stock yet.
> >
> > I've seen this too while doing a huge nfs transfer from a 2.6 machine to
> > a 2.4 machine (sparc32). Thought it'd be something with the nfs changes
> > which were recently, might be the scheduler, tho. Ah, and it is fully
> > reproducable.
> 
> Well I didn't want to post this yet because I'm not sure if it's a good 
> workaround yet but it looks like a reasonable compromise, and since you have a 
> testcase it will be interesting to see if it addresses it. It's possible that 
> a task is being requeued every millisecond, and this is a little smarter. It 
> allows cpu hogs to run for 100ms before being round robinned, but shorter for 
> interactive tasks. Can you try this O7 which applies on top of O6.1 please:
> 
> available here:
> http://kernel.kolivas.org/2.5

sorry, the problem still persists. Aborting the cp takes less time, tho
(about 10 seconds now, before it was about 30 secs). I'm aborting during
a big file, FYI.

> 
> and here:
> 
> --- linux-2.6.0-test1-mm1/kernel/sched.c	2003-07-17 19:59:16.000000000 +1000
> +++ linux-2.6.0-testck1/kernel/sched.c	2003-07-18 00:10:55.000000000 +1000
> @@ -1310,10 +1310,12 @@ void scheduler_tick(int user_ticks, int 
>  			enqueue_task(p, rq->expired);
>  		} else
>  			enqueue_task(p, rq->active);
> -	} else if (p->prio < effective_prio(p)){
> +	} else if (!((task_timeslice(p) - p->time_slice) %
> +		 (MIN_TIMESLICE * (MAX_BONUS + 1 - p->sleep_avg * MAX_BONUS / MAX_SLEEP_AVG)))){
>  		/*
> -		 * Tasks that have lowered their priority are put to the end
> -		 * of the active array with their remaining timeslice
> +		 * Running tasks get requeued with their remaining timeslice
> +		 * after a period proportional to how cpu intensive they are to
> +		 * minimise the duration one interactive task can starve another
>  		 */
>  		dequeue_task(p, rq->active);
>  		set_tsk_need_resched(p);
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

-- 
Regards,

Wiktor Wodecki

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 11:34             ` Wiktor Wodecki
@ 2003-07-18 11:38               ` Nick Piggin
  2003-07-19 10:59                 ` Wiktor Wodecki
  0 siblings, 1 reply; 44+ messages in thread
From: Nick Piggin @ 2003-07-18 11:38 UTC (permalink / raw)
  To: Wiktor Wodecki
  Cc: Con Kolivas, Mike Galbraith, Davide Libenzi,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo



Wiktor Wodecki wrote:

>On Fri, Jul 18, 2003 at 08:43:05PM +1000, Con Kolivas wrote:
>
>>On Fri, 18 Jul 2003 20:31, Wiktor Wodecki wrote:
>>
>>>On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
>>>
>>>>That _might_ (add salt) be priorities of kernel threads dropping too low.
>>>>
>>>>I'm also seeing occasional total stalls under heavy I/O in the order of
>>>>10-12 seconds (even the disk stops).  I have no idea if that's something
>>>>in mm or the scheduler changes though, as I've yet to do any isolation
>>>>and/or tinkering.  All I know at this point is that I haven't seen it in
>>>>stock yet.
>>>>
>>>I've seen this too while doing a huge nfs transfer from a 2.6 machine to
>>>a 2.4 machine (sparc32). Thought it'd be something with the nfs changes
>>>which were recently, might be the scheduler, tho. Ah, and it is fully
>>>reproducable.
>>>
>>Well I didn't want to post this yet because I'm not sure if it's a good 
>>workaround yet but it looks like a reasonable compromise, and since you have a 
>>testcase it will be interesting to see if it addresses it. It's possible that 
>>a task is being requeued every millisecond, and this is a little smarter. It 
>>allows cpu hogs to run for 100ms before being round robinned, but shorter for 
>>interactive tasks. Can you try this O7 which applies on top of O6.1 please:
>>
>>available here:
>>http://kernel.kolivas.org/2.5
>>
>
>sorry, the problem still persists. Aborting the cp takes less time, tho
>(about 10 seconds now, before it was about 30 secs). I'm aborting during
>a big file, FYI.
>

OK if the IO actually stops then it shouldn't be an IO scheduler or
request allocation problem, but could you try to capture a sysrq T
trace for me during the freeze.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18  5:38   ` Mike Galbraith
  2003-07-18  6:34     ` Nick Piggin
@ 2003-07-18 13:46     ` Davide Libenzi
  1 sibling, 0 replies; 44+ messages in thread
From: Davide Libenzi @ 2003-07-18 13:46 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, linux kernel mailing list, Andrew Morton,
	Felipe Alfaro Solana, Zwane Mwaikambo

On Fri, 18 Jul 2003, Mike Galbraith wrote:

> At 03:12 PM 7/16/2003 -0700, Davide Libenzi wrote:
>
> >http://www.xmailserver.org/linux-patches/irman2.c
> >
> >and run it with different -n (number of tasks) and -b (CPU burn ms time).
> >At the same time try to build a kernel for example. Then you will realize
> >that interactivity is not the bigger problem that the scheduler has right
> >now.
>
> I added an irman2 load to contest.  Con's changes 06+06.1 stomped it flat
> [1].  irman2 is modified to run for 30s at a time, but with default parameters.

In my case I cannot even estimate the time. It usually takes 8:33 to do a
bzImage, and after 15 minutes I ctrl-c with only two lines printed on the
console. If you consider the ratio against the total number of lines that
a kernel build spits out, this could have taken hours. Also, you might
also want to try a low number of processes with a short burn, like the new
patch seems to do, to better hit the mm players. Something like:

irman2 -n 10 -b 40

Guys, I'm not saying this because I do not appreciate the time Con is
spending on it. I just hate to see time spent on the wrong priorities.
Whatever super-privileged sleep->burn pattern you code, it can be
exploited w/out a global throttle on the CPU time assigned to interactive
and non-interactive tasks. This is Unix, guys, and it is used in multi-user
environments; we cannot ship with a flaw like this.



- Davide


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 10:18       ` Mike Galbraith
  2003-07-18 10:31         ` Wiktor Wodecki
@ 2003-07-18 14:24         ` Con Kolivas
  2003-07-18 15:50           ` Mike Galbraith
  1 sibling, 1 reply; 44+ messages in thread
From: Con Kolivas @ 2003-07-18 14:24 UTC (permalink / raw)
  To: Mike Galbraith, Nick Piggin
  Cc: Davide Libenzi, linux kernel mailing list, Andrew Morton,
	Felipe Alfaro Solana, Zwane Mwaikambo

On Fri, 18 Jul 2003 20:18, Mike Galbraith wrote:
> At 04:34 PM 7/18/2003 +1000, Nick Piggin wrote:
> >Mike Galbraith wrote:
> That _might_ (add salt) be priorities of kernel threads dropping too low.

Is there any good reason for the priorities of kernel threads to vary at all? 
In the original design they are subject to the same interactivity changes as 
other processes and I've maintained that but I can't see a good reason for it 
and plan to change it unless someone tells me otherwise.

Con


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
       [not found] ` <Pine.LNX.4.55.0307180630450.5077@bigblue.dev.mcafeelabs.com>
@ 2003-07-18 15:41   ` Mike Galbraith
  0 siblings, 0 replies; 44+ messages in thread
From: Mike Galbraith @ 2003-07-18 15:41 UTC (permalink / raw)
  To: Davide Libenzi
  Cc: Con Kolivas, linux kernel mailing list, Andrew Morton,
	Felipe Alfaro Solana, Zwane Mwaikambo

At 06:46 AM 7/18/2003 -0700, Davide Libenzi wrote:
>On Fri, 18 Jul 2003, Mike Galbraith wrote:
>
> > At 03:12 PM 7/16/2003 -0700, Davide Libenzi wrote:
> >
> > >http://www.xmailserver.org/linux-patches/irman2.c
> > >
> > >and run it with different -n (number of tasks) and -b (CPU burn ms time).
> > >At the same time try to build a kernel for example. Then you will realize
> > >that interactivity is not the bigger problem that the scheduler has right
> > >now.
> >
> > I added an irman2 load to contest.  Con's changes 06+06.1 stomped it flat
> > [1].  irman2 is modified to run for 30s at a time, but with default 
> parameters.
>
>In my case I cannot even estimate the time. It takes 8:33 ususally to do a
>bzImage, and after 15 minutes I ctrl-c with only two lines printed in the
>console. If you consider the ratio between the total number of lines that
>a kernel build spits out, this couls have taken hours. Also, you might

Yeah, I noticed... it's a nasty little bugger.

>want also to try a low number of processes with a short burn, like the new
>patch seems to do to better hit mm players. Something like:
>
>irman2 -n 10 -b 40

If I hadn't done the restart after 30 seconds thing, I knew it would take 
ages.  I wanted something to see contrast, not a life sentence ;-)

>Guys, I'm saying this not because I do not appreciate the time Con is
>spending on it. I just hate to see time spent in the wrong priorities.
>Whatever super privileged sleep->burn pattern you code, it can be
>exploited w/out a global throttle for the CPU time assigned to interactive
>and non interactive tasks. This is Unix guys and it is used in multi-user
>environments, we cannot ship with a flaw like this.

(Oh, I agree that the problem is nasty.  I like fair scheduling a lot... 
when _I'm_ not the one starving things to death;)

         -Mike 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 10:31         ` Wiktor Wodecki
  2003-07-18 10:43           ` Con Kolivas
@ 2003-07-18 15:46           ` Mike Galbraith
  2003-07-18 16:52             ` Davide Libenzi
  1 sibling, 1 reply; 44+ messages in thread
From: Mike Galbraith @ 2003-07-18 15:46 UTC (permalink / raw)
  To: Wiktor Wodecki
  Cc: Nick Piggin, Davide Libenzi, Con Kolivas,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

[-- Attachment #1: Type: text/plain, Size: 918 bytes --]

At 12:31 PM 7/18/2003 +0200, Wiktor Wodecki wrote:
>On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> > That _might_ (add salt) be priorities of kernel threads dropping too low.
> >
> > I'm also seeing occasional total stalls under heavy I/O in the order of
> > 10-12 seconds (even the disk stops).  I have no idea if that's 
> something in
> > mm or the scheduler changes though, as I've yet to do any isolation and/or
> > tinkering.  All I know at this point is that I haven't seen it in stock 
> yet.
>
>I've seen this too while doing a huge nfs transfer from a 2.6 machine to
>a 2.4 machine (sparc32). Thought it'd be something with the nfs changes
>which were recently, might be the scheduler, tho. Ah, and it is fully
>reproducable.

Telling it to not mess with my kernel threads seems to have fixed it here... 
no stalls during the whole contest run.  New contest numbers attached.

         -Mike 

[-- Attachment #2: contest.txt --]
[-- Type: text/plain, Size: 3778 bytes --]

no_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	153	94.8	0.0	0.0	1.00
2.5.70                1	153	94.1	0.0	0.0	1.00
2.6.0-test1           1	153	94.1	0.0	0.0	1.00
2.6.0-test1-mm1       1	152	94.7	0.0	0.0	1.00
2.6.0-test1-mm1X      1	153	94.8	0.0	0.0	1.00
cacherun:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	146	98.6	0.0	0.0	0.95
2.5.70                1	146	98.6	0.0	0.0	0.95
2.6.0-test1           1	146	98.6	0.0	0.0	0.95
2.6.0-test1-mm1       1	146	98.6	0.0	0.0	0.96
2.6.0-test1-mm1X      1	146	98.6	0.0	0.0	0.95
process_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	331	43.8	90.0	55.3	2.16
2.5.70                1	199	72.4	27.0	25.5	1.30
2.6.0-test1           1	264	54.5	61.0	44.3	1.73
2.6.0-test1-mm1       1	323	44.9	88.0	54.2	2.12
2.6.0-test1-mm1X      1	268	54.1	62.0	44.8	1.75
ctar_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	190	77.9	0.0	0.0	1.24
2.5.70                1	186	80.1	0.0	0.0	1.22
2.6.0-test1           1	213	70.4	0.0	0.0	1.39
2.6.0-test1-mm1       1	207	72.5	0.0	0.0	1.36
2.6.0-test1-mm1X      1	213	70.4	0.0	0.0	1.39
xtar_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	196	75.0	0.0	3.1	1.28
2.5.70                1	195	75.9	0.0	3.1	1.27
2.6.0-test1           1	193	76.7	1.0	4.1	1.26
2.6.0-test1-mm1       1	195	75.9	1.0	4.1	1.28
2.6.0-test1-mm1X      1	191	77.5	1.0	4.2	1.25
io_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	437	34.6	69.1	15.1	2.86
2.5.70                1	401	37.7	72.3	17.4	2.62
2.6.0-test1           1	243	61.3	48.1	17.3	1.59
2.6.0-test1-mm1       1	336	44.9	64.5	17.3	2.21
2.6.0-test1-mm1X      1	317	47.3	63.5	17.7	2.07
io_other:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	387	38.8	69.3	17.3	2.53
2.5.70                1	422	36.0	75.3	17.1	2.76
2.6.0-test1           1	258	57.8	55.8	19.0	1.69
2.6.0-test1-mm1       1	330	45.5	63.2	17.2	2.17
2.6.0-test1-mm1X      1	302	49.7	59.1	17.2	1.97
read_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	221	67.0	9.4	3.6	1.44
2.5.70                1	220	67.3	9.4	3.6	1.44
2.6.0-test1           1	200	74.0	9.7	4.0	1.31
2.6.0-test1-mm1       1	190	78.4	9.2	4.2	1.25
2.6.0-test1-mm1X      1	187	79.7	8.4	3.7	1.22
list_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	203	71.4	99.0	20.2	1.33
2.5.70                1	205	70.7	104.0	20.5	1.34
2.6.0-test1           1	199	72.4	102.0	21.6	1.30
2.6.0-test1-mm1       1	193	75.1	91.0	19.7	1.27
2.6.0-test1-mm1X      1	194	74.7	91.0	19.6	1.27
mem_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	256	57.8	34.0	1.2	1.67
2.5.70                1	252	58.7	33.0	1.2	1.65
2.6.0-test1           1	309	48.2	75.0	2.3	2.02
2.6.0-test1-mm1       1	277	53.8	38.0	1.4	1.82
2.6.0-test1-mm1X      1	225	65.8	38.0	1.8	1.47
dbench_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	517	28.8	5.0	35.6	3.38
2.5.70                1	424	35.1	3.0	26.7	2.77
2.6.0-test1           1	347	42.7	4.0	39.5	2.27
2.6.0-test1-mm1       1	377	39.8	4.0	36.9	2.48
2.6.0-test1-mm1X      1	510	29.4	5.0	34.3	3.33
ab_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.5.69                1	300	48.3	46.0	21.7	1.96
2.5.70                1	300	48.0	46.0	22.0	1.96
2.6.0-test1           1	281	51.6	50.0	25.6	1.84
2.6.0-test1-mm1       1	229	63.3	30.0	18.3	1.51
2.6.0-test1-mm1X      1	225	64.4	27.0	18.2	1.47
irman2_load:
Kernel           [runs]	Time	CPU%	Loads	LCPU%	Ratio
2.6.0-test1           1	999	14.5	31.0	0.0	6.53
2.6.0-test1-mm1       1	210	69.0	6.0	0.0	1.38
2.6.0-test1-mm1X      1	210	69.0	6.0	0.0	1.37

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 14:24         ` Con Kolivas
@ 2003-07-18 15:50           ` Mike Galbraith
  0 siblings, 0 replies; 44+ messages in thread
From: Mike Galbraith @ 2003-07-18 15:50 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Nick Piggin, Davide Libenzi, linux kernel mailing list,
	Andrew Morton, Felipe Alfaro Solana, Zwane Mwaikambo

At 12:24 AM 7/19/2003 +1000, Con Kolivas wrote:
>On Fri, 18 Jul 2003 20:18, Mike Galbraith wrote:
> > At 04:34 PM 7/18/2003 +1000, Nick Piggin wrote:
> > >Mike Galbraith wrote:
> > That _might_ (add salt) be priorities of kernel threads dropping too low.
>
>Is there any good reason for the priorities of kernel threads to vary at all?
>In the original design they are subject to the same interactivity changes as
>other processes and I've maintained that but I can't see a good reason for it
>and plan to change it unless someone tells me otherwise.

They're so light nowadays that I never see them change.  I set the bonus 
manually to MAX_BONUS/2 in the last numbers posted.
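
For illustration, something in the spirit of that tweak (a sketch with the 
arithmetic rearranged, not the diff Mike actually ran; kernel threads are 
recognised here by their missing user mm):

/*
 * Sketch only: an effective_prio() variant that pins kernel threads at
 * the neutral bonus so their priority never wanders with sleep_avg.
 * The stock arithmetic is written differently but centres the same way.
 */
static int effective_prio_sketch(task_t *p)
{
	int bonus, prio;

	if (rt_task(p))
		return p->prio;

	if (!p->mm)
		bonus = MAX_BONUS / 2;	/* kernel thread: fixed, neutral */
	else
		bonus = p->sleep_avg * MAX_BONUS / MAX_SLEEP_AVG;

	prio = p->static_prio - bonus + MAX_BONUS / 2;
	if (prio < MAX_RT_PRIO)
		prio = MAX_RT_PRIO;
	if (prio > MAX_PRIO - 1)
		prio = MAX_PRIO - 1;
	return prio;
}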

         -Mike 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 15:46           ` Mike Galbraith
@ 2003-07-18 16:52             ` Davide Libenzi
  2003-07-18 17:05               ` Davide Libenzi
  0 siblings, 1 reply; 44+ messages in thread
From: Davide Libenzi @ 2003-07-18 16:52 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Wiktor Wodecki, Nick Piggin, Con Kolivas,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

On Fri, 18 Jul 2003, Mike Galbraith wrote:

> Telling to not mess with my kernel threads seems to have fixed it here...
> no stalls during the whole contest run.  New contest numbers attached.

It is ok to use unfairness towards kernel threads to avoid starvation. We
control them. It is right to apply uncontrolled unfairness to userspace
tasks though.



- Davide


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 16:52             ` Davide Libenzi
@ 2003-07-18 17:05               ` Davide Libenzi
  2003-07-18 17:39                 ` Valdis.Kletnieks
                                   ` (3 more replies)
  0 siblings, 4 replies; 44+ messages in thread
From: Davide Libenzi @ 2003-07-18 17:05 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Wiktor Wodecki, Nick Piggin, Con Kolivas,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

On Fri, 18 Jul 2003, Davide Libenzi wrote:

> control them. It is right to apply uncontrolled unfairness to userspace
> tasks though.

s/It is right/It is not right/



- Davide


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 17:05               ` Davide Libenzi
@ 2003-07-18 17:39                 ` Valdis.Kletnieks
  2003-07-18 19:31                   ` Davide Libenzi
       [not found]                 ` <Pine.LNX.4.55.0307181038450.5608@bigblue.dev.mcafeelabs.com>
                                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 44+ messages in thread
From: Valdis.Kletnieks @ 2003-07-18 17:39 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: linux kernel mailing list

[-- Attachment #1: Type: text/plain, Size: 303 bytes --]

On Fri, 18 Jul 2003 10:05:05 PDT, Davide Libenzi said:
> On Fri, 18 Jul 2003, Davide Libenzi wrote:
> 
> > control them. It is right to apply uncontrolled unfairness to userspace
> > tasks though.
> 
> s/It is right/It is not right/

OK.. but is it right to apply *controlled* unfairness to userspace?


[-- Attachment #2: Type: application/pgp-signature, Size: 226 bytes --]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
       [not found] ` <Pine.LNX.4.55.0307180951050.5608@bigblue.dev.mcafeelabs.com>
@ 2003-07-18 18:49   ` Mike Galbraith
  0 siblings, 0 replies; 44+ messages in thread
From: Mike Galbraith @ 2003-07-18 18:49 UTC (permalink / raw)
  To: Davide Libenzi
  Cc: Wiktor Wodecki, Nick Piggin, Con Kolivas,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

At 09:52 AM 7/18/2003 -0700, Davide Libenzi wrote:
>On Fri, 18 Jul 2003, Mike Galbraith wrote:
>
> > Telling to not mess with my kernel threads seems to have fixed it here...
> > no stalls during the whole contest run.  New contest numbers attached.
>
>It is ok to use unfairness towards kernel threads to avoid starvation. We
>control them. It is right to apply uncontrolled unfairness to userspace
>tasks though.

In this case, it appears that the lowered priority was causing 
trouble.  One test run isn't enough to say 100%, but what I read out of the 
numbers is that at least kswapd needs to be able to preempt.

wrt the uncontrolled unfairness, I've muttered about this before.  I've 
also tried (quite) a few things, but nothing yet has been good enough to... 
require trashing that I couldn't do here ;-)

         -Mike 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 17:39                 ` Valdis.Kletnieks
@ 2003-07-18 19:31                   ` Davide Libenzi
  0 siblings, 0 replies; 44+ messages in thread
From: Davide Libenzi @ 2003-07-18 19:31 UTC (permalink / raw)
  To: Valdis.Kletnieks; +Cc: linux kernel mailing list

On Fri, 18 Jul 2003 Valdis.Kletnieks@vt.edu wrote:

> On Fri, 18 Jul 2003 10:05:05 PDT, Davide Libenzi said:
> > On Fri, 18 Jul 2003, Davide Libenzi wrote:
> >
> > > control them. It is right to apply uncontrolled unfairness to userspace
> > > tasks though.
> >
> > s/It is right/It is not right/
>
> OK.. but is it right to apply *controlled* unfairness to userspace?

I'm sorry to say it guys, but I'm afraid that's what we have to do. Sadly, we
did not think about it when this scheduler was dropped into 2.5. The
interactivity concept is based on the fact that a particular class of
tasks, characterized by certain sleep->burn patterns, are never expired and
eventually just oscillate between two (pretty high) priorities. Without
applying a global CPU throttle for interactive tasks, you can create a
small set of processes (like irman does) that hit the coded sleep->burn
pattern and make everything running at a priority lower than the lower end
of that oscillation range almost completely starve. Controlled unfairness
would mean throttling the CPU time we reserve for interactive tasks so that
we always reserve a minimum amount of time for non-interactive processes.
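
For concreteness, one minimal shape such a throttle could take, sketched for 
illustration only (all names and numbers are made up; this is not code from 
any posted patch):

#define IA_WINDOW	(HZ)	/* accounting window, in ticks		*/
#define IA_MAX_SHARE	90	/* interactive tasks get at most 90%	*/

/* hypothetical per-runqueue state */
struct ia_throttle {
	unsigned long window_start;	/* jiffies at the window start       */
	unsigned long ia_ticks;		/* ticks burned by interactive tasks */
};

/*
 * Called from the timer tick with the runqueue lock held.  Returns
 * non-zero once interactive tasks have used up their share of the
 * window, telling schedule() to pick a non-interactive (or expired)
 * task next regardless of priority.
 */
static inline int ia_over_quota(struct ia_throttle *t, int task_is_interactive)
{
	if (time_after(jiffies, t->window_start + IA_WINDOW)) {
		t->window_start = jiffies;
		t->ia_ticks = 0;
	}
	if (task_is_interactive)
		t->ia_ticks++;
	return t->ia_ticks * 100 > IA_MAX_SHARE * IA_WINDOW;
}

The point is only the shape: account for the CPU time interactive tasks get 
over a fixed window, and once they exceed a configured share, force the next 
pick to come from the rest of the runqueue.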



- Davide


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
       [not found]                 ` <Pine.LNX.4.55.0307181038450.5608@bigblue.dev.mcafeelabs.com>
@ 2003-07-18 20:31                   ` Mike Galbraith
  2003-07-18 20:38                     ` Davide Libenzi
  0 siblings, 1 reply; 44+ messages in thread
From: Mike Galbraith @ 2003-07-18 20:31 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Valdis.Kletnieks, linux kernel mailing list

At 12:31 PM 7/18/2003 -0700, Davide Libenzi wrote:
>On Fri, 18 Jul 2003 Valdis.Kletnieks@vt.edu wrote:
>
> > On Fri, 18 Jul 2003 10:05:05 PDT, Davide Libenzi said:
> > > On Fri, 18 Jul 2003, Davide Libenzi wrote:
> > >
> > > > control them. It is right to apply uncontrolled unfairness to userspace
> > > > tasks though.
> > >
> > > s/It is right/It is not right/
> >
> > OK.. but is it right to apply *controlled* unfairness to userspace?
>
>I'm sorry to say that guys, but I'm afraid it's what we have to do. We did
>not think about it when this scheduler was dropped inside 2.5 sadly. The
>interactivity concept is based on the fact that a particular class of
>tasks characterized by certain sleep->burn patterns are never expired and
>eventually, only oscillate between two (pretty high) priorities. Without
>applying a global CPU throttle for interactive tasks, you can create a
>small set of processes (like irman does) that hit the coded sleep->burn
>pattern and that make everything is running with priority lower than the
>lower of the two of the oscillation range, to almost completely starve.
>Controlled unfairness would mean throttling the CPU time we reserve to
>interactive tasks so that we always reserve a minimum time to non
>interactive processes.

I'd like to find a way to prevent that instead.  There's got to be a way.

It's easy to prevent irman-type things from starving others permanently (I 
call this active starvation, or wakeup starvation), and it does something 
fairly similar to what you're talking about.  Just crawl down the queue 
heads periodically, looking for the oldest task, instead of always taking the 
highest queue.  You can do that very fast, and it does prevent active 
starvation.
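
A minimal sketch of that walk-down, purely illustrative and not from any 
posted patch; the per-task enqueue_time field is hypothetical and would have 
to be added and maintained in enqueue_task():

/*
 * Every so many ticks, instead of taking the head of the highest
 * populated queue, walk the queue heads and pick the task that has
 * been waiting the longest.
 */
static task_t *pick_oldest_head(prio_array_t *array)
{
	task_t *oldest = NULL;
	int idx;

	for (idx = sched_find_first_bit(array->bitmap); idx < MAX_PRIO;
	     idx = find_next_bit(array->bitmap, MAX_PRIO, idx + 1)) {
		task_t *p = list_entry(array->queue[idx].next,
					task_t, run_list);

		if (!oldest ||
		    time_before(p->enqueue_time, oldest->enqueue_time))
			oldest = p;
	}
	return oldest;
}

schedule() would fall back to such a pick every HZ ticks or so, which bounds 
how long any runnable task can be passed over.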

         -Mike 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 20:31                   ` Mike Galbraith
@ 2003-07-18 20:38                     ` Davide Libenzi
  0 siblings, 0 replies; 44+ messages in thread
From: Davide Libenzi @ 2003-07-18 20:38 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: Valdis.Kletnieks, linux kernel mailing list

On Fri, 18 Jul 2003, Mike Galbraith wrote:

> >I'm sorry to say that guys, but I'm afraid it's what we have to do. We did
> >not think about it when this scheduler was dropped inside 2.5 sadly. The
> >interactivity concept is based on the fact that a particular class of
> >tasks characterized by certain sleep->burn patterns are never expired and
> >eventually, only oscillate between two (pretty high) priorities. Without
> >applying a global CPU throttle for interactive tasks, you can create a
> >small set of processes (like irman does) that hit the coded sleep->burn
> >pattern and that make everything is running with priority lower than the
> >lower of the two of the oscillation range, to almost completely starve.
> >Controlled unfairness would mean throttling the CPU time we reserve to
> >interactive tasks so that we always reserve a minimum time to non
> >interactive processes.
>
> I'd like to find a way to prevent that instead.  There's got to be a way.

Remember that this is computer science, that is, for every problem there is
"at least" one solution ;)



> It's easy to prevent irman type things from starving others permanently (i
> call this active starvation, or wakeup starvation), and this does something
> fairly similar to what you're talking about.  Just crawl down the queue
> heads looking for the oldest task periodically instead of always taking the
> highest queue.  You can do that very fast, and it does prevent active
> starvation.

Everything that makes the scheduler say "ok, I gave enough time to
interactive tasks, now I'm really going to spin one from the masses" will
work. Having a clean solution would not be an option here.




- Davide


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-18 11:38               ` Nick Piggin
@ 2003-07-19 10:59                 ` Wiktor Wodecki
  0 siblings, 0 replies; 44+ messages in thread
From: Wiktor Wodecki @ 2003-07-19 10:59 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Con Kolivas, Mike Galbraith, Davide Libenzi,
	linux kernel mailing list, Andrew Morton, Felipe Alfaro Solana,
	Zwane Mwaikambo

[-- Attachment #1: Type: text/plain, Size: 3346 bytes --]

On Fri, Jul 18, 2003 at 09:38:10PM +1000, Nick Piggin wrote:
> 
> 
> Wiktor Wodecki wrote:
> 
> >On Fri, Jul 18, 2003 at 08:43:05PM +1000, Con Kolivas wrote:
> >
> >>On Fri, 18 Jul 2003 20:31, Wiktor Wodecki wrote:
> >>
> >>>On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> >>>
> >>>>That _might_ (add salt) be priorities of kernel threads dropping too 
> >>>>low.
> >>>>
> >>>>I'm also seeing occasional total stalls under heavy I/O in the order of
> >>>>10-12 seconds (even the disk stops).  I have no idea if that's something
> >>>>in mm or the scheduler changes though, as I've yet to do any isolation
> >>>>and/or tinkering.  All I know at this point is that I haven't seen it in
> >>>>stock yet.
> >>>>
> >>>I've seen this too while doing a huge nfs transfer from a 2.6 machine to
> >>>a 2.4 machine (sparc32). Thought it'd be something with the nfs changes
> >>>which were recently, might be the scheduler, tho. Ah, and it is fully
> >>>reproducable.
> >>>
> >>Well I didn't want to post this yet because I'm not sure if it's a good 
> >>workaround yet but it looks like a reasonable compromise, and since you 
> >>have a testcase it will be interesting to see if it addresses it. It's 
> >>possible that a task is being requeued every millisecond, and this is a 
> >>little smarter. It allows cpu hogs to run for 100ms before being round 
> >>robinned, but shorter for interactive tasks. Can you try this O7 which 
> >>applies on top of O6.1 please:
> >>
> >>available here:
> >>http://kernel.kolivas.org/2.5
> >>
> >
> >sorry, the problem still persists. Aborting the cp takes less time, tho
> >(about 10 seconds now, before it was about 30 secs). I'm aborting during
> >a big file, FYI.
> >
> 
> OK if the IO actually stops then it shouldn't be an IO scheduler or
> request allocation problem, but could you try to capture a sysrq T
> trace for me during the freeze.

okay, here it is. I'm only pasting the output for cp; if you need the whole
thing, please tell me.

Jul 19 12:54:16 kakerlak kernel: cp            D C0140F7B  6164   6160 (NOTLB)
Jul 19 12:54:16 kakerlak kernel: c2c6fec8 00200082 d3de680c c0140f7b d3de6800 c72ef000 c0477cc0 d3de681c 
Jul 19 12:54:16 kakerlak kernel:        d139ad60 c2c6e000 ce8dc3c0 ce8dc3dc 00000000 c01aba07 00000000 00000001 
Jul 19 12:54:16 kakerlak kernel:        00000000 00000001 ce8153e4 00000000 c26706d0 c011cdb0 ce8dc3dc ce8dc3dc 
Jul 19 12:54:16 kakerlak kernel: Call Trace:
Jul 19 12:54:16 kakerlak kernel:  [free_block+203/256] free_block+0xcb/0x100
Jul 19 12:54:16 kakerlak kernel:  [nfs_wait_on_request+151/336] nfs_wait_on_request+0x97/0x150
Jul 19 12:54:16 kakerlak kernel:  [default_wake_function+0/48] default_wake_function+0x0/0x30
Jul 19 12:54:16 kakerlak kernel:  [nfs_wait_on_requests+169/256] nfs_wait_on_requests+0xa9/0x100
Jul 19 12:54:16 kakerlak kernel:  [nfs_sync_file+150/192] nfs_sync_file+0x96/0xc0
Jul 19 12:54:16 kakerlak kernel:  [nfs_file_flush+88/208] nfs_file_flush+0x58/0xd0
Jul 19 12:54:16 kakerlak kernel:  [filp_close+101/128] filp_close+0x65/0x80
Jul 19 12:54:16 kakerlak kernel:  [sys_close+97/160] sys_close+0x61/0xa0
Jul 19 12:54:16 kakerlak kernel:  [syscall_call+7/11] syscall_call+0x7/0xb
Jul 19 12:54:16 kakerlak kernel: 


-- 
Regards,

Wiktor Wodecki

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
       [not found]                 ` <Pine.LNX.4.55.0307181333520.5608@bigblue.dev.mcafeelabs.com>
@ 2003-07-19 17:04                   ` Mike Galbraith
  2003-07-21  0:21                     ` Davide Libenzi
  0 siblings, 1 reply; 44+ messages in thread
From: Mike Galbraith @ 2003-07-19 17:04 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Valdis.Kletnieks, linux kernel mailing list

At 01:38 PM 7/18/2003 -0700, Davide Libenzi wrote:
>On Fri, 18 Jul 2003, Mike Galbraith wrote:
>
> > >I'm sorry to say that guys, but I'm afraid it's what we have to do. We did
> > >not think about it when this scheduler was dropped inside 2.5 sadly. The
> > >interactivity concept is based on the fact that a particular class of
> > >tasks characterized by certain sleep->burn patterns are never expired and
> > >eventually, only oscillate between two (pretty high) priorities. Without
> > >applying a global CPU throttle for interactive tasks, you can create a
> > >small set of processes (like irman does) that hit the coded sleep->burn
> > >pattern and that make everything is running with priority lower than the
> > >lower of the two of the oscillation range, to almost completely starve.
> > >Controlled unfairness would mean throttling the CPU time we reserve to
> > >interactive tasks so that we always reserve a minimum time to non
> > >interactive processes.
> >
> > I'd like to find a way to prevent that instead.  There's got to be a way.
>
>Remember that this is computer science, that is, for every problem there
>"at least" one solution ;)

As incentive for other folks to think about the solution I haven't been 
able to come up with, I think I'll post what I do about it here, and 
threaten to submit it for inclusion ;-) ...

> > It's easy to prevent irman type things from starving others permanently (i
> > call this active starvation, or wakeup starvation), and this does something
> > fairly similar to what you're talking about.  Just crawl down the queue
> > heads looking for the oldest task periodically instead of always taking the
> > highest queue.  You can do that very fast, and it does prevent active
> > starvation.
>
>Everything that will make the scheduler to say "ok, I gave enough time to
>interactive tasks, now I'm really going to spin one from the masses" will
>work. Having a clean solution would not be an option here.

... just as soon as I get my decidedly unclean work-around functioning at 
least as well as it did for plain old irman.   irman2 is _much_ more evil 
than irman ever was (wow, good job!).  I thought it'd be half an hour 
tops.  This little bugger shows active starvation, expired starvation, 
priority inflation, _and_ interactive starvation (I have to keep inventing 
new terms to describe the things I see.. jeez, this is a good testcase).

         -Mike 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
  2003-07-19 17:04                   ` Mike Galbraith
@ 2003-07-21  0:21                     ` Davide Libenzi
  0 siblings, 0 replies; 44+ messages in thread
From: Davide Libenzi @ 2003-07-21  0:21 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: Valdis.Kletnieks, linux kernel mailing list

On Sat, 19 Jul 2003, Mike Galbraith wrote:

> >Everything that will make the scheduler to say "ok, I gave enough time to
> >interactive tasks, now I'm really going to spin one from the masses" will
> >work. Having a clean solution would not be an option here.
>
> ... just as soon as I get my decidedly unclean work-around functioning at
> least as well as it did for plain old irman.   irman2 is _much_ more evil
> than irman ever was (wow, good job!).  I thought it'd be a half an hour
> tops.  This little bugger shows active starvation, expired starvation,
> priority inflation, _and_ interactive starvation (i have to keep inventing
> new terms to describe things i see.. jeez this is a good testcase).

Yes, the problem is not only the expired tasks' starvation. Anything in
the active array that resides underneath the lower priority value of the
range irman2 tasks oscillate in between will experience a "CPU time eclipse".
And you do not even need a smoked glass to look at it :)



- Davide


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
       [not found]                 ` <Pine.LNX.4.55.0307201715130.3548@bigblue.dev.mcafeelabs.com>
@ 2003-07-21  5:36                   ` Mike Galbraith
  2003-07-21 12:39                   ` [NOTAPATCH] " Mike Galbraith
  1 sibling, 0 replies; 44+ messages in thread
From: Mike Galbraith @ 2003-07-21  5:36 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Valdis.Kletnieks, linux kernel mailing list

At 05:21 PM 7/20/2003 -0700, Davide Libenzi wrote:
>On Sat, 19 Jul 2003, Mike Galbraith wrote:
>
> > >Everything that will make the scheduler to say "ok, I gave enough time to
> > >interactive tasks, now I'm really going to spin one from the masses" will
> > >work. Having a clean solution would not be an option here.
> >
> > ... just as soon as I get my decidedly unclean work-around functioning at
> > least as well as it did for plain old irman.   irman2 is _much_ more evil
> > than irman ever was (wow, good job!).  I thought it'd be a half an hour
> > tops.  This little bugger shows active starvation, expired starvation,
> > priority inflation, _and_ interactive starvation (i have to keep inventing
> > new terms to describe things i see.. jeez this is a good testcase).
>
>Yes, the problem is not only the expired tasks starvation. Anything in
>the active array that reside underneath the lower priority value of the
>range irman2 tasks oscillate inbetween, will experience a "CPU time eclipse".
>And you do not even need a smoked glass to look at it :)

Here there's no oscillation that I can see.  It climbs steadily to prio 16 
and stays there forever, with the hog running down at the bottom.  I did a 
quick hack requiring that a non-interactive task run at least every HZ 
ticks, with a sliding "select non-interactive" window staying open for 
HZ/10 ticks, and retrieving an expired task if necessary instead of 
expiring interactive tasks (or forcing the array switch), thinking that 
would be enough.

Wrong answer.  For most things it would be good enough, I think, but with 
the hog being part of irman2 I not only have to pull from the expired 
array when no non-interactive task is available, I have to always pull once 
the deadline is hit.  I'm also going to have to add another check on queue 
runtime to beat the darn thing.  I ran irman2 with a bonnie -s 300 and a 
kernel compile...  After half an hour, the compile was making steady (but 
too slow, because the irman2 periodic cpu hog was getting too much of what 
gcc was meant to get ;) progress, but the poor bonnie was starving at 
prio 17.  A sleep_avg vs cpu%*100 sanity check will help that, but not cure it.

All this to avoid the pain (agony actually) of an array switch.

         -Mike

(someone should wrap me upside the head with a clue-x-4. this darn thing 
shouldn't be worth more than 10 lines of ugliness.  i'm obviously past 
that... and headed toward the twilight-zone at warp 9.  wheee;) 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [NOTAPATCH] Re: [PATCH] O6int for interactivity
       [not found]                 ` <Pine.LNX.4.55.0307201715130.3548@bigblue.dev.mcafeelabs.com>
  2003-07-21  5:36                   ` Mike Galbraith
@ 2003-07-21 12:39                   ` Mike Galbraith
  2003-07-21 17:13                     ` Mike Galbraith
  1 sibling, 1 reply; 44+ messages in thread
From: Mike Galbraith @ 2003-07-21 12:39 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Valdis.Kletnieks, linux kernel mailing list

[-- Attachment #1: Type: text/plain, Size: 1314 bytes --]

At 05:21 PM 7/20/2003 -0700, Davide Libenzi wrote:
>On Sat, 19 Jul 2003, Mike Galbraith wrote:
>
> > >Everything that will make the scheduler to say "ok, I gave enough time to
> > >interactive tasks, now I'm really going to spin one from the masses" will
> > >work. Having a clean solution would not be an option here.
> >
> > ... just as soon as I get my decidedly unclean work-around functioning at
> > least as well as it did for plain old irman.   irman2 is _much_ more evil
> > than irman ever was (wow, good job!).  I thought it'd be a half an hour
> > tops.  This little bugger shows active starvation, expired starvation,
> > priority inflation, _and_ interactive starvation (i have to keep inventing
> > new terms to describe things i see.. jeez this is a good testcase).
>
>Yes, the problem is not only the expired tasks starvation. Anything in
>the active array that reside underneath the lower priority value of the
>range irman2 tasks oscillate inbetween, will experience a "CPU time eclipse".
>And you do not even need a smoked glass to look at it :)

I think I whipped the obnoxious little bugger.  Comments on the attached 
[kiss] approach?

I don't like what gpm tells me while irman2 is running with this diff, but 
hiccup hiccup is a heck of a lot better than terminal starvation.

         -Mike 

[-- Attachment #2: xx.diff --]
[-- Type: application/octet-stream, Size: 3586 bytes --]

--- linux-2.6.0-test1.virgin/kernel/sched.c.org	Sat Jul 19 09:42:12 2003
+++ linux-2.6.0-test1.virgin/kernel/sched.c	Mon Jul 21 14:29:26 2003
@@ -76,6 +76,12 @@
 #define MAX_SLEEP_AVG		(10*HZ)
 #define STARVATION_LIMIT	(10*HZ)
 #define NODE_THRESHOLD		125
+#define END_IA_PRIO		(NICE_TO_PRIO(1 - INTERACTIVE_DELTA))
+#define INTERVAL		(HZ)
+#define DURATION_PERCENT	10
+#define DURATION		(INTERVAL * DURATION_PERCENT / 100)
+#define INTERVAL_EXPIRED(rq) (time_after(jiffies, \
+	(rq)->interval_ts + ((rq)->idx ? DURATION : INTERVAL)))
 
 /*
  * If a task is 'interactive' then we reinsert it in the active
@@ -158,7 +164,7 @@
 struct runqueue {
 	spinlock_t lock;
 	unsigned long nr_running, nr_switches, expired_timestamp,
-			nr_uninterruptible;
+			nr_uninterruptible, interval_ts;
 	task_t *curr, *idle;
 	struct mm_struct *prev_mm;
 	prio_array_t *active, *expired, arrays[2];
@@ -171,6 +177,7 @@
 	struct list_head migration_queue;
 
 	atomic_t nr_iowait;
+	int idx;
 };
 
 static DEFINE_PER_CPU(struct runqueue, runqueues);
@@ -1166,6 +1173,23 @@
 			STARVATION_LIMIT * ((rq)->nr_running) + 1)))
 
 /*
+ * Must be called with the runqueue lock held.
+ */
+static void __requeue_one_expired(runqueue_t *rq)
+{
+	int idx = sched_find_first_bit(rq->expired->bitmap);
+	struct list_head *queue;
+	task_t *p;
+
+	if (idx >= MAX_PRIO)
+		return;
+	queue = rq->expired->queue + idx;
+	p = list_entry(queue->next, task_t, run_list);
+	dequeue_task(p, p->array);
+	enqueue_task(p, rq->active);
+}
+
+/*
  * This function gets called by the timer code, with HZ frequency.
  * We call it with interrupts disabled.
  *
@@ -1242,10 +1266,35 @@
 			if (!rq->expired_timestamp)
 				rq->expired_timestamp = jiffies;
 			enqueue_task(p, rq->expired);
-		} else
+		} else {
 			enqueue_task(p, rq->active);
+			if (rq->idx)
+				__requeue_one_expired(rq);
+		}
+	} else if (INTERVAL_EXPIRED(rq)) {
+		/*
+		 * If we haven't run a non-interactive task within our
+		 * interval, we take this as a hint that we may have
+		 * starvation in progress.  Trigger a queue walk-down,
+		 * and walk until either non-interactive tasks have
+		 * received DURATION ticks, or we hit the bottom.
+		 */
+		if (TASK_INTERACTIVE(p)) {
+			prio_array_t *array = p->array;
+
+			/* Requeue the hight priority expired task... */
+				__requeue_one_expired(rq);
+			/* and tell the scheduler where to start walking. */
+			rq->idx = find_next_bit(array->bitmap, MAX_PRIO, 1 + p->prio);
+			if (rq->idx >= MAX_PRIO)
+				rq->idx = 0;
+		} else
+			rq->idx = 0;
+		rq->interval_ts = jiffies;
 	}
 out_unlock:
+	if (rq->idx && TASK_INTERACTIVE(p))
+		rq->interval_ts++;
 	spin_unlock(&rq->lock);
 out:
 	rebalance_tick(rq, 0);
@@ -1312,6 +1361,8 @@
 #endif
 		next = rq->idle;
 		rq->expired_timestamp = 0;
+		rq->interval_ts = jiffies;
+		rq->idx = 0;
 		goto switch_tasks;
 	}
 
@@ -1329,6 +1380,17 @@
 	idx = sched_find_first_bit(array->bitmap);
 	queue = array->queue + idx;
 	next = list_entry(queue->next, task_t, run_list);
+	if (unlikely(rq->idx)) {
+		if(!rt_task(next)) {
+			int index = find_next_bit(array->bitmap, MAX_PRIO, rq->idx);
+			if (index < MAX_PRIO) {
+				queue = array->queue + index;
+				next = list_entry(queue->next, task_t, run_list);
+				if (index < END_IA_PRIO)
+					rq->idx = index + 1;
+			}
+		}
+	}
 
 switch_tasks:
 	prefetch(next);
@@ -2497,6 +2559,7 @@
 		rq = cpu_rq(i);
 		rq->active = rq->arrays;
 		rq->expired = rq->arrays + 1;
+		rq->interval_ts = INITIAL_JIFFIES;
 		spin_lock_init(&rq->lock);
 		INIT_LIST_HEAD(&rq->migration_queue);
 		atomic_set(&rq->nr_iowait, 0);

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [NOTAPATCH] Re: [PATCH] O6int for interactivity
  2003-07-21 12:39                   ` [NOTAPATCH] " Mike Galbraith
@ 2003-07-21 17:13                     ` Mike Galbraith
  0 siblings, 0 replies; 44+ messages in thread
From: Mike Galbraith @ 2003-07-21 17:13 UTC (permalink / raw)
  To: linux-kernel; +Cc: Davide Libenzi, Valdis.Kletnieks

At 02:39 PM 7/21/2003 +0200, Mike Galbraith wrote:
>...  Comments on the attached [kiss] approach?

In case anyone decides to take a look, there's a line missing which was 
supposed to advance the timeout in the case where we haven't yet timed out 
and the task is not interactive.  A better way to do what I intended is to 
change...

      if (rq->idx && TASK_INTERACTIVE(p))
           rq->interval_ts++;
to
      if (!rq->idx - !TASK_INTERACTIVE(p) == 0)
            rq->interval_ts++;
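
(An equivalent, perhaps clearer way to read that corrected test, added here 
purely as a reading aid: since (!a - !b) == 0 holds exactly when !a == !b, it 
is the same as

      if (!!rq->idx == !!TASK_INTERACTIVE(p))
            rq->interval_ts++;

i.e. the window timestamp advances whenever the walk-down state and the 
task's interactivity agree.)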

With this diff in place, and with bonnie and a make bzImage sharing my cpu 
with irman2, I see no terminal starvation.  Both the make and bonnie are 
proceeding just fine.

         ciao,

         -Mike 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
       [not found]       ` <Pine.LNX.4.44.0307251628500.26172-300000@localhost.localdomain>
@ 2003-07-25 19:40         ` Alex Riesen
  0 siblings, 0 replies; 44+ messages in thread
From: Alex Riesen @ 2003-07-25 19:40 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel

Terribly sorry for the delay. I got a bit distracted by some elements of
testing (dvd playing).

Ingo Molnar, Fri, Jul 25, 2003 16:29:33 +0200:
> > Still no good. xine drops frames by kernel's make -j2, xmms skips while
> > bk pull (locally). Updates (after switching desktops in metacity) get
> > delayed for seconds (mozilla window redraws with www.kernel.org on it,
> > for example).
> 
> would you mind to give the attached sched-2.6.0-test1-G2 patch a go? (it's
> ontop of vanilla 2.6.0-test1.) Do you still see audio skipping and/or
> other bad scheduling artifacts?

Started make -j2 (my machine is UP), a xine distractor and gvim.
Moving the gvim window over xine (even when paused) is jerky, but tolerable.
No skips during playback. None at all. Redraws in MozillaFirebird are
delayed (I get trails all over the firebird window), but again - no
annoyingly long delays.

I continue testing.

> (if you prefer -mm2 then please first unapply the second attached patch
> (Con's interactivity patchset) - they are mutually exclusive.)

I used the G2 on 2.6-test1.


-alex


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] O6int for interactivity
@ 2003-07-16 20:20 Shane Shrybman
  0 siblings, 0 replies; 44+ messages in thread
From: Shane Shrybman @ 2003-07-16 20:20 UTC (permalink / raw)
  To: linux-kernel

Hi Con,

I use the infinitely superior (to XMMS), interactive testing tool
mplayer! :-)

My test consists of playing a video in mplayer with software scaling
(i.e. no xv) and refreshing various fat web pages. This, as you are fully
aware, would cause mozilla to gobble up CPU in short bursts of
approximately .5 - 3 seconds, during which time the video would be very
choppy.

O6int is much improved in this scenario; the pauses while refreshing
web pages have almost disappeared. There are still very small hiccups in
the video playback during the refreshes.

Also, I use a local web page that queries a local mysql db to display a
large table in mozilla. So mozilla, apache and mysql are all local.
O6int is a huge improvement in this area as well. It decreases the
choppiness of the video greatly. Previously this would cause an almost
complete halt of the video for several seconds, and now I would say there
is only a small amount of choppiness. It is still pretty choppy on the
initial load of that page after a reboot - maybe nothing is cached yet?

Thanks for your hard work!

Regards,

Shane





^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2003-07-25 19:25 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-07-16 14:30 [PATCH] O6int for interactivity Con Kolivas
2003-07-16 15:22 ` Felipe Alfaro Solana
2003-07-16 19:55   ` Marc-Christian Petersen
2003-07-16 17:08 ` Valdis.Kletnieks
2003-07-16 21:59 ` Wiktor Wodecki
2003-07-16 22:30   ` Con Kolivas
2003-07-16 22:12 ` Davide Libenzi
2003-07-17  0:33   ` Con Kolivas
2003-07-17  0:35     ` Davide Libenzi
2003-07-17  1:12       ` Con Kolivas
2003-07-17  0:48     ` Wade
2003-07-17  1:15       ` Con Kolivas
2003-07-17  1:27         ` Eugene Teo
2003-07-17  3:05 ` Wes Janzen
2003-07-17  9:05 ` Alex Riesen
2003-07-17  9:14   ` Con Kolivas
2003-07-18  7:38     ` Alex Riesen
     [not found]       ` <Pine.LNX.4.44.0307251628500.26172-300000@localhost.localdomain>
2003-07-25 19:40         ` Alex Riesen
     [not found] ` <Pine.LNX.4.55.0307161241280.4787@bigblue.dev.mcafeelabs.com>
2003-07-18  5:38   ` Mike Galbraith
2003-07-18  6:34     ` Nick Piggin
2003-07-18 10:18       ` Mike Galbraith
2003-07-18 10:31         ` Wiktor Wodecki
2003-07-18 10:43           ` Con Kolivas
2003-07-18 11:34             ` Wiktor Wodecki
2003-07-18 11:38               ` Nick Piggin
2003-07-19 10:59                 ` Wiktor Wodecki
2003-07-18 15:46           ` Mike Galbraith
2003-07-18 16:52             ` Davide Libenzi
2003-07-18 17:05               ` Davide Libenzi
2003-07-18 17:39                 ` Valdis.Kletnieks
2003-07-18 19:31                   ` Davide Libenzi
     [not found]                 ` <Pine.LNX.4.55.0307181038450.5608@bigblue.dev.mcafeelabs.com>
2003-07-18 20:31                   ` Mike Galbraith
2003-07-18 20:38                     ` Davide Libenzi
     [not found]                 ` <Pine.LNX.4.55.0307181333520.5608@bigblue.dev.mcafeelabs.com>
2003-07-19 17:04                   ` Mike Galbraith
2003-07-21  0:21                     ` Davide Libenzi
     [not found]                 ` <Pine.LNX.4.55.0307201715130.3548@bigblue.dev.mcafeelabs.com>
2003-07-21  5:36                   ` Mike Galbraith
2003-07-21 12:39                   ` [NOTAPATCH] " Mike Galbraith
2003-07-21 17:13                     ` Mike Galbraith
2003-07-18 14:24         ` Con Kolivas
2003-07-18 15:50           ` Mike Galbraith
2003-07-18 13:46     ` Davide Libenzi
     [not found] ` <Pine.LNX.4.55.0307180630450.5077@bigblue.dev.mcafeelabs.com>
2003-07-18 15:41   ` Mike Galbraith
     [not found] ` <Pine.LNX.4.55.0307180951050.5608@bigblue.dev.mcafeelabs.com>
2003-07-18 18:49   ` Mike Galbraith
2003-07-16 20:20 Shane Shrybman
