linux-kernel.vger.kernel.org archive mirror
* Re: [patch] O(1) scheduler-H6 and nice +19
       [not found] <Pine.LNX.4.33.0201111723260.3212-100000@localhost.localdomain>
@ 2002-01-11 21:28 ` Ed Tomlinson
  2002-01-14  3:27   ` [patch] O(1) scheduler-H6/H7 " Ed Tomlinson
  0 siblings, 1 reply; 20+ messages in thread
From: Ed Tomlinson @ 2002-01-11 21:28 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel

On January 11, 2002 11:24 am, you wrote:
> On Wed, 9 Jan 2002, Ed Tomlinson wrote:
> > Noticed something about tasks running with nice 19.  They seem to
> > always get 25-35% of the cpu.  This happens with kernel compiles and
> > some other benchmarking processes.  If I kill the setiathome task, the
> > other processes shoot up to 90% and above.
>
> why don't you run the setiathome task at nice +19? that way it'll share CPU
> time with other niced processes.

Setiathome _is_ running at nice +19...  The H6 version cured the 2.4.17 boot
problem here.  Here are some numbers (H6) for you to consider:

make bzImage with setiathome running nice +19

make bzImage  391.11s user 30.85s system 62% cpu 11:17.37 total

make bzImage alone

make bzImage  397.33s user 32.14s system 92% cpu 7:43.58 total

Notice the large difference in run times...

System is: UP K6-III 400, 512M

Ed Tomlinson


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-11 21:28 ` [patch] O(1) scheduler-H6 and nice +19 Ed Tomlinson
@ 2002-01-14  3:27   ` Ed Tomlinson
  2002-01-14  3:45     ` Davide Libenzi
  0 siblings, 1 reply; 20+ messages in thread
From: Ed Tomlinson @ 2002-01-14  3:27 UTC (permalink / raw)
  To: mingo, linux-kernel

With pre3+H7, kernel compiles still take 40% longer with a setiathome
process running at nice +19.  This is _not_ the case with the old scheduler.

Ed Tomlinson

> On January 11, 2002 11:24 am, you wrote:
>> On Wed, 9 Jan 2002, Ed Tomlinson wrote:
>> > Noticed something about tasks running with nice 19.  They seem to
>> > always get 25-35% of the cpu.  This happens with kernel compiles and
>> > some other benchmarking processes.  If I kill the setiathome task, the
>> > other processes shoot up to 90% and above.
>>
>> why don't you run the setiathome task at nice +19? that way it'll share
>> CPU time with other niced processes.
> 
> Setiathome _is_ running at nice +19...  The H6 version cured the 2.4.17
> boot
> problem here.  Here are some numbers (H6) for you to consider:
> 
> make bzImage with setiathome running nice +19
> 
> make bzImage  391.11s user 30.85s system 62% cpu 11:17.37 total
> 
> make bzImage alone
> 
> make bzImage  397.33s user 32.14s system 92% cpu 7:43.58 total
> 
> Notice the large difference in run times...
> 
> System is: UP K6-III 400, 512M
> 
> Ed Tomlinson
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-14  3:27   ` [patch] O(1) scheduler-H6/H7 " Ed Tomlinson
@ 2002-01-14  3:45     ` Davide Libenzi
  2002-01-15  1:37       ` Ed Tomlinson
  0 siblings, 1 reply; 20+ messages in thread
From: Davide Libenzi @ 2002-01-14  3:45 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: mingo, linux-kernel

On Sun, 13 Jan 2002, Ed Tomlinson wrote:

> With pre3+H7, kernel compiles still take 40% longer with a setiathome
> process running at nice +19.  This is _not_ the case with the old scheduler.

Did you try to set MIN_TIMESLICE to 10 ( sched.h ) ?




- Davide



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-14  3:45     ` Davide Libenzi
@ 2002-01-15  1:37       ` Ed Tomlinson
  2002-01-15  1:50         ` Davide Libenzi
  0 siblings, 1 reply; 20+ messages in thread
From: Ed Tomlinson @ 2002-01-15  1:37 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: mingo, linux-kernel, Dave Jones

On January 13, 2002 10:45 pm, Davide Libenzi wrote:
> On Sun, 13 Jan 2002, Ed Tomlinson wrote:
> > With pre3+H7, kernel compiles still take 40% longer with a setiathome
> > process running at nice +19.  This is _not_ the case with the old
> > scheduler.
>
> Did you try to set MIN_TIMESLICE to 10 ( sched.h ) ?

This makes things worse - note the decreased cpu utilization...

make bzImage with setiathome running nice +19

make bzImage  424.33s user 32.21s system 48% cpu 15:48.69 total

What is this telling us?  

Ed Tomlinson

>>make bzImage  391.11s user 30.85s system 62% cpu 11:17.37 total
>>
>>make bzImage alone
>>
>>make bzImage  397.33s user 32.14s system 92% cpu 7:43.58 total
>>
>>Notice the large difference in run times...



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-15  1:37       ` Ed Tomlinson
@ 2002-01-15  1:50         ` Davide Libenzi
  2002-01-15  1:58           ` Davide Libenzi
  2002-01-15  2:18           ` Ed Tomlinson
  0 siblings, 2 replies; 20+ messages in thread
From: Davide Libenzi @ 2002-01-15  1:50 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Ingo Molnar, lkml, Dave Jones

On Mon, 14 Jan 2002, Ed Tomlinson wrote:

> On January 13, 2002 10:45 pm, Davide Libenzi wrote:
> > On Sun, 13 Jan 2002, Ed Tomlinson wrote:
> > > With pre3+H7, kernel compiles still take 40% longer with a setiathome
> > > process running at nice +19.  This is _not_ the case with the old
> > > scheduler.
> >
> > Did you try to set MIN_TIMESLICE to 10 ( sched.h ) ?
>
> This makes things worse - note the decreased cpu utilization...
>
> make bzImage with setiathome running nice +19
>
> make bzImage  424.33s user 32.21s system 48% cpu 15:48.69 total
>
> What is this telling us?

Doh !
Did you set this ?

#define MIN_TIMESLICE   (10 * HZ / 1000)




- Davide



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-15  1:50         ` Davide Libenzi
@ 2002-01-15  1:58           ` Davide Libenzi
  2002-01-15  2:18           ` Ed Tomlinson
  1 sibling, 0 replies; 20+ messages in thread
From: Davide Libenzi @ 2002-01-15  1:58 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Ingo Molnar, lkml, Dave Jones

On Mon, 14 Jan 2002, Davide Libenzi wrote:

On Mon, 14 Jan 2002, Ed Tomlinson wrote:

> On January 13, 2002 10:45 pm, Davide Libenzi wrote:
> > On Sun, 13 Jan 2002, Ed Tomlinson wrote:
> > > With pre3+H7, kernel compiles still take 40% longer with a setiathome
> > > process running at nice +19.  This is _not_ the case with the old
> > > scheduler.
> >
> > Did you try to set MIN_TIMESLICE to 10 ( sched.h ) ?
>
> This makes things worse - note the decreased cpu utilization...
>
> make bzImage with setiathome running nice +19
>
> make bzImage  424.33s user 32.21s system 48% cpu 15:48.69 total
>
> What is this telling us?

I got it, the new scheduler assigns time slices depending on priority.
Ingo, maybe it's better to assign them depending on nice, since we already
have different time slices based on priority ( interactive handling in
expire_task() ).




- Davide



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-15  1:50         ` Davide Libenzi
  2002-01-15  1:58           ` Davide Libenzi
@ 2002-01-15  2:18           ` Ed Tomlinson
  2002-01-15  2:33             ` Davide Libenzi
  1 sibling, 1 reply; 20+ messages in thread
From: Ed Tomlinson @ 2002-01-15  2:18 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Ingo Molnar, lkml, Dave Jones

On January 14, 2002 08:50 pm, Davide Libenzi wrote:
> On Mon, 14 Jan 2002, Ed Tomlinson wrote:
> > On January 13, 2002 10:45 pm, Davide Libenzi wrote:
> > > On Sun, 13 Jan 2002, Ed Tomlinson wrote:
> > > > With pre3+H7, kernel compiles still take 40% longer with a setiathome
> > > > process running at nice +19.  This is _not_ the case with the old
> > > > scheduler.
> > >
> > > Did you try to set MIN_TIMESLICE to 10 ( sched.h ) ?
> >
> > This makes things worse - note the decreased cpu utilization...
> >
> > make bzImage with setiathome running nice +19
> >
> > make bzImage  424.33s user 32.21s system 48% cpu 15:48.69 total
> >
> > What is this telling us?
>
> Doh !
> Did you set this ?
>
> #define MIN_TIMESLICE  (10 * HZ / 1000)

I set:

#define MIN_TIMESLICE  10

Now I am trying

#define MIN_TIMESLICE  1

which, looking at the monitors, gives about 80% cpu to the compile.
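
As a side note, here is how those three candidate settings work out in
wall-clock terms, assuming HZ=100 (the usual 2.4 x86 tick rate -- an
assumption, not something stated in the patch).  Timeslices are counted
in jiffies of 1000/HZ ms each; this is illustrative arithmetic only, not
code from the scheduler patch:

#include <stdio.h>

#define HZ 100				/* assumed 2.4 x86 default */

int main(void)
{
	int candidates[] = { 10 * HZ / 1000, 10, 1 };	/* jiffies */
	int i;

	for (i = 0; i < 3; i++)
		printf("MIN_TIMESLICE = %2d jiffies = %3d ms\n",
		       candidates[i], candidates[i] * 1000 / HZ);
	return 0;
}

So "(10 * HZ / 1000)" and a literal 1 both mean a 10 ms minimum slice,
while a literal 10 means 100 ms - ten times longer, which fits the niced
setiathome holding the CPU longer and the compile dropping to 48%.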

Ed Tomlinson











^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-15  2:18           ` Ed Tomlinson
@ 2002-01-15  2:33             ` Davide Libenzi
  2002-01-15  3:19               ` Ed Tomlinson
  0 siblings, 1 reply; 20+ messages in thread
From: Davide Libenzi @ 2002-01-15  2:33 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Ingo Molnar, lkml, Dave Jones

On Mon, 14 Jan 2002, Ed Tomlinson wrote:

> On January 14, 2002 08:50 pm, Davide Libenzi wrote:
> > On Mon, 14 Jan 2002, Ed Tomlinson wrote:
> > > On January 13, 2002 10:45 pm, Davide Libenzi wrote:
> > > > On Sun, 13 Jan 2002, Ed Tomlinson wrote:
> > > > > With pre3+H7, kernel compiles still take 40% longer with a setiathome
> > > > > process running at nice +19.  This is _not_ the case with the old
> > > > > scheduler.
> > > >
> > > > Did you try to set MIN_TIMESLICE to 10 ( sched.h ) ?
> > >
> > > This makes things worse - note the decreased cpu utilization...
> > >
> > > make bzImage with setiathome running nice +19
> > >
> > > make bzImage  424.33s user 32.21s system 48% cpu 15:48.69 total
> > >
> > > What is this telling us?
> >
> > Doh !
> > Did you set this ?
> >
> > #define MIN_TIMESLICE  (10 * HZ / 1000)
>
> I set:
>
> #define MIN_TIMESLICE  10
>
> Now I am tring
>
> #define MIN_TIMESLICE  1
>
> which, looksing at monitors, gives about 80% cpu to the compile

try to replace :

PRIO_TO_TIMESLICE() and RT_PRIO_TO_TIMESLICE() with :

#define NICE_TO_TIMESLICE(n)    (MIN_TIMESLICE + ((MAX_TIMESLICE - \
	MIN_TIMESLICE) * ((n) + 20)) / 39)


NICE_TO_TIMESLICE(p->__nice)


I'm currently running it on my machine, but I don't want this to change
the 'liquid' interactive feel that Ingo and I have got with the new code.




- Davide



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-15  2:33             ` Davide Libenzi
@ 2002-01-15  3:19               ` Ed Tomlinson
  2002-01-15  3:27                 ` Davide Libenzi
  0 siblings, 1 reply; 20+ messages in thread
From: Ed Tomlinson @ 2002-01-15  3:19 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Ingo Molnar, lkml, Dave Jones

On January 14, 2002 09:33 pm, Davide Libenzi wrote:
> try to replace :
>
> PRIO_TO_TIMESLICE() and RT_PRIO_TO_TIMESLICE() with :
>
> #define NICE_TO_TIMESLICE(n)    (MIN_TIMESLICE + ((MAX_TIMESLICE - \
> 	MIN_TIMESLICE) * ((n) + 20)) / 39)
>
>
> NICE_TO_TIMESLICE(p->__nice)

Not sure about this change.  gkrellm shows the compile getting about 40%
cpu.  The best result here seems to be with a larger range of timeslices, i.e.
1-15 ((10*HZ)/1000...), which instead lets the compile get 80% of the cpu.  I
wonder if this might be the way to go?

Ed  

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7 and nice +19
  2002-01-15  3:19               ` Ed Tomlinson
@ 2002-01-15  3:27                 ` Davide Libenzi
  2002-01-15 23:48                   ` [patch] O(1) scheduler-H6/H7/I0 " Ed Tomlinson
  0 siblings, 1 reply; 20+ messages in thread
From: Davide Libenzi @ 2002-01-15  3:27 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Ingo Molnar, lkml, Dave Jones

On Mon, 14 Jan 2002, Ed Tomlinson wrote:

> On January 14, 2002 09:33 pm, Davide Libenzi wrote:
> > try to replace :
> >
> > PRIO_TO_TIMESLICE() and RT_PRIO_TO_TIMESLICE() with :
> >
> > #define NICE_TO_TIMESLICE(n)    (MIN_TIMESLICE + ((MAX_TIMESLICE - \
> > 	MIN_TIMESLICE) * ((n) + 20)) / 39)
> >
> >
> > NICE_TO_TIMESLICE(p->__nice)
>
> Not sure about this change.  gkrellm shows the compile getting about 40%
> cpu.  Best result here seems to be with a larger range of timeslices.  ie
> 1-15 ((10*HZ)/1000...) instead lets the compile get 80% of the cpu.  wonder
> if this might be the way to go?

What's the MIN/MAX_TIMESLICE range that you used to get 80% of cpu ?




- Davide



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-15  3:27                 ` Davide Libenzi
@ 2002-01-15 23:48                   ` Ed Tomlinson
  2002-01-15 23:56                     ` Davide Libenzi
  2002-01-16  1:49                     ` Ingo Molnar
  0 siblings, 2 replies; 20+ messages in thread
From: Ed Tomlinson @ 2002-01-15 23:48 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Ingo Molnar, lkml, Dave Jones

The 2.4.17-I0 patch makes things much better here.  Does this one
suffer from the same bugs that the 2.5.2 version has?  

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
  790 ed        44  19 14320  13M   640 R N  69.4  2.7 166:18 setiathome
 7676 ed         0   0 14908  14M 11036 R    16.7  2.8   0:13 kmail
 5703 root       0 -10 82596  23M  1808 R <  11.2  4.6   2:23 XFree86
 7725 ed         0   0  1016 1016   776 R     1.3  0.1   0:00 top
 5803 ed         0   0  3764 3764  2904 R     0.5  0.7   0:15 gkrellm
 7720 ed         0   0  9752 9752  7856 R     0.3  1.8   0:04 kdeinit
 5725 ed         0   0  7524 7520  6888 S     0.1  1.4   0:01 kdeinit
    1 root       0   0   520  472   452 S     0.0  0.0   0:07 init
    2 root       0   0     0    0     0 SW    0.0  0.0   0:00 keventd
    3 root      17  19     0    0     0 SWN   0.0  0.0   0:00 ksoftirqd_CPU0
    4 root       0   0     0    0     0 SW    0.0  0.0   0:00 kswapd
    5 root      25   0     0    0     0 SW    0.0  0.0   0:00 bdflush
    6 root       0   0     0    0     0 SW    0.0  0.0   0:02 kupdated
    7 root      12   0     0    0     0 SW    0.0  0.0   0:00 khubd
   18 root       0   0     0    0     0 SW    0.0  0.0   0:00 kreiserfsd
   60 root       0   0     0    0     0 SW    0.0  0.0   0:00 mdrecoveryd
  219 root       0   0     0    0     0 SW    0.0  0.0   0:00 usb-storage-0
  220 root       0   0     0    0     0 SW    0.0  0.0   0:00 scsi_eh_0
  234 root       0   0   648  644   528 S     0.0  0.1   0:00 syslogd
  238 root      -2   0  1344 1344  1264 S     0.0  0.2   0:00 watchdog
  243 root       0   0  1184 1176   456 S     0.0  0.2   0:00 klogd
  249 daemon     0   0   472  460   380 S     0.0  0.0   0:00 portmap

The major difference from the older version of the patch is that top shows many
processes with PRI 0.  I am not sure this is intended?

Thanks
Ed Tomlinson

On January 14, 2002 10:27 pm, Davide Libenzi wrote:
> On Mon, 14 Jan 2002, Ed Tomlinson wrote:
> > On January 14, 2002 09:33 pm, Davide Libenzi wrote:
> > > try to replace :
> > >
> > > PRIO_TO_TIMESLICE() and RT_PRIO_TO_TIMESLICE() with :
> > >
> > > #define NICE_TO_TIMESLICE(n)    (MIN_TIMESLICE + ((MAX_TIMESLICE - \
> > > 	MIN_TIMESLICE) * ((n) + 20)) / 39)
> > >
> > >
> > > NICE_TO_TIMESLICE(p->__nice)
> >
> > Not sure about this change.  gkrellm shows the compile getting about 40%
> > cpu.  Best result here seems to be with a larger range of timeslices.  ie
> > 1-15 ((10*HZ)/1000...) instead lets the compile get 80% of the cpu. 
> > wonder if this might be the way to go?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-15 23:48                   ` [patch] O(1) scheduler-H6/H7/I0 " Ed Tomlinson
@ 2002-01-15 23:56                     ` Davide Libenzi
  2002-01-16  1:49                     ` Ingo Molnar
  1 sibling, 0 replies; 20+ messages in thread
From: Davide Libenzi @ 2002-01-15 23:56 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Ingo Molnar, lkml, Dave Jones

On Tue, 15 Jan 2002, Ed Tomlinson wrote:

> The 2.4.17-I0 patch makes things much better here.  Does this one
> suffer from the same bugs that the 2.5.2 version has?
>
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>   790 ed        44  19 14320  13M   640 R N  69.4  2.7 166:18 setiathome
>  7676 ed         0   0 14908  14M 11036 R    16.7  2.8   0:13 kmail
>  5703 root       0 -10 82596  23M  1808 R <  11.2  4.6   2:23 XFree86
>  7725 ed         0   0  1016 1016   776 R     1.3  0.1   0:00 top
>  5803 ed         0   0  3764 3764  2904 R     0.5  0.7   0:15 gkrellm
>  7720 ed         0   0  9752 9752  7856 R     0.3  1.8   0:04 kdeinit
>  5725 ed         0   0  7524 7520  6888 S     0.1  1.4   0:01 kdeinit
>     1 root       0   0   520  472   452 S     0.0  0.0   0:07 init
>     2 root       0   0     0    0     0 SW    0.0  0.0   0:00 keventd
>     3 root      17  19     0    0     0 SWN   0.0  0.0   0:00 ksoftirqd_CPU0
>     4 root       0   0     0    0     0 SW    0.0  0.0   0:00 kswapd
>     5 root      25   0     0    0     0 SW    0.0  0.0   0:00 bdflush
>     6 root       0   0     0    0     0 SW    0.0  0.0   0:02 kupdated
>     7 root      12   0     0    0     0 SW    0.0  0.0   0:00 khubd
>    18 root       0   0     0    0     0 SW    0.0  0.0   0:00 kreiserfsd
>    60 root       0   0     0    0     0 SW    0.0  0.0   0:00 mdrecoveryd
>   219 root       0   0     0    0     0 SW    0.0  0.0   0:00 usb-storage-0
>   220 root       0   0     0    0     0 SW    0.0  0.0   0:00 scsi_eh_0
>   234 root       0   0   648  644   528 S     0.0  0.1   0:00 syslogd
>   238 root      -2   0  1344 1344  1264 S     0.0  0.2   0:00 watchdog
>   243 root       0   0  1184 1176   456 S     0.0  0.2   0:00 klogd
>   249 daemon     0   0   472  460   380 S     0.0  0.0   0:00 portmap
>
> Major difference from older version of the patch is that top shows many
> processes with PRI 0.   I am not sure this is intended?
>
> Thanks
> Ed Tomlinson
>
> On January 14, 2002 10:27 pm, Davide Libenzi wrote:
> > On Mon, 14 Jan 2002, Ed Tomlinson wrote:
> > > On January 14, 2002 09:33 pm, Davide Libenzi wrote:
> > > > try to replace :
> > > >
> > > > PRIO_TO_TIMESLICE() and RT_PRIO_TO_TIMESLICE() with :
> > > >
> > > > #define NICE_TO_TIMESLICE(n)    (MIN_TIMESLICE + ((MAX_TIMESLICE - \
> > > > 	MIN_TIMESLICE) * ((n) + 20)) / 39)
> > > >
> > > >
> > > > NICE_TO_TIMESLICE(p->__nice)
> > >
> > > Not sure about this change.  gkrellm shows the compile getting about 40%
> > > cpu.  Best result here seems to be with a larger range of timeslices.  ie
> > > 1-15 ((10*HZ)/1000...) instead lets the compile get 80% of the cpu.
> > > wonder if this might be the way to go?

The above macro is wrong, this is right :

#define NICE_TO_TIMESLICE(n)    (MIN_TIMESLICE + ((MAX_TIMESLICE - \
	MIN_TIMESLICE) * (19 - (n))) / 39)
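
To see the difference at the extremes, here is a small illustrative
program (the MIN/MAX_TIMESLICE values below are made up; only the
direction of the mapping is the point):

#include <stdio.h>

#define MIN_TIMESLICE	1	/* illustrative jiffies, not the patch values */
#define MAX_TIMESLICE	30

/* first version posted earlier in the thread */
#define NICE_TO_TIMESLICE_V1(n)	(MIN_TIMESLICE + ((MAX_TIMESLICE - \
	MIN_TIMESLICE) * ((n) + 20)) / 39)

/* corrected version above */
#define NICE_TO_TIMESLICE_V2(n)	(MIN_TIMESLICE + ((MAX_TIMESLICE - \
	MIN_TIMESLICE) * (19 - (n))) / 39)

int main(void)
{
	int nice_val;

	for (nice_val = -20; nice_val <= 19; nice_val += 39)
		printf("nice %+3d: v1 = %2d jiffies, v2 = %2d jiffies\n",
		       nice_val, NICE_TO_TIMESLICE_V1(nice_val),
		       NICE_TO_TIMESLICE_V2(nice_val));
	return 0;
}

The first form hands the longest slice to nice +19 (which matches the
compile only getting ~40% of the cpu in Ed's test), while the corrected
form gives nice -20 the longest slice and nice +19 the shortest.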




- Davide



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-16  1:49                     ` Ingo Molnar
@ 2002-01-16  0:44                       ` Ed Tomlinson
  2002-01-16  2:48                         ` Ingo Molnar
  2002-01-16  2:06                       ` Rene Rebe
  1 sibling, 1 reply; 20+ messages in thread
From: Ed Tomlinson @ 2002-01-16  0:44 UTC (permalink / raw)
  To: mingo; +Cc: Davide Libenzi, lkml

On January 15, 2002 08:49 pm, Ingo Molnar wrote:
> On Tue, 15 Jan 2002, Ed Tomlinson wrote:
> > The 2.4.17-I0 patch makes things much better here.  Does this one
> > suffer from the same bugs that the 2.5.2 version has?
>
> i'll do a -I3 patch in a minute.
>
> > Major difference from older version of the patch is that top shows
> > many processes with PRI 0.  I am not sure this is intended?
>
> yes, it's intended. Lots of interactive (idle) tasks. Right now the time
> under which we detect a task as interactive is pretty short, but if you
> run 'top' with 's 0.3' then you can see how tasks grow/shrink their
> priorities, depending on the load they generate.

OK, I3 also works fine with respect to my nice test.  One thing I do note,
and I am not too sure how it might be fixed, is what happens when starting
what will be interactive programs.

Watching with top 's 0.3' I can see them lose priority in the 3-10 seconds it
takes them to set up.  This is not that critical if they are the only thing trying
to run.  If you have another (not niced) task eating cpu (like a kernel compile)
then interactive startup time suffers.  Startup time is wait time that _is_ noticed
by users.

Is there some way we could tell the scheduler, or the scheduler could learn, that
a given _program_ is usually interactive, so it should wait a bit (10 seconds on my
box would work) before starting to increase its priority numbers?

TIA
Ed Tomlinson

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-15 23:48                   ` [patch] O(1) scheduler-H6/H7/I0 " Ed Tomlinson
  2002-01-15 23:56                     ` Davide Libenzi
@ 2002-01-16  1:49                     ` Ingo Molnar
  2002-01-16  0:44                       ` Ed Tomlinson
  2002-01-16  2:06                       ` Rene Rebe
  1 sibling, 2 replies; 20+ messages in thread
From: Ingo Molnar @ 2002-01-16  1:49 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Davide Libenzi, lkml, Dave Jones


On Tue, 15 Jan 2002, Ed Tomlinson wrote:

> The 2.4.17-I0 patch makes things much better here.  Does this one
> suffer from the same bugs that the 2.5.2 version has?

i'll do a -I3 patch in a minute.

> Major difference from older version of the patch is that top shows
> many processes with PRI 0.  I am not sure this is intended?

yes, it's intended. Lots of interactive (idle) tasks. Right now the time
under which we detect a task as interactive is pretty short, but if you
run 'top' with 's 0.3' then you can see how tasks grow/shrink their
priorities, depending on the load they generate.

	Ingo


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-16  2:48                         ` Ingo Molnar
@ 2002-01-16  1:59                           ` Robert Love
  2002-01-16  2:04                             ` Davide Libenzi
  0 siblings, 1 reply; 20+ messages in thread
From: Robert Love @ 2002-01-16  1:59 UTC (permalink / raw)
  To: mingo; +Cc: Ed Tomlinson, Davide Libenzi, lkml, Linus Torvalds

On Tue, 2002-01-15 at 21:48, Ingo Molnar wrote:

> there is a way: renicing. Either use nice +19 on the compilation job or
> use nice -5 on the 'known good' tasks. Perhaps we should allow a nice
> decrease of up to -5 from the default level - and things like KDE or Gnome
> could renice interactive tasks, while things like compilation jobs would
> run on the default priority.

This isn't a bad idea, as long as we don't use it as a crutch or
excuse -- that is, answering every scheduling problem with "properly nice
your tasks".  The scheduler should be smart enough on its own, to some degree.

FWIW, Solaris actually implements a completely different scheduling
policy, SCHED_INTERACT or something.  It is for windowed tasks in X --
they get a large interactivity bonus.

	Robert Love


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-16  1:59                           ` Robert Love
@ 2002-01-16  2:04                             ` Davide Libenzi
  2002-01-16  2:59                               ` Robert Love
  0 siblings, 1 reply; 20+ messages in thread
From: Davide Libenzi @ 2002-01-16  2:04 UTC (permalink / raw)
  To: Robert Love; +Cc: Ingo Molnar, Ed Tomlinson, lkml, Linus Torvalds

On 15 Jan 2002, Robert Love wrote:

> On Tue, 2002-01-15 at 21:48, Ingo Molnar wrote:
>
> > there is a way: renicing. Either use nice +19 on the compilation job or
> > use nice -5 on the 'known good' tasks. Perhaps we should allow a nice
> > decrease of up to -5 from the default level - and things like KDE or Gnome
> > could renice interactive tasks, while things like compilation jobs would
> > run on the default priority.
>
> This isn't a bad idea, as long as we don't use it as a crutch or
> excuse.  That is, answer scheduling problems with "properly nice your
> tasks" -- the scheduler should be smart enough, to some degree.
>
> FWIW, Solaris actually implements a completely different scheduling
> policy, SCHED_INTERACT or something.  It is for windowed tasks in X --
> they get a large interactivity bonus.

Now ( with 2.5.3-pre1 ) interactivity is *very good* but SCHED_INTERACT
would help *a lot* to get things even more right.




- Davide



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-16  1:49                     ` Ingo Molnar
  2002-01-16  0:44                       ` Ed Tomlinson
@ 2002-01-16  2:06                       ` Rene Rebe
  1 sibling, 0 replies; 20+ messages in thread
From: Rene Rebe @ 2002-01-16  2:06 UTC (permalink / raw)
  To: tomlins; +Cc: mingo, davidel, linux-kernel

Hi.

I3 still shows exactly the same behavior. For a test I simply compiled
ALSA and executed xstart. The screen went black (for a minute?) and
only continued starting once the ALSA compile finished.

Also, dragging an xterm around (during a compilation) results in a 1-2
frames per second refresh rate.

The last sched-patch I tried was G1; it worked fine.

Athlon XP 1700+, SiS735, 512MB RAM, Matrox-G450 ...

From: Ed Tomlinson <tomlins@cam.org>
Subject: Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
Date: Tue, 15 Jan 2002 19:44:51 -0500

> On January 15, 2002 08:49 pm, Ingo Molnar wrote:
> > On Tue, 15 Jan 2002, Ed Tomlinson wrote:
> > > The 2.4.17-I0 patch makes things much better here.  Does this one
> > > suffer from the same bugs that the 2.5.2 version has?
> >
> > i'll do a -I3 patch in a minute.
> >
> > > Major difference from older version of the patch is that top shows
> > > many processes with PRI 0.  I am not sure this is intended?
> >
> > yes, it's intended. Lots of interactive (idle) tasks. Right now the time
> > under which we detect a task as interactive is pretty short, but if you
> > run 'top' with 's 0.3' then you can see how tasks grow/shrink their
> > priorities, depending on the load they generate.
> 
> OK I3 also works fine with respect to my nice test.  One thing I do note
> and I am not too sure how it might be fixed, is what happens when starting 
> what will be interactive programs.  
> 
> Watching with top 's 0.3' I can see them lose priority in the 3-10 seconds it
> takes them to setup.  This is not that critical if they are the only thing trying
> to run.  If you have another (not niced) task eating cpu (like a kernel compile) 
> then intactive startup time suffers.  Startup time is wait time that _is_ noticed
> by users.
> 
> Is there some way we could tell the scheduler or the scheduler could learn that 
> a given _program_ is usually interactive so it should wait at bit (10 seconds on my 
> box would work) before starting to increase its priority numbers?
> 
> TIA
> Ed Tomlinson

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-16  0:44                       ` Ed Tomlinson
@ 2002-01-16  2:48                         ` Ingo Molnar
  2002-01-16  1:59                           ` Robert Love
  0 siblings, 1 reply; 20+ messages in thread
From: Ingo Molnar @ 2002-01-16  2:48 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Davide Libenzi, lkml, Linus Torvalds


On Tue, 15 Jan 2002, Ed Tomlinson wrote:

> OK I3 also works fine with respect to my nice test. [...]

good!

> Watching with top 's 0.3' I can see them lose priority in the 3-10
> seconds it takes them to setup.  This is not that critical if they are
> the only thing trying to run.  If you have another (not niced) task
> eating cpu (like a kernel compile)  then intactive startup time
> suffers.  Startup time is wait time that _is_ noticed by users.

well, the kernel needs some 'proof' that a task is interactive, before it
gives it special attention.

the scheduler will give newly started up tasks some credit (if the parent
is interactive), but if they take too long to start up then there is
nothing it can do but to penalize them.
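
Purely as an illustration of that kind of heuristic (this is not code
from Ingo's patch, just a sketch of the idea described above):

#include <stdio.h>

/* A child inherits part of the parent's interactivity credit, and
 * burns it off while it stays runnable on the CPU. */
struct task {
	int credit;			/* accumulated interactivity credit */
};

static void fork_credit(const struct task *parent, struct task *child)
{
	child->credit = parent->credit / 2;	/* head start from the parent */
}

static void run_one_tick(struct task *t)
{
	if (t->credit > 0)
		t->credit--;	/* a long CPU-bound startup erodes the credit */
}

int main(void)
{
	struct task parent = { 10 }, child;
	int tick;

	fork_credit(&parent, &child);
	for (tick = 0; tick < 8; tick++)	/* slow, CPU-hungry startup */
		run_one_tick(&child);
	printf("credit left after startup: %d\n", child.credit);
	return 0;
}

A task that starts up quickly keeps most of the inherited credit; one
that grinds through several seconds of setup ends up with none and is
treated like any other CPU hog.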

> Is there some way we could tell the scheduler or the scheduler could
> learn that a given _program_ is usually interactive so it should wait
> at bit (10 seconds on my box would work) before starting to increase
> its priority numbers?

there is a way: renicing. Either use nice +19 on the compilation job or
use nice -5 on the 'known good' tasks. Perhaps we should allow a nice
decrease of up to -5 from the default level - and things like KDE or Gnome
could renice interactive tasks, while things like compilation jobs would
run on the default priority.
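
For completeness, a minimal userspace sketch (not from any of the patches
here) of how a desktop environment could apply such a renice with the
standard setpriority() call; note that setting a negative nice value
requires the appropriate privilege:

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

/* hypothetical helper: renice a task the desktop knows is interactive */
static int renice_interactive(pid_t pid, int nice_level)
{
	if (setpriority(PRIO_PROCESS, pid, nice_level) < 0) {
		perror("setpriority");
		return -1;
	}
	return 0;
}

int main(void)
{
	/* example: give the current process the -5 boost discussed above */
	return renice_interactive(getpid(), -5) ? 1 : 0;
}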

	Ingo


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-16  2:04                             ` Davide Libenzi
@ 2002-01-16  2:59                               ` Robert Love
  2002-01-16  3:04                                 ` Linus Torvalds
  0 siblings, 1 reply; 20+ messages in thread
From: Robert Love @ 2002-01-16  2:59 UTC (permalink / raw)
  To: Davide Libenzi; +Cc: Ingo Molnar, Ed Tomlinson, lkml, Linus Torvalds

On Tue, 2002-01-15 at 21:04, Davide Libenzi wrote:

> On 15 Jan 2002, Robert Love wrote:
> > This isn't a bad idea, as long as we don't use it as a crutch or
> > excuse.  That is, answer scheduling problems with "properly nice your
> > tasks" -- the scheduler should be smart enough, to some degree.
> >
> > FWIW, Solaris actually implements a completely different scheduling
> > policy, SCHED_INTERACT or something.  It is for windowed tasks in X --
> > they get a large interactivity bonus.

> Now ( with 2.5.3-pre1 ) intractivity is *very good* but SCHED_INTERACT
> would help *a lot* to get things even more right.

I looked it up; it's called class IA.  I don't know if it grows from a
limitation of their scheduler (i.e. they can't calculate priority and be
as fair to interactive tasks as us) or if it offers a fundamental
advantage.  I suspect there are a myriad of things we can do with
an interactive/GUI scheduling policy.

One thing is that, since their kernel is preemptible, it marks processes
that more or less always deserve a scheduling boost based on interactivity,
and thus their interactivity is quite nice.

	Robert Love


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [patch] O(1) scheduler-H6/H7/I0 and nice +19
  2002-01-16  2:59                               ` Robert Love
@ 2002-01-16  3:04                                 ` Linus Torvalds
  0 siblings, 0 replies; 20+ messages in thread
From: Linus Torvalds @ 2002-01-16  3:04 UTC (permalink / raw)
  To: Robert Love; +Cc: Davide Libenzi, Ingo Molnar, Ed Tomlinson, lkml


On 15 Jan 2002, Robert Love wrote:
>
> I looked it up; its called class IA.  I don't know if it grows from a
> limitation of their scheduler (i.e. they can't calculate priority and be
> as fair to interactive tasks as us) or if it offers a fundamental
> advantage.  I suspect their are a myriad of things things we can do with
> an interactive/GUI scheduling policy.

I really doubt there is a _single_ interesting case apart from X itself.

X is kind of special, in that the X server can use a lot of CPU time if
you have people scrolling data on the screen etc, yet you want it to be
interactive because it does its own scheduling of actual user input etc.

So you can have the X server showing all the signs of a CPU hog, while
still being really important.

The current scheduler is pretty good at handling it, and since I don't
believe it is a generic problem, and since there _is_ a specific answer
for the specific case of X (ie "renice -10 X") already, I see no real
reason to have a new scheduling class.

(That said, I personally have been running X _without_ the renice as my test
of scheduler interactivity goodness. It breaks down in the cases where X
really becomes CPU-bound, but if you ignore the really pathological case
where X is a CPU-hog it's a really good interactivity tester. The current
scheduler passes at least my personal criteria with flying colors in this
sense).

In short: I don't want to need to renice X to get good interactive
behaviour under any normal load, but I want even _less_ to start making up
scheduler classes for it.

		Linus


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2002-01-16  3:04 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <Pine.LNX.4.33.0201111723260.3212-100000@localhost.localdomain>
2002-01-11 21:28 ` [patch] O(1) scheduler-H6 and nice +19 Ed Tomlinson
2002-01-14  3:27   ` [patch] O(1) scheduler-H6/H7 " Ed Tomlinson
2002-01-14  3:45     ` Davide Libenzi
2002-01-15  1:37       ` Ed Tomlinson
2002-01-15  1:50         ` Davide Libenzi
2002-01-15  1:58           ` Davide Libenzi
2002-01-15  2:18           ` Ed Tomlinson
2002-01-15  2:33             ` Davide Libenzi
2002-01-15  3:19               ` Ed Tomlinson
2002-01-15  3:27                 ` Davide Libenzi
2002-01-15 23:48                   ` [patch] O(1) scheduler-H6/H7/I0 " Ed Tomlinson
2002-01-15 23:56                     ` Davide Libenzi
2002-01-16  1:49                     ` Ingo Molnar
2002-01-16  0:44                       ` Ed Tomlinson
2002-01-16  2:48                         ` Ingo Molnar
2002-01-16  1:59                           ` Robert Love
2002-01-16  2:04                             ` Davide Libenzi
2002-01-16  2:59                               ` Robert Love
2002-01-16  3:04                                 ` Linus Torvalds
2002-01-16  2:06                       ` Rene Rebe

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).