linux-kernel.vger.kernel.org archive mirror
* [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
@ 2007-03-11  3:57 Con Kolivas
  2007-03-11 11:39 ` Mike Galbraith
  0 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-11  3:57 UTC (permalink / raw)
  To: linux kernel mailing list, ck list, Andrew Morton, Ingo Molnar

What follows this email is a patch series for the latest version of the RSDL 
cpu scheduler (i.e. v0.29). I have addressed all the bugs that I am able to 
reproduce in this version, so if some people would be kind enough to test 
whether there are any hidden bugs or oopses lurking, it would be nice to know 
in anticipation of putting this back into -mm. Thanks.

Full patch for 2.6.21-rc3-mm2:
http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0.29.patch

Patch series (which will follow this email):
http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2/

Changelog:
- Fixed the longstanding buggy bitmap problem which occurred due to swapping 
arrays when there were still tasks on the active array.
- Fixed preemption of realtime tasks when rt prio inheritance elevated their 
priority.
- Made kernel threads not be reniced to -5 by default.
- Changed sched_yield behaviour of SCHED_NORMAL (SCHED_OTHER) to resemble 
realtime task yielding (a rough sketch of what that means follows below).
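
A simplified illustration of what realtime-style yielding means (invented
stand-in types, not the actual RSDL change and not 2.6.21 kernel structures):
for SCHED_FIFO/SCHED_RR tasks, sched_yield() simply requeues the caller at the
tail of the run list for its current priority, so it only gives way to tasks of
equal or higher priority; the idea is for SCHED_NORMAL yield to behave
similarly rather than being penalised more heavily.

#include <stddef.h>

/* Invented stand-in types -- the real 2.6.21 runqueue looks nothing like this. */
struct toy_task {
	struct toy_task *next;
	int prio;			/* priority this task runs at */
};

struct toy_queue {
	struct toy_task *head, *tail;	/* FIFO of runnable tasks at one priority */
};

/* Realtime-style yield: go to the back of the line for the *same* priority. */
static void toy_yield(struct toy_queue *q, struct toy_task *p)
{
	if (q->head != p || !p->next)	/* not at the head, or alone: nothing to do */
		return;
	q->head = p->next;
	p->next = NULL;
	q->tail->next = p;
	q->tail = p;
}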

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11  3:57 [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2 Con Kolivas
@ 2007-03-11 11:39 ` Mike Galbraith
  2007-03-11 11:48   ` Con Kolivas
                     ` (2 more replies)
  0 siblings, 3 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-11 11:39 UTC (permalink / raw)
  To: Con Kolivas
  Cc: linux kernel mailing list, ck list, Andrew Morton, Ingo Molnar

Hi Con,

On Sun, 2007-03-11 at 14:57 +1100, Con Kolivas wrote:
> What follows this email is a patch series for the latest version of the RSDL 
> cpu scheduler (ie v0.29). I have addressed all bugs that I am able to 
> reproduce in this version so if some people would be kind enough to test if 
> there are any hidden bugs or oops lurking, it would be nice to know in 
> anticipation of putting this back in -mm. Thanks.
> 
> Full patch for 2.6.21-rc3-mm2:
> http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0.29.patch

I'm seeing a cpu distribution problem running this on my P4 box.

Scenario:
listening to music collection (mp3) via Amarok.  Enable Amarok
visualization gforce, and size such that X and gforce each use ~50% cpu.
Start rip/encode of new CD with grip/lame encoder.  Lame is set to use
both cpus, at nice 5.  Once the encoders start, they receive
considerably more cpu than nice 0 X/Gforce, taking ~120% and leaving the
remaining 80% for X/Gforce and Amarok (when it updates its ~12k entry
database) to squabble over.

With 2.6.21-rc3, X/Gforce maintain their ~50% cpu (remain smooth), and
the encoders (100% cpu bound) get what's left when Amarok isn't eating it.

I plunked the above patch into plain 2.6.21-rc3 and retested to
eliminate other mm tree differences, and it's repeatable.  The nice 5
cpu hogs always receive considerably more than the nice 0 sleepers.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 11:39 ` Mike Galbraith
@ 2007-03-11 11:48   ` Con Kolivas
  2007-03-11 12:08     ` Mike Galbraith
  2007-03-11 12:10   ` Ingo Molnar
  2007-03-11 14:32   ` Gene Heskett
  2 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-11 11:48 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: linux kernel mailing list, ck list, Andrew Morton, Ingo Molnar

On Sunday 11 March 2007 22:39, Mike Galbraith wrote:
> Hi Con,
>
> On Sun, 2007-03-11 at 14:57 +1100, Con Kolivas wrote:
> > What follows this email is a patch series for the latest version of the
> > RSDL cpu scheduler (ie v0.29). I have addressed all bugs that I am able
> > to reproduce in this version so if some people would be kind enough to
> > test if there are any hidden bugs or oops lurking, it would be nice to
> > know in anticipation of putting this back in -mm. Thanks.
> >
> > Full patch for 2.6.21-rc3-mm2:
> > http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0.29
> >.patch
>
> I'm seeing a cpu distribution problem running this on my P4 box.
>
> Scenario:
> listening to music collection (mp3) via Amarok.  Enable Amarok
> visualization gforce, and size such that X and gforce each use ~50% cpu.
> Start rip/encode of new CD with grip/lame encoder.  Lame is set to use
> both cpus, at nice 5.  Once the encoders start, they receive
> considerable more cpu than nice 0 X/Gforce, taking ~120% and leaving the
> remaining 80% for X/Gforce and Amarok (when it updates it's ~12k entry
> database) to squabble over.
>
> With 2.6.21-rc3,  X/Gforce maintain their ~50% cpu (remain smooth), and
> the encoders (100%cpu bound) get whats left when Amarok isn't eating it.
>
> I plunked the above patch into plain 2.6.21-rc3 and retested to
> eliminate other mm tree differences, and it's repeatable.  The nice 5
> cpu hogs always receive considerably more that the nice 0 sleepers.

Thanks for the report. I'm assuming you're describing a single hyperthreaded P4 
here in SMP mode, so 2 logical cores. Can you elaborate on whether there is 
any difference as to which cpu things are bound to as well? Can you also see 
what happens with lame not niced to +5 (i.e. at 0) and with lame at nice +19?

Thanks.

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 11:48   ` Con Kolivas
@ 2007-03-11 12:08     ` Mike Galbraith
  0 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-11 12:08 UTC (permalink / raw)
  To: Con Kolivas
  Cc: linux kernel mailing list, ck list, Andrew Morton, Ingo Molnar

On Sun, 2007-03-11 at 22:48 +1100, Con Kolivas wrote:
> 
> Thanks for the report. I'm assuming you're describing a single hyperthread P4 
> here in SMP mode so 2 logical cores. Can you elaborate on whether there is 
> any difference as to which cpu things are bound to as well? Can you also see 
> what happens with lame not niced to +5 (ie at 0) and with lame at nice +19.

Yes, one P4/HT/SMP. No change at nice 0, but setting the encoders to
nice 19 did put X/gforce ~back where they were with 2.6.21-rc3.  Tasks
don't seem to be bound to any particular cpu; they rely on load balancing
(which appears to be working).

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 11:39 ` Mike Galbraith
  2007-03-11 11:48   ` Con Kolivas
@ 2007-03-11 12:10   ` Ingo Molnar
  2007-03-11 12:20     ` Mike Galbraith
  2007-03-12  7:22     ` Mike Galbraith
  2007-03-11 14:32   ` Gene Heskett
  2 siblings, 2 replies; 91+ messages in thread
From: Ingo Molnar @ 2007-03-11 12:10 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, linux kernel mailing list, ck list, Andrew Morton


* Mike Galbraith <efault@gmx.de> wrote:

> > Full patch for 2.6.21-rc3-mm2: 
> > http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0.29.patch
> 
> I'm seeing a cpu distribution problem running this on my P4 box.

> With 2.6.21-rc3, X/Gforce maintain their ~50% cpu (remain smooth), and 
> the encoders (100%cpu bound) get whats left when Amarok isn't eating 
> it.
> 
> I plunked the above patch into plain 2.6.21-rc3 and retested to 
> eliminate other mm tree differences, and it's repeatable.  The nice 5 
> cpu hogs always receive considerably more that the nice 0 sleepers.

hm. Do you get the same problem on UP too? (i.e. let's eliminate any 
SMP/HT artifacts)

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 12:10   ` Ingo Molnar
@ 2007-03-11 12:20     ` Mike Galbraith
  2007-03-11 21:18       ` Mike Galbraith
  2007-03-12  7:22     ` Mike Galbraith
  1 sibling, 1 reply; 91+ messages in thread
From: Mike Galbraith @ 2007-03-11 12:20 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Con Kolivas, linux kernel mailing list, ck list, Andrew Morton

On Sun, 2007-03-11 at 13:10 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
> 
> > > Full patch for 2.6.21-rc3-mm2: 
> > > http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0.29.patch
> > 
> > I'm seeing a cpu distribution problem running this on my P4 box.
> 
> > With 2.6.21-rc3, X/Gforce maintain their ~50% cpu (remain smooth), and 
> > the encoders (100%cpu bound) get whats left when Amarok isn't eating 
> > it.
> > 
> > I plunked the above patch into plain 2.6.21-rc3 and retested to 
> > eliminate other mm tree differences, and it's repeatable.  The nice 5 
> > cpu hogs always receive considerably more that the nice 0 sleepers.
> 
> hm. Do you get the same same problem on UP too? (i.e. lets eliminate any 
> SMP/HT artifacts)

I'll boot up nosmp and report back (but now it's time to take Opa to the
Gasthaus for his Sunday afternoon brewskies;)

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 11:39 ` Mike Galbraith
  2007-03-11 11:48   ` Con Kolivas
  2007-03-11 12:10   ` Ingo Molnar
@ 2007-03-11 14:32   ` Gene Heskett
  2007-03-12  6:58     ` Radoslaw Szkodzinski
  2 siblings, 1 reply; 91+ messages in thread
From: Gene Heskett @ 2007-03-11 14:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mike Galbraith, Con Kolivas, ck list, Andrew Morton, Ingo Molnar

On Sunday 11 March 2007, Mike Galbraith wrote:
>Hi Con,
>
>On Sun, 2007-03-11 at 14:57 +1100, Con Kolivas wrote:
>> What follows this email is a patch series for the latest version of
>> the RSDL cpu scheduler (ie v0.29). I have addressed all bugs that I am
>> able to reproduce in this version so if some people would be kind
>> enough to test if there are any hidden bugs or oops lurking, it would
>> be nice to know in anticipation of putting this back in -mm. Thanks.
>>
>> Full patch for 2.6.21-rc3-mm2:
>> http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0
>>.29.patch
>
>I'm seeing a cpu distribution problem running this on my P4 box.
>
>Scenario:
>listening to music collection (mp3) via Amarok.  Enable Amarok
>visualization gforce, and size such that X and gforce each use ~50% cpu.
>Start rip/encode of new CD with grip/lame encoder.  Lame is set to use
>both cpus, at nice 5.  Once the encoders start, they receive
>considerable more cpu than nice 0 X/Gforce, taking ~120% and leaving the
>remaining 80% for X/Gforce and Amarok (when it updates it's ~12k entry
>database) to squabble over.
>
>With 2.6.21-rc3,  X/Gforce maintain their ~50% cpu (remain smooth), and
>the encoders (100%cpu bound) get whats left when Amarok isn't eating it.
>
>I plunked the above patch into plain 2.6.21-rc3 and retested to
>eliminate other mm tree differences, and it's repeatable.  The nice 5
>cpu hogs always receive considerably more that the nice 0 sleepers.
>
>	-Mike

Just to comment, I've been running one of the patches between 20-ck1 and 
this latest one, which is building as I type, but I also run gkrellm 
here, version 2.2.9.

Since I have been running this middle-of-the-series patch, something is 
killing gkrellm about once a day, and there is nothing in the logs to 
indicate a problem.  I see a blink out of the corner of my eye, and it's 
gone.  And it always starts right back up from a kmenu click.

No idea if anyone else is experiencing this or not.

-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
You scratch my tape, and I'll scratch yours.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 12:20     ` Mike Galbraith
@ 2007-03-11 21:18       ` Mike Galbraith
  0 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-11 21:18 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Con Kolivas, linux kernel mailing list, ck list, Andrew Morton

On Sun, 2007-03-11 at 13:20 +0100, Mike Galbraith wrote:

> I'll boot up nosmp and report back

Hohum.  nosmp doesn't boot (locks after ide [bla] IRQ 14), will
recompile UP in the A.M. and try again.


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 14:32   ` Gene Heskett
@ 2007-03-12  6:58     ` Radoslaw Szkodzinski
  2007-03-12 11:16       ` Gene Heskett
  0 siblings, 1 reply; 91+ messages in thread
From: Radoslaw Szkodzinski @ 2007-03-12  6:58 UTC (permalink / raw)
  To: Gene Heskett
  Cc: linux-kernel, Mike Galbraith, Con Kolivas, ck list,
	Andrew Morton, Ingo Molnar

On 3/11/07, Gene Heskett <gene.heskett@gmail.com> wrote:
> On Sunday 11 March 2007, Mike Galbraith wrote:
>
> Just to comment, I've been running one of the patches between 20-ck1 and
> this latest one, which is building as I type, but I also run gkrellm
> here, version 2.2.9.
>
> Since I have been running this middle of this series patch, something is
> killing gkrellm about once a day, and there is nothing in the logs to
> indicate a problem.  I see a blink out of the corner of my eye, and its
> gone.  And it always starts right back up from a kmenu click.
>
> No idea if anyone else is experiencing this or not.
>
> --
> Cheers, Gene

I've had such an issue with 0.20 or something. Sometimes, the
xfce4-panel would disappear (die) when I displayed its menu.
Very rare issue.

It doesn't happen with 0.28 anyway. :-) That version looks really good, though
I'll update to 0.30.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-11 12:10   ` Ingo Molnar
  2007-03-11 12:20     ` Mike Galbraith
@ 2007-03-12  7:22     ` Mike Galbraith
  2007-03-12  7:48       ` Con Kolivas
  1 sibling, 1 reply; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12  7:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Con Kolivas, linux kernel mailing list, ck list, Andrew Morton

On Sun, 2007-03-11 at 13:10 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
> 
> > > Full patch for 2.6.21-rc3-mm2: 
> > > http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0.29.patch
> > 
> > I'm seeing a cpu distribution problem running this on my P4 box.
> 
> > With 2.6.21-rc3, X/Gforce maintain their ~50% cpu (remain smooth), and 
> > the encoders (100%cpu bound) get whats left when Amarok isn't eating 
> > it.
> > 
> > I plunked the above patch into plain 2.6.21-rc3 and retested to 
> > eliminate other mm tree differences, and it's repeatable.  The nice 5 
> > cpu hogs always receive considerably more that the nice 0 sleepers.
> 
> hm. Do you get the same same problem on UP too? (i.e. lets eliminate any 
> SMP/HT artifacts)

Behavior is slightly different with a UP kernel.  Neither encoder
receives more cpu than X, but they each still receive more than gforce.
The distribution of X/Gforce vs lame/lame averages out, by eyeball, to
roughly 50:50.

I noticed Con posted an accounting fix, and applied it.  No change.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  7:22     ` Mike Galbraith
@ 2007-03-12  7:48       ` Con Kolivas
  2007-03-12  8:29         ` Con Kolivas
  2007-03-12  8:44         ` Mike Galbraith
  0 siblings, 2 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12  7:48 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Monday 12 March 2007 18:22, Mike Galbraith wrote:
> On Sun, 2007-03-11 at 13:10 +0100, Ingo Molnar wrote:
> > * Mike Galbraith <efault@gmx.de> wrote:
> > > > Full patch for 2.6.21-rc3-mm2:
> > > > http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-
> > > >0.29.patch
> > >
> > > I'm seeing a cpu distribution problem running this on my P4 box.
> > >
> > > With 2.6.21-rc3, X/Gforce maintain their ~50% cpu (remain smooth), and
> > > the encoders (100%cpu bound) get whats left when Amarok isn't eating
> > > it.
> > >
> > > I plunked the above patch into plain 2.6.21-rc3 and retested to
> > > eliminate other mm tree differences, and it's repeatable.  The nice 5
> > > cpu hogs always receive considerably more that the nice 0 sleepers.
> >
> > hm. Do you get the same same problem on UP too? (i.e. lets eliminate any
> > SMP/HT artifacts)
>
> Behavior is slightly different with a UP kernel.  Neither encoder
> receives more cpu than X, but they each still receive more than gforce.
> The distribution of X/Gforce vs lame/lame averages per eyeball to
> roughly ~50:50.
>
> I noticed Con posted an accounting fix, and applied it.  No change.

So the lames are nice 5, which means they should receive 75% of the cpu that 
nice 0 tasks receive, so they should get 43% of the cpu...

Just a couple of questions:

The X/Gforce case: do they alternate cpu between them? By that I mean, when 
they're the only thing running, does the cpu load summate to 1 or does it 
summate to 2?

Gforce presumably is a 3d visualisation? Do you use one of the graphics card 
drivers listed below that use yield?

,----[grep -r sched_yield mesa]
| mesa/mesa/src/mesa/drivers/dri/r300/radeon_ioctl.c:       sched_yield();
| mesa/mesa/src/mesa/drivers/dri/i915tex/intel_batchpool.c:      sched_yield();
| mesa/mesa/src/mesa/drivers/dri/i915tex/intel_batchbuffer.c:         sched_yield();
| mesa/mesa/src/mesa/drivers/dri/common/vblank.h:#include <sched.h>   /* for sched_yield() */
| mesa/mesa/src/mesa/drivers/dri/common/vblank.h:#include <sched.h>   /* for sched_yield() */
| mesa/mesa/src/mesa/drivers/dri/common/vblank.h:      sched_yield();  \
| mesa/mesa/src/mesa/drivers/dri/unichrome/via_ioctl.c:      sched_yield();
| mesa/mesa/src/mesa/drivers/dri/i915/intel_ioctl.c:     sched_yield();
| mesa/mesa/src/mesa/drivers/dri/r200/r200_ioctl.c:       sched_yield();
`----

Thanks

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  7:48       ` Con Kolivas
@ 2007-03-12  8:29         ` Con Kolivas
  2007-03-12  8:55           ` Mike Galbraith
  2007-03-12  8:44         ` Mike Galbraith
  1 sibling, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-12  8:29 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Monday 12 March 2007 18:48, Con Kolivas wrote:
> On Monday 12 March 2007 18:22, Mike Galbraith wrote:
> > On Sun, 2007-03-11 at 13:10 +0100, Ingo Molnar wrote:
> > > * Mike Galbraith <efault@gmx.de> wrote:
> > > > > Full patch for 2.6.21-rc3-mm2:
> > > > > http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsd
> > > > >l- 0.29.patch
> > > >
> > > > I'm seeing a cpu distribution problem running this on my P4 box.
> > > >
> > > > With 2.6.21-rc3, X/Gforce maintain their ~50% cpu (remain smooth),
> > > > and the encoders (100%cpu bound) get whats left when Amarok isn't
> > > > eating it.
> > > >
> > > > I plunked the above patch into plain 2.6.21-rc3 and retested to
> > > > eliminate other mm tree differences, and it's repeatable.  The nice 5
> > > > cpu hogs always receive considerably more that the nice 0 sleepers.
> > >
> > > hm. Do you get the same same problem on UP too? (i.e. lets eliminate
> > > any SMP/HT artifacts)
> >
> > Behavior is slightly different with a UP kernel.  Neither encoder
> > receives more cpu than X, but they each still receive more than gforce.
> > The distribution of X/Gforce vs lame/lame averages per eyeball to
> > roughly ~50:50.
> >
> > I noticed Con posted an accounting fix, and applied it.  No change.
>
> So the lames are nice 5 which means they should receive 75% of the cpu that
> nice 0 tasks receive so they should get 43% of the cpu...
>
> Just a couple of questions;
>
> The X/Gforce case; do they alternate cpu between them? By that I mean when
> they're the only thing running does the cpu load summate to 1 or does it
> summate to 2?

I'll save you the trouble. I just checked myself and indeed the load is only 
1. What this means is that although there are 2 tasks running, only one is 
running at any time, making a total load of 1. So, if we add two other tasks 
that add 2 more to the load, the total load is 3. However, if we weight the 
other two tasks at nice 5, they only add .75 each to the load, making a 
weighted total of 2.5. This means that X+Gforce together should get a total 
of 1/2.5 or 40% of the overall cpu. That sounds like exactly what you're 
describing is happening.
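
Replaying that arithmetic as a tiny userspace program, for anyone who wants to
plug in a different mix (purely illustrative; the 0.75 nice-5 weight is the
figure quoted above, not something read out of the RSDL source):

#include <stdio.h>

int main(void)
{
	/*
	 * X+Gforce behave as a single runnable task at nice 0 (weight 1.0);
	 * the two nice-5 lame encoders are assumed to weigh 0.75 each.
	 */
	const char *names[] = { "X+Gforce", "lame #1", "lame #2" };
	double weight[] = { 1.00, 0.75, 0.75 };
	double total = 0.0;
	int i;

	for (i = 0; i < 3; i++)
		total += weight[i];

	for (i = 0; i < 3; i++)
		printf("%-9s %4.0f%%\n", names[i], 100.0 * weight[i] / total);

	return 0;	/* prints 40% for X+Gforce and 30% for each encoder */
}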

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  7:48       ` Con Kolivas
  2007-03-12  8:29         ` Con Kolivas
@ 2007-03-12  8:44         ` Mike Galbraith
  1 sibling, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12  8:44 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Mon, 2007-03-12 at 18:48 +1100, Con Kolivas wrote:
> 
> Just a couple of questions;
> 
> The X/Gforce case; do they alternate cpu between them? By that I mean when 
> they're the only thing running does the cpu load summate to 1 or does it 
> summate to 2?

They're each on their own cpu (sibling).  Oh, you mean does one wake the
other?  If so, yeah, I believe so.  I instrumented wakeups a (long)
while back, looking into keeping heavy cpu visualizations smooth, and
iirc, X was waking it.

> Gforce presumably is a 3d visualisation? Do you use one of the graphics card 
> drivers listed that uses yield?

No, GL/DRI here.  I'm using a Radeon X850Pro (R480), and for GL/DRI I'd
have to load the proprietary driver.

	-Mike




^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  8:29         ` Con Kolivas
@ 2007-03-12  8:55           ` Mike Galbraith
  2007-03-12  9:22             ` Con Kolivas
  0 siblings, 1 reply; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12  8:55 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Mon, 2007-03-12 at 19:29 +1100, Con Kolivas wrote:

> I'll save you the trouble. I just checked myself and indeed the load is only 
> 1. What this means is that although there are 2 tasks running, only one is 
> running at any time making a total load of 1. So, if we add two other tasks 
> that add 2 more to the load the total load is 3. However if we weight the 
> other two tasks at nice 5, they only add .75 each to the load making a 
> weighted total of 2.5. This means that X+Gforce together should get a total 
> of 1/2.5 or 40% of the overall cpu. That sounds like exactly what you're 
> describing is happening.

Hmm.  So... anything that's client/server is going to suffer horribly
unless niced tasks are niced all the way down to 19?

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  8:55           ` Mike Galbraith
@ 2007-03-12  9:22             ` Con Kolivas
  2007-03-12  9:38               ` Mike Galbraith
  2007-03-12  9:38               ` Xavier Bestel
  0 siblings, 2 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12  9:22 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Monday 12 March 2007 19:55, Mike Galbraith wrote:
> On Mon, 2007-03-12 at 19:29 +1100, Con Kolivas wrote:
> > I'll save you the trouble. I just checked myself and indeed the load is
> > only 1. What this means is that although there are 2 tasks running, only
> > one is running at any time making a total load of 1. So, if we add two
> > other tasks that add 2 more to the load the total load is 3. However if
> > we weight the other two tasks at nice 5, they only add .75 each to the
> > load making a weighted total of 2.5. This means that X+Gforce together
> > should get a total of 1/2.5 or 40% of the overall cpu. That sounds like
> > exactly what you're describing is happening.
>
> Hmm.  So... anything that's client/server is going to suffer horribly
> unless niced tasks are niced all the way down to 19?

Fortunately most client/server models don't usually have mutually exclusive cpu 
use like this X case. There are many things about X that are still a little 
(/me tries to think of a relatively neutral term)... wanting. :(

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  9:22             ` Con Kolivas
@ 2007-03-12  9:38               ` Mike Galbraith
  2007-03-12 10:27                 ` Con Kolivas
  2007-03-12  9:38               ` Xavier Bestel
  1 sibling, 1 reply; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12  9:38 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Mon, 2007-03-12 at 20:22 +1100, Con Kolivas wrote:
> On Monday 12 March 2007 19:55, Mike Galbraith wrote:
> > On Mon, 2007-03-12 at 19:29 +1100, Con Kolivas wrote:
> > > I'll save you the trouble. I just checked myself and indeed the load is
> > > only 1. What this means is that although there are 2 tasks running, only
> > > one is running at any time making a total load of 1. So, if we add two
> > > other tasks that add 2 more to the load the total load is 3. However if
> > > we weight the other two tasks at nice 5, they only add .75 each to the
> > > load making a weighted total of 2.5. This means that X+Gforce together
> > > should get a total of 1/2.5 or 40% of the overall cpu. That sounds like
> > > exactly what you're describing is happening.
> >
> > Hmm.  So... anything that's client/server is going to suffer horribly
> > unless niced tasks are niced all the way down to 19?
> 
> Fortunately most client server models dont usually have mutually exclusive cpu 
> use like this X case. There are many things about X that are still a little 
> (/me tries to think of a relatively neutral term)... wanting. :(

But the reality of X is what we have to deal with.

This scheduler seems to close the corner cases of the interactivity
estimator, but this "any background load is palpable" thing is decidedly
detrimental to interactive feel.

When I looked into keeping interactive tasks responsive, I came to the
conclusion that I just couldn't get there from here across the full
spectrum of cpu usage without a scheduler hint.  Interactive feel is
absolutely dependent upon unfairness in many cases, and targeting that
unfairness gets it right where heuristics sometimes can't.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  9:22             ` Con Kolivas
  2007-03-12  9:38               ` Mike Galbraith
@ 2007-03-12  9:38               ` Xavier Bestel
  2007-03-12 10:34                 ` Con Kolivas
  1 sibling, 1 reply; 91+ messages in thread
From: Xavier Bestel @ 2007-03-12  9:38 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Mike Galbraith, Ingo Molnar, linux kernel mailing list, ck list,
	Andrew Morton

On Mon, 2007-03-12 at 20:22 +1100, Con Kolivas wrote:
> On Monday 12 March 2007 19:55, Mike Galbraith wrote:
> > Hmm.  So... anything that's client/server is going to suffer horribly
> > unless niced tasks are niced all the way down to 19?
> 
> Fortunately most client server models dont usually have mutually exclusive cpu 
> use like this X case. There are many things about X that are still a little 
> (/me tries to think of a relatively neutral term)... wanting. :(

I'd say the problem is less with X than with Xlib, which is heavily
round-trip-based. Fortunately XCB (its successor) seeks to be more
asynchronous.

	Xav



^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  9:38               ` Mike Galbraith
@ 2007-03-12 10:27                 ` Con Kolivas
  2007-03-12 10:57                   ` Mike Galbraith
  0 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 10:27 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Monday 12 March 2007 20:38, Mike Galbraith wrote:
> On Mon, 2007-03-12 at 20:22 +1100, Con Kolivas wrote:
> > On Monday 12 March 2007 19:55, Mike Galbraith wrote:
> > > On Mon, 2007-03-12 at 19:29 +1100, Con Kolivas wrote:
> > > > I'll save you the trouble. I just checked myself and indeed the load
> > > > is only 1. What this means is that although there are 2 tasks
> > > > running, only one is running at any time making a total load of 1.
> > > > So, if we add two other tasks that add 2 more to the load the total
> > > > load is 3. However if we weight the other two tasks at nice 5, they
> > > > only add .75 each to the load making a weighted total of 2.5. This
> > > > means that X+Gforce together should get a total of 1/2.5 or 40% of
> > > > the overall cpu. That sounds like exactly what you're describing is
> > > > happening.
> > >
> > > Hmm.  So... anything that's client/server is going to suffer horribly
> > > unless niced tasks are niced all the way down to 19?
> >
> > Fortunately most client server models dont usually have mutually
> > exclusive cpu use like this X case. There are many things about X that
> > are still a little (/me tries to think of a relatively neutral term)...
> > wanting. :(
>
> But the reality of X is what we have to deal with.

And unix.

> This scheduler seems to close the corner cases of the interactivity
> estimator, but this "any background load is palpable" thing is decidedly
> detrimental to interactive feel.

Now I think you're getting carried away because of your expectations from the 
previous scheduler and its woefully unfair treatment towards interactive 
tasks. Look at how you're loading up your poor P4 even with HT. You throw 2 
cpu hogs only gently niced at it on top of your interactive tasks. If you're 
happy to nice them +5, why not more? And you know as well as anyone that the 
2nd logical core only gives you ~25% more cpu power overall, so you're asking 
too much of it. Let's not even talk about how lovely this will (not) be once 
SMT nice gets killed off come 2.6.21 and nice does less if, "buyer beware" in 
your own words, you chose to enable HT.

> When I looked into keeping interactive tasks responsive, I came to the
> conclusion that I just couldn't get there from here across the full
> spectrum of cpu usage without a scheduler hint.  Interactive feel is
> absolutely dependent upon unfairness in many cases, and targeting that
> unfairness gets it right where heuristics sometimes can't.

See above. Your expectations of what you should be able to do are simply 
skewed. Find what cpu balance you loved in the old one (and I believe it 
wasn't that much more cpu in favour of X if I recall correctly) and simply 
change the nice setting on your lame encoder - since you're already setting 
one anyway.

We simply cannot continue arguing that we should dish out unfairness in any 
manner any more. It will always come back and bite us where we don't want it. 
We are getting good interactive response with a fair scheduler yet you seem 
intent on overloading it to find fault with it.

> 	-Mike

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  9:38               ` Xavier Bestel
@ 2007-03-12 10:34                 ` Con Kolivas
  2007-03-12 16:38                   ` Kasper Sandberg
  0 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 10:34 UTC (permalink / raw)
  To: Xavier Bestel
  Cc: Mike Galbraith, Ingo Molnar, linux kernel mailing list, ck list,
	Andrew Morton

On Monday 12 March 2007 20:38, Xavier Bestel wrote:
> On Mon, 2007-03-12 at 20:22 +1100, Con Kolivas wrote:
> > On Monday 12 March 2007 19:55, Mike Galbraith wrote:
> > > Hmm.  So... anything that's client/server is going to suffer horribly
> > > unless niced tasks are niced all the way down to 19?
> >
> > Fortunately most client server models dont usually have mutually
> > exclusive cpu use like this X case. There are many things about X that
> > are still a little (/me tries to think of a relatively neutral term)...
> > wanting. :(
>
> I'd say the problem is less with X than with Xlib, which is heavily
> round-trip-based. Fortunately XCB (its successor) seeks to be more
> asynchronous.

Yes, I recall a talk by Keith Packard on Xorg development, and how a heck of a 
lot of time spent spinning by X (?Xlib) for no damn good reason was the 
number one thing that made X suck. It was basically silly to try and fix 
that at the cpu scheduler level since it needed to be corrected in X, and it 
was being actively addressed. So we should stop trying to write cpu schedulers 
for X.

> 	Xav

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 10:27                 ` Con Kolivas
@ 2007-03-12 10:57                   ` Mike Galbraith
  2007-03-12 11:08                     ` Ingo Molnar
  0 siblings, 1 reply; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12 10:57 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Mon, 2007-03-12 at 21:27 +1100, Con Kolivas wrote:
> On Monday 12 March 2007 20:38, Mike Galbraith wrote:
> > 
> Now I think you're getting carried away because of your expectations from the 
> previous scheduler and its woefully unfair treatment towards interactive 
> tasks. Look at how you're loading up your poor P4 even with HT. You throw 2 
> cpu hogs only gently niced at it on top of your interactive tasks. If you're 
> happy to nice them +5, why not more? And you know as well as anyone that the 
> 2nd logical core only gives you ~25% more cpu power overall so you're asking 
> too much of it. Let's not even talk about how lovely this will (not) be once 
> SMT nice gets killed off come 2.6.21 and nice does less if "buyer beware" you 
> chose to enable HT in your own words.

The test scenario was one any desktop user might do, with every
expectation that the responsiveness of the interactive applications would
remain intact.  I understand the concepts here, Con, and I'm not knocking
your scheduler.  I find it to be a step forward on the one hand, but a step
backward on the other.

Tossing in the SMT nice comment was utter bullshit.  All kernels tested
were missing SMT nice.

> > When I looked into keeping interactive tasks responsive, I came to the
> > conclusion that I just couldn't get there from here across the full
> > spectrum of cpu usage without a scheduler hint.  Interactive feel is
> > absolutely dependent upon unfairness in many cases, and targeting that
> > unfairness gets it right where heuristics sometimes can't.
> 
> See above. Your expectations of what you should be able to do are simply 
> skewed. Find what cpu balance you loved in the old one (and I believe it 
> wasn't that much more cpu in favour of X if I recall correctly) and simply 
> change the nice setting on your lame encoder - since you're already setting 
> one anyway.
> 
> We simply cannot continue arguing that we should dish out unfairness in any 
> manner any more. It will always come back and bite us where we don't want it.

Unless you target accurately.

> We are getting good interactive response with a fair scheduler yet you seem 
> intent on overloading it to find fault with it.

I'm not trying to find fault, I'm TESTING AND REPORTING.  Was.

	bye,

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 10:57                   ` Mike Galbraith
@ 2007-03-12 11:08                     ` Ingo Molnar
  2007-03-12 11:23                       ` Con Kolivas
  2007-03-12 11:25                       ` Mike Galbraith
  0 siblings, 2 replies; 91+ messages in thread
From: Ingo Molnar @ 2007-03-12 11:08 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton


* Mike Galbraith <efault@gmx.de> wrote:

> The test scenario was one any desktop user might do with every 
> expectation responsiveness of the interactive application remain 
> intact. I understand the concepts here Con, and I'm not knocking your 
> scheduler. I find it to be a step forward on the one hand, but a step 
> backward on the other.

ok, then that step backward needs to be fixed.

> > We are getting good interactive response with a fair scheduler yet 
> > you seem intent on overloading it to find fault with it.
> 
> I'm not trying to find fault, I'm TESTING AND REPORTING.  Was.

Con, could you please take Mike's report of this regression seriously 
and address it? Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12  6:58     ` Radoslaw Szkodzinski
@ 2007-03-12 11:16       ` Gene Heskett
  2007-03-12 11:49         ` Gene Heskett
  0 siblings, 1 reply; 91+ messages in thread
From: Gene Heskett @ 2007-03-12 11:16 UTC (permalink / raw)
  To: linux-kernel
  Cc: Radoslaw Szkodzinski, Mike Galbraith, Con Kolivas, ck list,
	Andrew Morton, Ingo Molnar

On Monday 12 March 2007, Radoslaw Szkodzinski wrote:
>On 3/11/07, Gene Heskett <gene.heskett@gmail.com> wrote:
>> On Sunday 11 March 2007, Mike Galbraith wrote:
>>
>> Just to comment, I've been running one of the patches between 20-ck1
>> and this latest one, which is building as I type, but I also run
>> gkrellm here, version 2.2.9.
>>
>> Since I have been running this middle of this series patch, something
>> is killing gkrellm about once a day, and there is nothing in the logs
>> to indicate a problem.  I see a blink out of the corner of my eye, and
>> its gone.  And it always starts right back up from a kmenu click.
>>
>> No idea if anyone else is experiencing this or not.
>>
>> --
>> Cheers, Gene
>
>I've had such an issue with 0.20 or something. Sometimes, the
>xfce4-panel would disappear (die) when I displayed its menu.
>Very rare issue.
>
>Doesn't happen with 0.28 anyway. :-) Which looks really good, though
>I'll update to 0.30.

And I didn't see it for the few hours I was booted to 21-rc3-rsdl-0.29, 
but tar sure went berzackers.

To Con, I knew 2.6.20 worked with your earlier patches, so rather than 
revert all the way, I just rebooted to 2.6.20.2-rsdl-0.30 and I'm going 
to fire off another backup.  I suspect it will work, but will advise the 
next time I wake up.



-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
The hardest thing is to disguise your feelings when you put a lot of
relatives on the train for home.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 11:08                     ` Ingo Molnar
@ 2007-03-12 11:23                       ` Con Kolivas
  2007-03-12 13:48                         ` Theodore Tso
  2007-03-12 14:34                         ` Mike Galbraith
  2007-03-12 11:25                       ` Mike Galbraith
  1 sibling, 2 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 11:23 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Monday 12 March 2007 22:08, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
> > The test scenario was one any desktop user might do with every
> > expectation responsiveness of the interactive application remain
> > intact. I understand the concepts here Con, and I'm not knocking your
> > scheduler. I find it to be a step forward on the one hand, but a step
> > backward on the other.
>
> ok, then that step backward needs to be fixed.
>
> > > We are getting good interactive response with a fair scheduler yet
> > > you seem intent on overloading it to find fault with it.
> >
> > I'm not trying to find fault, I'm TESTING AND REPORTING.  Was.
>
> Con, could you please take Mike's report of this regression seriously
> and address it? Thanks,

Sure. 

Mike, the cpu is being proportioned out perfectly according to fairness, as I 
mentioned in the prior email, yet X is getting the lower latency scheduling. 
I'm not sure, within the bounds of fairness, what more you would have happen to 
your liking with this test case?

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 11:08                     ` Ingo Molnar
  2007-03-12 11:23                       ` Con Kolivas
@ 2007-03-12 11:25                       ` Mike Galbraith
  1 sibling, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12 11:25 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Con Kolivas, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Mon, 2007-03-12 at 12:08 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
> 
> > The test scenario was one any desktop user might do with every 
> > expectation responsiveness of the interactive application remain 
> > intact. I understand the concepts here Con, and I'm not knocking your 
> > scheduler. I find it to be a step forward on the one hand, but a step 
> > backward on the other.
> 
> ok, then that step backward needs to be fixed.

btw, this scenario wasn't invented by me, it came from the _every single
day_ usage of my best friend since his conversion to linux (he's in love
now;) a month ago.  After I un-crippled all of the multimedia apps that
came with our distribution, this is the thing he has been doing most.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 11:16       ` Gene Heskett
@ 2007-03-12 11:49         ` Gene Heskett
  2007-03-12 11:58           ` Con Kolivas
  0 siblings, 1 reply; 91+ messages in thread
From: Gene Heskett @ 2007-03-12 11:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Radoslaw Szkodzinski, Mike Galbraith, Con Kolivas, ck list,
	Andrew Morton, Ingo Molnar

On Monday 12 March 2007, Gene Heskett wrote:
>On Monday 12 March 2007, Radoslaw Szkodzinski wrote:
>>On 3/11/07, Gene Heskett <gene.heskett@gmail.com> wrote:
>>> On Sunday 11 March 2007, Mike Galbraith wrote:
>>>
>>> Just to comment, I've been running one of the patches between 20-ck1
>>> and this latest one, which is building as I type, but I also run
>>> gkrellm here, version 2.2.9.
>>>
>>> Since I have been running this middle of this series patch, something
>>> is killing gkrellm about once a day, and there is nothing in the logs
>>> to indicate a problem.  I see a blink out of the corner of my eye,
>>> and its gone.  And it always starts right back up from a kmenu click.
>>>
>>> No idea if anyone else is experiencing this or not.
>>>
>>> --
>>> Cheers, Gene
>>
>>I've had such an issue with 0.20 or something. Sometimes, the
>>xfce4-panel would disappear (die) when I displayed its menu.
>>Very rare issue.
>>
>>Doesn't happen with 0.28 anyway. :-) Which looks really good, though
>>I'll update to 0.30.
>
>And I didn't see it for the few hours I was booted to 21-rc3-rsdl-0.29,
>but tar sure went berzackers.
>
>To Con, I knew 2.6.20 worked with your earlier patches, so rather than
>revert all the way, I just rebooted to 2.6.20.2-rdsl-0.30 and I'm going
>to fire off another backup.  I suspect it will work, but will advise the
>next time I wake up.

After posting the above, I thought maybe I'd hit a target in the middle 
and build a 2.6.20.2, with your -0.30 patch, but...

I'm going to have to build a 2.6.20.2, because with the rsdl-0.30 patch, 
it's going to do a level 2 on my /usr/movies directory, which hasn't been 
touched in 90 days and has about 8.1GB in it according to du, and it's 
going to do nearly all of it.  It shouldn't be anything but a directory 
listing file. But this is what amstatus is reporting:
coyote:/usr/movies            2     7271m dumping      793m ( 10.91%) (7:26:00)

And it's also reporting far more data than exists, it seems. So is du, 
for /var, which might have 2 gigs: it's claiming 3.7!

Honest folks, I'm not smoking anything, I quit 18 years ago.  Back to bed 
while this one bombs out too.

-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Operative: "I'm a monster.  What I do is evil, I've no illusions about 
that.  
But it must be done."
				--"Serenity"

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 11:49         ` Gene Heskett
@ 2007-03-12 11:58           ` Con Kolivas
  2007-03-12 16:38             ` Gene Heskett
  0 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 11:58 UTC (permalink / raw)
  To: Gene Heskett
  Cc: linux-kernel, Radoslaw Szkodzinski, Mike Galbraith, ck list,
	Andrew Morton, Ingo Molnar

On 12/03/07, Gene Heskett <gene.heskett@gmail.com> wrote:
> On Monday 12 March 2007, Gene Heskett wrote:
> >To Con, I knew 2.6.20 worked with your earlier patches, so rather than
> >revert all the way, I just rebooted to 2.6.20.2-rdsl-0.30 and I'm going
> >to fire off another backup.  I suspect it will work, but will advise the
> >next time I wake up.
>
> After posting the above, I thought maybe I'd hit a target in the middle
> and build a 2.6.20.2, with your -0.30 patch, but...
>
> I'm going to have to build a 2.6.20.2, because with the rdsl-0.30 patch,
> its going to do a level 2 on my /usr/movies directory, which hasn't been
> touched in 90 days and has about 8.1GB in it according to du, and its
> going to do nearly all of it.  It shouldn't be anything but a directory
> listing file. But this is what amstatus is reporting:
> coyote:/usr/movies            2     7271m dumping      793m ( 10.91%)
> (7:26:00)
>
> And its also reporting far more data than exists it seems. As is du,
> for /var, which might have 2 gigs, its claiming 3.7!
>
> Honest folks, I'm not smoking anything, I quit 18 years ago.  Back to bed
> while this one bombs out too.
>
> --
> Cheers, Gene

Gene, you said your last good kernel was 2.6.20-based. I don't see a
good reason even to use 2.6.20.2 as a base given that information.

--
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 11:23                       ` Con Kolivas
@ 2007-03-12 13:48                         ` Theodore Tso
  2007-03-12 18:09                           ` Con Kolivas
  2007-03-12 14:34                         ` Mike Galbraith
  1 sibling, 1 reply; 91+ messages in thread
From: Theodore Tso @ 2007-03-12 13:48 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Mon, Mar 12, 2007 at 10:23:06PM +1100, Con Kolivas wrote:
> > > > We are getting good interactive response with a fair scheduler yet
> > > > you seem intent on overloading it to find fault with it.
> > >
> > > I'm not trying to find fault, I'm TESTING AND REPORTING.  Was.
> >
> > Con, could you please take Mike's report of this regression seriously
> > and address it? Thanks,
> 
> Sure. 
> 
> Mike the cpu is being proportioned out perfectly according to fairness as I 
> mentioned in the prior email, yet X is getting the lower latency scheduling. 
> I'm not sure within the bounds of fairness what more would you have happen 
> to your liking with this test case?

Con,

	I think what we're discovering is that a "fair scheduler" is
not going to cut it.  After all, running X and ripping CD's and MP3
encoding them is not exactly an esoteric use case.  And like it or
not, "nice" defaults to 4.

	I suspect Mike is right; the only way to deal with this
regression is some scheduler hints from the desktop subsystem (i.e., X
and friends).  Yes, X is broken, it's horrible, yadda, yadda, yadda.
It's also what everyone is using, and it's a fact of life.  Just like
we occasionally have had to work around ISA braindamage, and x86
architecture braindamage, and ACPI braindamage all inflicted on us by
Intel.  This is just life, and sometimes the clean, elegant solution
is not enough.

	Regards,

						- Ted

P.S.  The other solution that might perhaps work is to change the meaning
of what the nice value does.  If we consider "nice" to be the scheduler
hint (from the other direction), then maybe any niced process should only
run a very tiny amount if there are any non-nice processes ready to run,
with the relative nice values used when two niced processes are competing
for the CPU.....
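
A rough sketch of that policy, purely for illustration (the structure and
helper below are invented, not kernel interfaces): any runnable task at
nice <= 0 is picked ahead of every positively niced one, and only when
nothing un-niced is runnable do the niced tasks compete with each other.

#include <stddef.h>

/* Hypothetical task descriptor for illustration -- not a kernel structure. */
struct hint_task {
	int runnable;
	int nice;		/* -20..19, 0 is the default */
	unsigned int weight;	/* precomputed from nice; higher = more cpu */
};

/*
 * Any runnable task at nice <= 0 wins outright; otherwise the niced tasks
 * compete.  Picking the single heaviest niced task is a crude stand-in for
 * sharing time among them by relative nice.
 */
static struct hint_task *hint_pick_next(struct hint_task *t, size_t n)
{
	struct hint_task *best = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		if (!t[i].runnable)
			continue;
		if (t[i].nice <= 0)
			return &t[i];		/* un-niced always wins */
		if (!best || t[i].weight > best->weight)
			best = &t[i];
	}
	return best;
}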

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 11:23                       ` Con Kolivas
  2007-03-12 13:48                         ` Theodore Tso
@ 2007-03-12 14:34                         ` Mike Galbraith
  2007-03-12 15:26                           ` Linus Torvalds
  2007-03-12 18:49                           ` Con Kolivas
  1 sibling, 2 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12 14:34 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:

> Mike the cpu is being proportioned out perfectly according to fairness as I 
> mentioned in the prior email, yet X is getting the lower latency scheduling. 
> I'm not sure within the bounds of fairness what more would you have happen to 
> your liking with this test case?

It has been said that "perfection is the enemy of good".  The two
interactive tasks receiving 40% cpu while two niced background jobs
receive 60% may well be perfect, but it's damn sure not good.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 14:34                         ` Mike Galbraith
@ 2007-03-12 15:26                           ` Linus Torvalds
  2007-03-12 18:10                             ` Con Kolivas
                                               ` (4 more replies)
  2007-03-12 18:49                           ` Con Kolivas
  1 sibling, 5 replies; 91+ messages in thread
From: Linus Torvalds @ 2007-03-12 15:26 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Andrew Morton



On Mon, 12 Mar 2007, Mike Galbraith wrote:
>
> On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:
> 
> > Mike the cpu is being proportioned out perfectly according to fairness as I 
> > mentioned in the prior email, yet X is getting the lower latency scheduling. 
> > I'm not sure within the bounds of fairness what more would you have happen to 
> > your liking with this test case?
> 
> It has been said that "perfection is the enemy of good".  The two
> interactive tasks receiving 40% cpu while two niced background jobs
> receive 60% may well be perfect, but it's damn sure not good.

Well, the real problem is really "server that works on behalf of somebody 
else".

X is just the worst *practical* example of this, since not only is it the 
most common such server, it's also a case where people see interactive 
issues really easily.

And the problem is that a lot of clients actually end up doing *more* in 
the X server than they do themselves directly. Doing things like showing a 
line of text on the screen is a lot more expensive than just keeping track 
of that line of text, so you end up with the X server easily being marked 
as getting "too much" CPU time, and the clients as being starved for CPU 
time. And then you get bad interactive behaviour.

So "good fairness" really should involve some notion of "work done for 
others". It's just not very easy to do..

			Linus

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 11:58           ` Con Kolivas
@ 2007-03-12 16:38             ` Gene Heskett
  2007-03-12 18:34               ` Gene Heskett
  0 siblings, 1 reply; 91+ messages in thread
From: Gene Heskett @ 2007-03-12 16:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: Con Kolivas, Radoslaw Szkodzinski, Mike Galbraith, ck list,
	Andrew Morton, Ingo Molnar

On Monday 12 March 2007, Con Kolivas wrote:
>On 12/03/07, Gene Heskett <gene.heskett@gmail.com> wrote:
>> On Monday 12 March 2007, Gene Heskett wrote:
>> >To Con, I knew 2.6.20 worked with your earlier patches, so rather
>> > than revert all the way, I just rebooted to 2.6.20.2-rdsl-0.30 and
>> > I'm going to fire off another backup.  I suspect it will work, but
>> > will advise the next time I wake up.
>>
>> After posting the above, I thought maybe I'd hit a target in the
>> middle and build a 2.6.20.2, with your -0.30 patch, but...
>>
>> I'm going to have to build a 2.6.20.2, because with the rdsl-0.30
>> patch, its going to do a level 2 on my /usr/movies directory, which
>> hasn't been touched in 90 days and has about 8.1GB in it according to
>> du, and its going to do nearly all of it.  It shouldn't be anything
>> but a directory listing file. But this is what amstatus is reporting:
>> coyote:/usr/movies            2     7271m dumping      793m ( 10.91%)
>> (7:26:00)
>>
>> And its also reporting far more data than exists it seems. As is du,
>> for /var, which might have 2 gigs, its claiming 3.7!
>>
>> Honest folks, I'm not smoking anything, I quit 18 years ago.  Back to
>> bed while this one bombs out too.
>>
>> --
>> Cheers, Gene
>
>Gene your last good kernel you said was 2.6.20 based. I don't see a
>good reason even to use 2.6.20.2 as a base given that information.
>
I have 2.6.20.1 building now.  I know that 2.6.20-ck1 worked well, so now 
I'm walking forward from 2.6.20, trying to bisect it.  .1 wasn't much of a 
patch, but who knows at this point; I'm not 'the shadow' in a 65-year-old 
radio show.  And it looks like that build is done, so here goes the next 
test.

The worst thing about this is that amanda's database is being hosed 
every time this happens, and it's been 3 runs in a row, in a dumpcycle of 
5, where this has occurred.  I can do one more bad run by pre-clearing 
the vtape +1 that it is going to use each time, because the partition 
being used for vtapes is sitting at about 93% utilization now.  In normal 
life it's about 84%; it is a 175GB partition.  That also is stirring 
around in the old girl's database when I kill stuff she thinks is there, 
but it's also about 3 dumpcycles back and pretty much out of the picture, 
so she will recover in a couple of dumpcycles once I find this, if indeed 
I do.

You've cooked up patches for all this, so if 2.6.20.1 works ok, then I'll try 
your patch on that one.  I tried 2.6.21-rc1, and it bombed too, but I 
just figured that was an -rc1, and we're expected to lose a pint of blood 
at most any -rc1, aren't we? So I didn't give it any great thought and 
reverted till -rc2 came out.  But I ramble, and time's a-wasting.
>--
>-ck

-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Research is to see what everybody else has seen, and think what nobody
else has thought.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 10:34                 ` Con Kolivas
@ 2007-03-12 16:38                   ` Kasper Sandberg
  2007-03-14  2:25                     ` Valdis.Kletnieks
  0 siblings, 1 reply; 91+ messages in thread
From: Kasper Sandberg @ 2007-03-12 16:38 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Xavier Bestel, Mike Galbraith, Ingo Molnar,
	linux kernel mailing list, ck list, Andrew Morton

On Mon, 2007-03-12 at 21:34 +1100, Con Kolivas wrote:
> On Monday 12 March 2007 20:38, Xavier Bestel wrote:
> > On Mon, 2007-03-12 at 20:22 +1100, Con Kolivas wrote:
> > > On Monday 12 March 2007 19:55, Mike Galbraith wrote:
> > > > Hmm.  So... anything that's client/server is going to suffer horribly
> > > > unless niced tasks are niced all the way down to 19?
> > >
> > > Fortunately most client server models dont usually have mutually
> > > exclusive cpu use like this X case. There are many things about X that
> > > are still a little (/me tries to think of a relatively neutral term)...
> > > wanting. :(
> >
> > I'd say the problem is less with X than with Xlib, which is heavily
> > round-trip-based. Fortunately XCB (its successor) seeks to be more
> > asynchronous.
> 
> Yes I recall a talk by Keith Packard on Xorg development and how a heck of a 
> lot of time spent spinning by X (?Xlib) for no damn good reason was the 
> number one thing that made X suck and basically it was silly to try and fix 
> that at the cpu scheduler level since it needed to be corrected in X, and was 
> being actively addressed. So we should stop trying to write cpu schedulers 
> for X.
Excuse me for barging in, but:

With the latest Xorg, Xlib will be using XCB internally, which AFAIK should
help matters a little; furthermore, with the arrival of XCB, things are
bound to change somewhat fast, and with a bit more incentive (as in,
real benefit on the latest kernels), they are bound to change even faster.

And if people upgrading to the newest X (using Xlib with XCB) and applications
being updated can help things out in the kernel, I'd say it's best to push
for that.
> 
> > 	Xav
> 


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 13:48                         ` Theodore Tso
@ 2007-03-12 18:09                           ` Con Kolivas
  0 siblings, 0 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 18:09 UTC (permalink / raw)
  To: Theodore Tso
  Cc: Ingo Molnar, Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tuesday 13 March 2007 00:48, Theodore Tso wrote:
> On Mon, Mar 12, 2007 at 10:23:06PM +1100, Con Kolivas wrote:
> > > > > We are getting good interactive response with a fair scheduler yet
> > > > > you seem intent on overloading it to find fault with it.
> > > >
> > > > I'm not trying to find fault, I'm TESTING AND REPORTING.  Was.
> > >
> > > Con, could you please take Mike's report of this regression seriously
> > > and address it? Thanks,
> >
> > Sure.
> >
> > Mike the cpu is being proportioned out perfectly according to fairness as
> > I mentioned in the prior email, yet X is getting the lower latency
> > scheduling. I'm not sure within the bounds of fairness what more would
> > you have happen to your liking with this test case?
>
> Con,
>
> 	I think what we're discovering is that a "fair scheduler" is
> not going to cut it.  After all, running X and ripping CD's and MP3
> encoding them is not exactly an esoteric use case.  And like it or
> not, "nice" defaults to 4.
>
> 	I suspect Mike is right; the only way to deal with this
> regression is some scheduler hints from the desktop subsystem (i.e., X
> and friends).  Yes, X is broken, it's horrible, yadda, yadda, yadda.
> It's also what everyone is using, and it's a fact of life.  Just like
> we occasionally have had to work around ISA braindamage, and x86
> architecture braindamage, and ACPI braindamage all inflicted on us by
> Intel.  This is just life, and sometimes the clean, elegant solution
> is not enough.

Instead of assuming it's bad, have you tried RSDL for yourself? Mike is using 
2 lame threads for his test case.

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 15:26                           ` Linus Torvalds
@ 2007-03-12 18:10                             ` Con Kolivas
  2007-03-12 19:36                             ` Peter Zijlstra
                                               ` (3 subsequent siblings)
  4 siblings, 0 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 18:10 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Mike Galbraith, Ingo Molnar, linux kernel mailing list, ck list,
	Andrew Morton

On Tuesday 13 March 2007 02:26, Linus Torvalds wrote:
> On Mon, 12 Mar 2007, Mike Galbraith wrote:
> > On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:
> > > Mike the cpu is being proportioned out perfectly according to fairness
> > > as I mentioned in the prior email, yet X is getting the lower latency
> > > scheduling. I'm not sure within the bounds of fairness what more would
> > > you have happen to your liking with this test case?
> >
> > It has been said that "perfection is the enemy of good".  The two
> > interactive tasks receiving 40% cpu while two niced background jobs
> > receive 60% may well be perfect, but it's damn sure not good.
>
> Well, the real problem is really "server that works on behalf of somebody
> else".
>
> X is just the worst *practical* example of this, since not only is it the
> most common such server, it's also a case where people see interactive
> issues really easily.
>
> And the problem is that a lot of clients actually end up doing *more* in
> the X server than they do themselves directly. Doing things like showing a
> line of text on the screen is a lot more expensive than just keeping track
> of that line of text, so you end up with the X server easily being marked
> as getting "too much" CPU time, and the clients as being starved for CPU
> time. And then you get bad interactive behaviour.
>
> So "good fairness" really should involve some notion of "work done for
> others". It's just not very easy to do..

Instead of assuming it's bad, have you tried RSDL for yourself? Mike is using 
2 lame threads for his test case.

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 16:38             ` Gene Heskett
@ 2007-03-12 18:34               ` Gene Heskett
  0 siblings, 0 replies; 91+ messages in thread
From: Gene Heskett @ 2007-03-12 18:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: Con Kolivas, Radoslaw Szkodzinski, Mike Galbraith, ck list,
	Andrew Morton, Ingo Molnar

On Monday 12 March 2007, Gene Heskett wrote:
>On Monday 12 March 2007, Con Kolivas wrote:
>>On 12/03/07, Gene Heskett <gene.heskett@gmail.com> wrote:
>>> On Monday 12 March 2007, Gene Heskett wrote:
>>> >To Con, I knew 2.6.20 worked with your earlier patches, so rather
>>> > than revert all the way, I just rebooted to 2.6.20.2-rdsl-0.30 and
>>> > I'm going to fire off another backup.  I suspect it will work, but
>>> > will advise the next time I wake up.
>>>
>>> After posting the above, I thought maybe I'd hit a target in the
>>> middle and build a 2.6.20.2, with your -0.30 patch, but...
>>>
>>> I'm going to have to build a 2.6.20.2, because with the rdsl-0.30
>>> patch, its going to do a level 2 on my /usr/movies directory, which
>>> hasn't been touched in 90 days and has about 8.1GB in it according to
>>> du, and its going to do nearly all of it.  It shouldn't be anything
>>> but a directory listing file. But this is what amstatus is reporting:
>>> coyote:/usr/movies            2     7271m dumping      793m ( 10.91%)
>>> (7:26:00)
>>>
>>> And its also reporting far more data than exists it seems. As is du,
>>> for /var, which might have 2 gigs, its claiming 3.7!
>>>
>>> Honest folks, I'm not smoking anything, I quit 18 years ago.  Back to
>>> bed while this one bombs out too.
>>>
>>> --
>>> Cheers, Gene
>>
>>Gene your last good kernel you said was 2.6.20 based. I don't see a
>>good reason even to use 2.6.20.2 as a base given that information.
>
>I have 2.6.20.1 building now.  I know that 2.6.20-ck1 worked well, so
> now I'm walking forward from 2.6.20, trying to bisect it.  .1 wasn't much
> of a patch, but who knows at this point; I'm not 'the shadow' in a
> 65-year-old radio show.  And it looks like that build is done, so here
> goes the next test.
>
>The worst thing about this is that amanda's database is being hosed
>every time this happens, and it's been 3 runs in a row, in a dumpcycle of
>5, where this has occurred.  I can do one more bad run by pre-clearing
>the vtape +1 that it is going to use each time, because the partition
>being used for vtapes is sitting at about 93% utilization now.  In normal
>life it's about 84%; it is a 175GB partition.  That also is stirring
>around in the old girl's database when I kill stuff she thinks is there,
>but it's also about 3 dumpcycles back and pretty much out of the picture,
>so she will recover in a couple of dumpcycles once I find this, if
> indeed I do.
>
>You've cooked up patches for all this, so if 2.6.20.1 works ok, then I'll
> try your patch on that one.  I tried 2.6.21-rc1, and it bombed too, but
> I just figured that was an -rc1, and we're expected to lose a pint of
> blood at most any -rc1, aren't we? So I didn't give it any great
> thought and reverted till -rc2 came out.  But I ramble &
> time's a-wasting.
>
For those following this thread, testing is halted momentarily due to a 
bug in my amanda wrapper scripts discovered when I told it to do a flush 
so the next run had a clean slate.  Alan Pearson and I are exchanging 
emails on that.  The script problem, however, is not connected to this; it's 
just that the wrapper needs to be right under all conditions and it 
wasn't.  A few hours' lag here.

Does anybody on the Cc: list need off it?

>>--
>>-ck



-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
"I am ecstatic that some moron re-invented a 1995 windows fuckup."
        -- Alan Cox

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 14:34                         ` Mike Galbraith
  2007-03-12 15:26                           ` Linus Torvalds
@ 2007-03-12 18:49                           ` Con Kolivas
  2007-03-12 19:06                             ` Xavier Bestel
  2007-03-12 20:11                             ` Mike Galbraith
  1 sibling, 2 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 18:49 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tuesday 13 March 2007 01:34, Mike Galbraith wrote:
> On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:
> > Mike the cpu is being proportioned out perfectly according to fairness as
> > I mentioned in the prior email, yet X is getting the lower latency
> > scheduling. I'm not sure within the bounds of fairness what more would
> > you have happen to your liking with this test case?
>
> It has been said that "perfection is the enemy of good".  The two
> interactive tasks receiving 40% cpu while two niced background jobs
> receive 60% may well be perfect, but it's damn sure not good.

Again I think your test is not a valid testcase. Why use two threads for your 
encoding with one cpu? Is that what other dedicated desktop OSs would do?

And let's not lose sight of things with this one testcase.

RSDL fixes
- every starvation case
- all fairness issues
- is better 95% of the time on the desktop

If we fix 95% of the desktop and worsen 5% is that bad given how much else 
we've gained in the process?

Anyway for my next trick I plan to make -nice values not suck again. So we can 
go full circle and start renicing X (only if you so desire) as well like we 
used to. I figure that's the only way left to satisfy all requirements to 
beat even those last 5%. However for the most part I don't even think 
renicing X will be required (and hasn't been prior to this testcase). 
Nonetheless unsucking negative nice values is probably worthwhile. 

I need time to make it so, though. Precious sleep and mood have been destroyed 
this week.

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 18:49                           ` Con Kolivas
@ 2007-03-12 19:06                             ` Xavier Bestel
  2007-03-13 17:21                               ` Valdis.Kletnieks
  2007-03-12 20:11                             ` Mike Galbraith
  1 sibling, 1 reply; 91+ messages in thread
From: Xavier Bestel @ 2007-03-12 19:06 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Mike Galbraith, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tuesday 13 March 2007 at 05:49 +1100, Con Kolivas wrote:
> Again I think your test is not a valid testcase. Why use two threads for your 
> encoding with one cpu? Is that what other dedicated desktop OSs would do?

One thought occurred to me (shit happens, sometimes): as your scheduler
is "strictly fair", won't that enable a trivial DoS by just letting a
user fork a multitude of CPU-intensive processes?

	Xav



^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 15:26                           ` Linus Torvalds
  2007-03-12 18:10                             ` Con Kolivas
@ 2007-03-12 19:36                             ` Peter Zijlstra
  2007-03-12 20:36                             ` Mike Galbraith
                                               ` (2 subsequent siblings)
  4 siblings, 0 replies; 91+ messages in thread
From: Peter Zijlstra @ 2007-03-12 19:36 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Mike Galbraith, Con Kolivas, Ingo Molnar,
	linux kernel mailing list, ck list, Andrew Morton

On Mon, 2007-03-12 at 08:26 -0700, Linus Torvalds wrote:

> So "good fairness" really should involve some notion of "work done for 
> others". It's just not very easy to do..

A solution that is already in demand is a class based scheduler, where
the thread doing work for a client temporarily joins the class of the
client.

The in-kernel virtualization guys also want to have this, for pretty
much the same reasons.

Peter


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 18:49                           ` Con Kolivas
  2007-03-12 19:06                             ` Xavier Bestel
@ 2007-03-12 20:11                             ` Mike Galbraith
  2007-03-12 20:38                               ` Con Kolivas
                                                 ` (2 more replies)
  1 sibling, 3 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12 20:11 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 05:49 +1100, Con Kolivas wrote:
> On Tuesday 13 March 2007 01:34, Mike Galbraith wrote:
> > On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:
> > > Mike the cpu is being proportioned out perfectly according to fairness as
> > > I mentioned in the prior email, yet X is getting the lower latency
> > > scheduling. I'm not sure within the bounds of fairness what more would
> > > you have happen to your liking with this test case?
> >
> > It has been said that "perfection is the enemy of good".  The two
> > interactive tasks receiving 40% cpu while two niced background jobs
> > receive 60% may well be perfect, but it's damn sure not good.
> 
> Again I think your test is not a valid testcase. Why use two threads for your 
> encoding with one cpu? Is that what other dedicated desktop OSs would do?

The testcase is perfectly valid.  My buddy's box has two full cores, so
we used two encoders such that whatever bandwidth is not being actively
consumed by more important things gets translated into mp3 encoding.

How would you go about ensuring that there won't be any cycles wasted?
 
_My_ box has 1 core that if fully utilized translates to 1.2 cores.. or
whatever, depending on the phase of the moon.  But no matter, logical vs
physical cpu argument is pure hand-waving.  What really matters here is
the bottom line: your fair scheduler ignores the very real requirements
of interactivity.

> And let's not lose sight of things with this one testcase.
> 
> RSDL fixes
> - every starvation case
> - all fairness issues
> - is better 95% of the time on the desktop

I don't know where you got that 95% number from.  For the most part, the
existing scheduler does well.  If it sucked 95% of the time, it would
have been shredded a long time ago.

> If we fix 95% of the desktop and worsen 5% is that bad given how much else 
> we've gained in the process?

Killing the known corner case starvation scenarios is wonderful, but
let's not just pretend that interactive tasks don't have any special
requirements.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 15:26                           ` Linus Torvalds
  2007-03-12 18:10                             ` Con Kolivas
  2007-03-12 19:36                             ` Peter Zijlstra
@ 2007-03-12 20:36                             ` Mike Galbraith
  2007-03-13  4:17                             ` Kyle Moffett
  2007-03-13  8:09                             ` Ingo Molnar
  4 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12 20:36 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Andrew Morton

On Mon, 2007-03-12 at 08:26 -0700, Linus Torvalds wrote:
> 
> On Mon, 12 Mar 2007, Mike Galbraith wrote:
> >
> > On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:
> > 
> > > Mike the cpu is being proportioned out perfectly according to fairness as I 
> > > mentioned in the prior email, yet X is getting the lower latency scheduling. 
> > > I'm not sure within the bounds of fairness what more would you have happen to 
> > > your liking with this test case?
> > 
> > It has been said that "perfection is the enemy of good".  The two
> > interactive tasks receiving 40% cpu while two niced background jobs
> > receive 60% may well be perfect, but it's damn sure not good.
> 
> Well, the real problem is really "server that works on behalf of somebody 
> else".

Yes, exactly.  We have a disconnect.  The process consists of both.  If
either client or server doesn't get enough, the process is a failure.

> X is just the worst *practical* example of this, since not only is it the 
> most common such server, it's also a case where people see interactive 
> issues really easily.
> 
> And the problem is that a lot of clients actually end up doing *more* in 
> the X server than they do themselves directly. Doing things like showing a 
> line of text on the screen is a lot more expensive than just keeping track 
> of that line of text, so you end up with the X server easily being marked 
> as getting "too much" CPU time, and the clients as being starved for CPU 
> time. And then you get bad interactive behaviour.
> 
> So "good fairness" really should involve some notion of "work done for 
> others". It's just not very easy to do..

Purely from the interactivity side, I connected the two via a simple 'tag X
as TASK_INTERACTIVE' thingy, and boosted the tasks it was waking.  That worked
for things like the heavy cpu visualizations while other things are
going on in the background.  It was full of evilness though.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:11                             ` Mike Galbraith
@ 2007-03-12 20:38                               ` Con Kolivas
  2007-03-12 20:45                                 ` Mike Galbraith
                                                   ` (2 more replies)
  2007-03-12 20:42                               ` Peter Zijlstra
  2007-03-12 21:05                               ` Serge Belyshev
  2 siblings, 3 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 20:38 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tuesday 13 March 2007 07:11, Mike Galbraith wrote:
> On Tue, 2007-03-13 at 05:49 +1100, Con Kolivas wrote:
> > On Tuesday 13 March 2007 01:34, Mike Galbraith wrote:
> > > On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:
> > > > Mike the cpu is being proportioned out perfectly according to
> > > > fairness as I mentioned in the prior email, yet X is getting the
> > > > lower latency scheduling. I'm not sure within the bounds of fairness
> > > > what more would you have happen to your liking with this test case?
> > >
> > > It has been said that "perfection is the enemy of good".  The two
> > > interactive tasks receiving 40% cpu while two niced background jobs
> > > receive 60% may well be perfect, but it's damn sure not good.
> >
> > Again I think your test is not a valid testcase. Why use two threads for
> > your encoding with one cpu? Is that what other dedicated desktop OSs
> > would do?
>
> The testcase is perfectly valid.  My buddy's box has two full cores, so
> we used two encoders such that whatever bandwidth is not being actively
> consumed by more important things gets translated into mp3 encoding.
>
> How would you go about ensuring that there won't be any cycles wasted?
>
> _My_ box has 1 core that if fully utilized translates to 1.2 cores.. or
> whatever, depending on the phase of the moon.  But no matter, logical vs
> physical cpu argument is pure hand-waving.  What really matters here is
> the bottom line: your fair scheduler ignores the very real requirements
> of interactivity.

Definitely not. It does not give unfair cpu towards interactive tasks. That's 
a very different argument.

> > And let's not lose sight of things with this one testcase.
> >
> > RSDL fixes
> > - every starvation case
> > - all fairness issues
> > - is better 95% of the time on the desktop
>
> I don't know where you got that 95% number from.  For the most part, the
> existing scheduler does well.  If it sucked 95% of the time, it would
> have been shredded a long time ago.

Check the number of feedback reports. I don't feel petty enough to count them 
personally to give you an accurate percentage.

> > If we fix 95% of the desktop and worsen 5% is that bad given how much
> > else we've gained in the process?
>
> Killing the known corner case starvation scenarios is wonderful, but
> let's not just pretend that interactive tasks don't have any special
> requirements.

Now you're really making a stretch of things. Where on earth did I say that 
interactive tasks don't have special requirements? It's a fundamental feature 
of this scheduler that I go to great pains to get them as low latency as 
possible and their fair share of cpu despite having a completely fair cpu 
distribution.

> 	-Mike

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:11                             ` Mike Galbraith
  2007-03-12 20:38                               ` Con Kolivas
@ 2007-03-12 20:42                               ` Peter Zijlstra
  2007-03-12 21:05                               ` Serge Belyshev
  2 siblings, 0 replies; 91+ messages in thread
From: Peter Zijlstra @ 2007-03-12 20:42 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Mon, 2007-03-12 at 21:11 +0100, Mike Galbraith wrote:

> How would you go about ensuring that there won't be any cycles wasted?

SCHED_IDLE or otherwise nice 19
 
> Killing the known corner case starvation scenarios is wonderful, but
> let's not just pretend that interactive tasks don't have any special
> requirements.

Interaction wants low latency; getting that is traditionally expressed
in priorities - the highest prio gets the lowest latency (all RTOSes work
like that).

There is nothing that warrants giving them more CPU time IMHO; if you
think they deserve more, express that using priorities.

Priorities are a well understood concept and they work; heuristics can
(and Murphy tells us they will) go wrong.

Getting the server/client thing working can be done without heuristics
using class based scheduling.
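
(To make the SCHED_IDLE suggestion concrete, here is a minimal sketch of a
task dropping itself into an idle policy before doing background work.  It
assumes a kernel that actually offers such a policy; mainline does not at
this point, while the -ck/RSDL trees have the equivalent SCHED_IDLEPRIO,
so the fallback #define below is purely illustrative.)

/* Minimal sketch: drop the calling task into the idle policy, then exec
 * the real background job.  If the running kernel does not support the
 * policy, sched_setscheduler() simply fails with EINVAL. */
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#ifndef SCHED_IDLE
#define SCHED_IDLE 5	/* illustrative only; not in 2.6.21 mainline headers */
#endif

int main(int argc, char *argv[])
{
	struct sched_param sp = { .sched_priority = 0 };

	if (sched_setscheduler(0, SCHED_IDLE, &sp) == -1) {
		perror("sched_setscheduler");
		return 1;
	}
	/* e.g. run the encoder:  ./idlewrap lame in.wav out.mp3 */
	if (argc > 1)
		execvp(argv[1], &argv[1]);
	return 0;
}

Run like that, the encoders soak up only otherwise-idle cycles, which is
roughly the "no wasted cycles" requirement without competing with the
nice 0 tasks at all.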


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:38                               ` Con Kolivas
@ 2007-03-12 20:45                                 ` Mike Galbraith
  2007-03-12 22:51                                   ` Con Kolivas
  2007-03-12 23:43                                   ` David Lang
  2007-03-12 21:34                                 ` [ck] " jos poortvliet
  2007-03-12 21:38                                 ` michael chang
  2 siblings, 2 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12 20:45 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 07:38 +1100, Con Kolivas wrote:
> On Tuesday 13 March 2007 07:11, Mike Galbraith wrote:
> >
> > Killing the known corner case starvation scenarios is wonderful, but
> > let's not just pretend that interactive tasks don't have any special
> > requirements.
> 
> Now you're really making a stretch of things. Where on earth did I say that 
> interactive tasks don't have special requirements? It's a fundamental feature 
> of this scheduler that I go to great pains to get them as low latency as 
> possible and their fair share of cpu despite having a completely fair cpu 
> distribution.

As soon as your cpu is fully utilized, fairness loses or interactivity
loses.  Pick one.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:11                             ` Mike Galbraith
  2007-03-12 20:38                               ` Con Kolivas
  2007-03-12 20:42                               ` Peter Zijlstra
@ 2007-03-12 21:05                               ` Serge Belyshev
  2007-03-12 21:41                                 ` Mike Galbraith
  2 siblings, 1 reply; 91+ messages in thread
From: Serge Belyshev @ 2007-03-12 21:05 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

Mike Galbraith <efault@gmx.de> writes:

[snip]
>> And let's not lose sight of things with this one testcase.
>> 
>> RSDL fixes
>> - every starvation case
>> - all fairness issues
>> - is better 95% of the time on the desktop
>
> I don't know where you got that 95% number from.  For the most part, the
> existing scheduler does well.  If it sucked 95% of the time, it would
> have been shredded a long time ago.
>

I'll tell you:

http://article.gmane.org/gmane.linux.kernel/500027
http://article.gmane.org/gmane.linux.kernel/502996
http://article.gmane.org/gmane.linux.kernel/500119
http://article.gmane.org/gmane.linux.kernel/500784
http://article.gmane.org/gmane.linux.kernel/500768
http://article.gmane.org/gmane.linux.kernel/502255
http://article.gmane.org/gmane.linux.kernel/502282
http://article.gmane.org/gmane.linux.kernel/503650
http://article.gmane.org/gmane.linux.kernel/503695
http://article.gmane.org/gmane.linux.kernel.ck/6512
http://article.gmane.org/gmane.linux.kernel.ck/6539
http://article.gmane.org/gmane.linux.kernel.ck/6565


Also, count my email too.
I've been using RSDL since day one on my laptop and my router/compute server,
and I won't go back to mainline; needless to say why.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:38                               ` Con Kolivas
  2007-03-12 20:45                                 ` Mike Galbraith
@ 2007-03-12 21:34                                 ` jos poortvliet
  2007-03-12 21:38                                 ` michael chang
  2 siblings, 0 replies; 91+ messages in thread
From: jos poortvliet @ 2007-03-12 21:34 UTC (permalink / raw)
  To: ck
  Cc: Con Kolivas, Mike Galbraith, Linus Torvalds,
	linux kernel mailing list, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 2244 bytes --]

On Monday 12 March 2007, Con Kolivas wrote:
> > > If we fix 95% of the desktop and worsen 5% is that bad given how much
> > > else we've gained in the process?
> >
> > Killing the known corner case starvation scenarios is wonderful, but
> > let's not just pretend that interactive tasks don't have any special
> > requirements.
>
> Now you're really making a stretch of things. Where on earth did I say that
> interactive tasks don't have special requirements? It's a fundamental
> feature of this scheduler that I go to great pains to get them as low
> latency as possible and their fair share of cpu despite having a completely
> fair cpu distribution.

As far as I understand it, RSDL always gives an equal share of cpu, but 
interactive tasks can have lower latency, right? So you get in trouble with 
interactive tasks only when their share isn't enough to actually do what they 
have to do in that period, e.g. on a heavily (over?)loaded box. Staircase, 
like mainline, which gave them MORE than their share, would support that 
(though this comes at a price).

So, if your box is overloaded to a great extent, X, which can use a lot of 
cpu, can get unresponsive - unless it's negatively niced. But most other apps 
aren't as demanding as X is, so they won't really suffer. Thus the problem is 
mostly X. And at least part of that problem - X wasting cpu cycles - is being 
solved. Also, cpus are getting faster, and I think it's likely X's 
relative CPU usage will go down as well.

In the long term, RSDL seems like the best way to go. Give X a negative nice, 
and you've gotten rid of most of the disadvantages. You still have the perfect 
fairness, and no stalls or starvation ;-)

If RSDL can be improved to help X, great. But reintroducing the problem 
which RSDL was supposed to solve would be pretty pointless. I think that's 
what grumpy Con is trying to say, and he's right about it.

grtz

Jos

-- 
Disclaimer:

Everything I do, think and say is based on the worldview I currently have.
I am not responsible for changes to the world, or to the picture I have of it,
nor for my own behaviour that follows from them.
Everything I say is meant kindly, unless explicitly stated otherwise.

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:38                               ` Con Kolivas
  2007-03-12 20:45                                 ` Mike Galbraith
  2007-03-12 21:34                                 ` [ck] " jos poortvliet
@ 2007-03-12 21:38                                 ` michael chang
  2007-03-13  0:09                                   ` Thibaut VARENE
  2007-03-13  6:08                                   ` Mike Galbraith
  2 siblings, 2 replies; 91+ messages in thread
From: michael chang @ 2007-03-12 21:38 UTC (permalink / raw)
  To: Con Kolivas, Mike Galbraith
  Cc: ck list, Linus Torvalds, linux kernel mailing list, Andrew Morton

On 3/12/07, Con Kolivas <kernel@kolivas.org> wrote:
> On Tuesday 13 March 2007 07:11, Mike Galbraith wrote:
> > On Tue, 2007-03-13 at 05:49 +1100, Con Kolivas wrote:
> > > On Tuesday 13 March 2007 01:34, Mike Galbraith wrote:
> > > > On Mon, 2007-03-12 at 22:23 +1100, Con Kolivas wrote:
> > > > > Mike the cpu is being proportioned out perfectly according to
> > > > > fairness as I mentioned in the prior email, yet X is getting the
> > > > > lower latency scheduling. I'm not sure within the bounds of fairness
> > > > > what more would you have happen to your liking with this test case?
> > > >
> > > > It has been said that "perfection is the enemy of good".  The two
> > > > interactive tasks receiving 40% cpu while two niced background jobs
> > > > receive 60% may well be perfect, but it's damn sure not good.
> > >
> > > Again I think your test is not a valid testcase. Why use two threads for
> > > your encoding with one cpu? Is that what other dedicated desktop OSs
> > > would do?
> >
> > The testcase is perfectly valid.  My buddy's box has two full cores, so
> > we used two encoders such that whatever bandwidth is not being actively
> > consumed by more important things gets translated into mp3 encoding.
> >
> > How would you go about ensuring that there won't be any cycles wasted?
> >
> > _My_ box has 1 core that if fully utilized translates to 1.2 cores.. or
> > whatever, depending on the phase of the moon.  But no matter, logical vs
> > physical cpu argument is pure hand-waving.  What really matters here is
> > the bottom line: your fair scheduler ignores the very real requirements
> > of interactivity.
>
> Definitely not. It does not give unfair cpu towards interactive tasks. That's
> a very different argument.

I think the issue here is that the scheduler is doing what Con expects
it to do, but not what Mike Galbraith here feels it should do. Maybe
Con and Mike are using different definitions of
"interactivity", or at least have different ideas of how it is
supposed to be accomplished. Does that sound right?

I've begun using RSDL on my machines here, and so far there haven't
been any issues with it, in my opinion. From a feel standpoint, it's
not what I would call perfectly smooth, but it is better than the
other schedulers I've seen (and in the one case where there are still
problems, it's an issue of I/O contention, not CPU -- using RSDL has
made a surprisingly large impact regardless).

Mike Galbraith, do you perhaps feel that it should be possible to use
the CPU at 100% for some task and still maintain excellent
interactivity? (It has always seemed to me that if you wanted
interactivity, you had to have the CPU idle at least a couple of percent
of the time. How much or how little that had to be was
usually affected by how much preemption you put in the kernel, and
what CPU scheduler was in it at the time.)

Considering the concepts put out by projects such as BOINC and
SETI@Home, I wouldn't be thoroughly surprised by this ideology,
although I do question the particular way this test case is being run.

That said, I haven't run the test case in particular yet, although I
will see if I can get the time to do so soon. In any case, I
personally do have a few qualms about this test case being run on HT
virtual cores:

* I am curious about why splitting a task and running them on separate
HT virtual cores improves interactivity any. (If it was Amarok on one
virtual CPU and one lame on the other, I would get it. But I see two
lame processes here -- wouldn't they just be allocated one to each
virtual CPU, leaving Amarok out most of the time? How do you get
interactivity with that?) Does using HT really fill up the CPU better
than having the CPU announce itself as the single core it is? My
understanding is that throughput goes down somewhat even just by using
multiple threads with HT, compared to the single thread on the single
core, and why would you use more than one lame thread unless you seek
throughput?

* Where are the lame processes encoding to/from? For example, are the
results for both being sent to /dev/null? To a hard drive? etc. etc.
In a real-world test case, I would imagine a user running TWO lame
processes would be encoding from two sources to the same hard drive.
(Or, they might even be both encoding FROM that same hard drive. Or
both.) The need for the single HD to seek so much reduces throughput
on most of these cases in HT, IIRC, which may be a factor that would
probably defeat the point of this case for most users. Of course, my
point is negated if they have multiple drives for their use of lame,
and/or if they have sufficient memory and bandwidth to handle the
issue, or if encoding throughput isn't their aim.

The only reason I can think of that running two lame processes would
improve "interactivity" would be so that if one particular portion
gets stuck, then there's a chance the other thread will be working on
an easier portion, making it appear like more is being done.  This
occurs, for example, with POV-Ray and Blender, where some parts of the
image may require more time to render than others due to the variable
complexity of various portions of a 3D image. In this case, using HT
or multiple threads makes it more likely that at least one thread will
be working on an "easy" spot, which would increase the number of
pixels on screen in e.g. the middle of the render. However, the
overall render usually wouldn't speed up on HT. In fact, the whole
image may even take slightly longer due to the overhead of the
threading, although that overhead is trivial for most users when they
are using it. (And of course, the overhead would be negligible when
you had two or more actual cores, because the increase would be more
like 1.85x when you factored out the overhead. HT, by contrast, would
give you, maybe 0.97x. Or whatever.)

As for the realism of the scenario, maybe a Linux
server that is broadcasting/encoding the same source at multiple bit
rates (e.g. Internet radio) might run multiple lame instances... on a
single core with HT. Not the most common thing in the world, but there
are still quite a few of these guys out there. Enough to be of
concern, IMO.

Con has indicated somewhere recently that he has an idea about improving
negative nice, among other things -- maybe RSDL isn't capable of
handling this test case yet, but it might be very soon. Considering that
any process not run at the top nice level ends up with pockets of
smoothness followed by pauses of a determinable size, processes like
lame that handle bits and pieces as fast as they come but need to do
so at a steady rate may act peculiarly (not smoothly?) under RSDL.

One last thing I'm not sure about: Mike, are you upset about lame
interfering with Amarok, or lame in and of itself? Which process do
you feel is getting too much or too little CPU? Why?

> > > And let's not lose sight of things with this one testcase.
> > >
> > > RSDL fixes
> > > - every starvation case
> > > - all fairness issues
> > > - is better 95% of the time on the desktop
> >
> > I don't know where you got that 95% number from.  For the most part, the
> > existing scheduler does well.  If it sucked 95% of the time, it would
> > have been shredded a long time ago.
>
> Check the number of feedback reports. I don't feel petty enough to count them
> personally to give you an accurate percentage.

I think Con is saying here that Mike, you are one of two or so people
so far to have given a primarily negative feedback report on RSDL. (I
think akpm hit a snag on some PPC box with -mm a while back, IIRC, and
then there's you. I'm only on -ck, though, so there may be others I
haven't heard about. But these two pale in comparison to the
compliments I've seen on-list.) Of course, it's still important that
we see WHY it doesn't work well for you, while everyone else is having
fewer (or no) issues.

> > > If we fix 95% of the desktop and worsen 5% is that bad given how much
> > > else we've gained in the process?
> >
> > Killing the known corner case starvation scenarios is wonderful, but
> > let's not just pretend that interactive tasks don't have any special
> > requirements.
>
> Now you're really making a stretch of things. Where on earth did I say that
> interactive tasks don't have special requirements? It's a fundamental feature
> of this scheduler that I go to great pains to get them as low latency as
> possible and their fair share of cpu despite having a completely fair cpu
> distribution.

This seems to me like he's saying that there has to be a mechanism
(outside of nice) that can be used to treat processes that "I" want to
be interactive all special-like. It feels like something that would
have come up in the design of the scheduler that used to be in -ck and is
currently in vanilla.

To me, that fundamentally clashes with the design behind RSDL. That
said, I could be wrong -- Con appears to have something that could be
very promising up his sleeve that could come out sooner or later. Once
he's written it, of course. In any case, RSDL seems very promising,
for the most part.

--
Michael Chang

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 21:05                               ` Serge Belyshev
@ 2007-03-12 21:41                                 ` Mike Galbraith
  0 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-12 21:41 UTC (permalink / raw)
  To: Serge Belyshev
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tue, 2007-03-13 at 00:05 +0300, Serge Belyshev wrote:
> Mike Galbraith <efault@gmx.de> writes:
> 
> [snip]
> >> And let's not lose sight of things with this one testcase.
> >> 
> >> RSDL fixes
> >> - every starvation case
> >> - all fairness issues
> >> - is better 95% of the time on the desktop
> >
> > I don't know where you got that 95% number from.  For the most part, the
> > existing scheduler does well.  If it sucked 95% of the time, it would
> > have been shredded a long time ago.
> >
> 
> I tell you.
> 
> http://article.gmane.org/gmane.linux.kernel/500027
> http://article.gmane.org/gmane.linux.kernel/502996
> http://article.gmane.org/gmane.linux.kernel/500119
> http://article.gmane.org/gmane.linux.kernel/500784
> http://article.gmane.org/gmane.linux.kernel/500768
> http://article.gmane.org/gmane.linux.kernel/502255
> http://article.gmane.org/gmane.linux.kernel/502282
> http://article.gmane.org/gmane.linux.kernel/503650
> http://article.gmane.org/gmane.linux.kernel/503695
> http://article.gmane.org/gmane.linux.kernel.ck/6512
> http://article.gmane.org/gmane.linux.kernel.ck/6539
> http://article.gmane.org/gmane.linux.kernel.ck/6565

Thanks, but I've already read them.  They are part of the reason I
decided to spend some time testing.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:45                                 ` Mike Galbraith
@ 2007-03-12 22:51                                   ` Con Kolivas
  2007-03-13  5:10                                     ` Mike Galbraith
  2007-03-16 16:42                                     ` Pavel Machek
  2007-03-12 23:43                                   ` David Lang
  1 sibling, 2 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-12 22:51 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On 13/03/07, Mike Galbraith <efault@gmx.de> wrote:
> On Tue, 2007-03-13 at 07:38 +1100, Con Kolivas wrote:
> > On Tuesday 13 March 2007 07:11, Mike Galbraith wrote:
> > >
> > > Killing the known corner case starvation scenarios is wonderful, but
> > > let's not just pretend that interactive tasks don't have any special
> > > requirements.
> >
> > Now you're really making a stretch of things. Where on earth did I say that
> > interactive tasks don't have special requirements? It's a fundamental feature
> > of this scheduler that I go to great pains to get them as low latency as
> > possible and their fair share of cpu despite having a completely fair cpu
> > distribution.
>
> As soon as your cpu is fully utilized, fairness loses or interactivity
> loses.  Pick one.

That's not true unless you refuse to prioritise your tasks
accordingly. Let's take this discussion in a different direction. You
already nice your lame processes. Why? You already have the concept
that you are prioritising things to normal or background tasks. You
say so yourself that lame is a background task. Stating the bleedingly
obvious, the unix way of prioritising things is via nice. You already
do that. So moving on from that...

Your test case you ask "how can I maximise cpu usage". Well you know
the answer already. You run two threads. I won't dispute that.

The debate seems to be centered on whether two tasks that are niced +5
or to a higher value is background. In my opinion, nice 5 is not
background, but relatively less cpu. You already are savvy enough to
be using two threads and nicing them. All I ask you to do when using
RSDL is to change your expectations slightly and your settings from
nice 5 to nice 10 or 15 or even 19. Why is that so offensive to you?
Nice 5 is 75% of the cpu of nice 0, nice 10 is 50%, nice 15 is 25%, and nice
19 is 5%. If you're so intent on defining nice 5 as background, would it
be a matter of me just modifying nice 5 to be 25% instead? I suspect
your answer will be no, because then you'll argue that you shouldn't
nice at all, but it should be interesting to see your response. You
seem to be advocating that the scheduler does everything and that we need
to implement some complex flag instead. I don't believe that's the
right thing to do at all. So I offer you some options.

1. Be happy with changing your nice from 5 to 15. I still don't think
this is in any way unreasonable.
2. Wait for me to fix the behaviour of negatively niced tasks and give
your X a negative nice. I plan to implement this change anyway, not
necessarily for X.
3. Have me redefine what nice 5 is, and tell me what percentage of cpu
you think is right.
4. Any combination of the above.

Please don't pick "5. None of the above". Please try to work with me on this.
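
(A minimal sketch of what option 1 amounts to for an already-running
encoder; the pid below is made up, and the same thing can be done from the
shell with "renice 15 -p <pid>" or by starting the job with "nice -n 15".)

/* Move a running process from nice 5 to nice 15.  Raising the nice value
 * of a process you own needs no privileges; lowering it back would. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/types.h>

int main(void)
{
	pid_t encoder_pid = 12345;	/* hypothetical pid of the lame process */

	if (setpriority(PRIO_PROCESS, encoder_pid, 15) == -1) {
		perror("setpriority");
		return 1;
	}
	printf("encoder now at nice %d\n",
	       getpriority(PRIO_PROCESS, encoder_pid));
	return 0;
}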

--
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 20:45                                 ` Mike Galbraith
  2007-03-12 22:51                                   ` Con Kolivas
@ 2007-03-12 23:43                                   ` David Lang
  2007-03-13  2:23                                     ` Lee Revell
  1 sibling, 1 reply; 91+ messages in thread
From: David Lang @ 2007-03-12 23:43 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Mon, 12 Mar 2007, Mike Galbraith wrote:

> On Tue, 2007-03-13 at 07:38 +1100, Con Kolivas wrote:
>> On Tuesday 13 March 2007 07:11, Mike Galbraith wrote:
>>>
>>> Killing the known corner case starvation scenarios is wonderful, but
>>> let's not just pretend that interactive tasks don't have any special
>>> requirements.
>>
>> Now you're really making a stretch of things. Where on earth did I say that
>> interactive tasks don't have special requirements? It's a fundamental feature
>> of this scheduler that I go to great pains to get them as low latency as
>> possible and their fair share of cpu despite having a completely fair cpu
>> distribution.
>
> As soon as your cpu is fully utilized, fairness loses or interactivity
> loses.  Pick one.

correct.

the problem is that it's hard (if not impossible) to properly identify what is 
needed to make a system have good interactivity. in some cases it's a matter of 
low latency (wake up a process as quickly as you can when whatever it was 
waiting on is available), but in others it's a matter of allocating the _right_ 
process enough CPU (X needs enough CPU to do things)

where it's a matter of needing low-latency, it's possible to design a scheduler 
that will do things in a predictable enough way that you know the max latency 
you have to deal with (and the RSDL seems to do this)

the problem comes when this isn't enough. if you have several CPU hogs on a 
system, and they are all around the same priority level, how can the scheduler 
know which one needs the CPU the most for good interactivity?

in some cases you may be able to directly detect that your high-priority process 
is waiting for another one (tracing pipes and local sockets for example), but 
what if you are waiting for several of them? (think a multimedia desktop waiting 
for the sound card, CDRom, hard drive, and video all at once) which one needs 
the extra CPU the most?

Fairness is much easier to enforce (and much easier to understand)

the RSDL is concentrating on enforcing fairness, with bounded (and predictable) 
latencies.

if you are willing to tell the system what you consider more important (and how 
much more important you consider it), then it's much easier to figure out who to 
give the CPU to. Con is just asking you to do this (and you already do, by doing 
a nice -5. but it sounds like you want that to mean more than it currently does)

David Lang



^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 21:38                                 ` michael chang
@ 2007-03-13  0:09                                   ` Thibaut VARENE
  2007-03-13  6:08                                   ` Mike Galbraith
  1 sibling, 0 replies; 91+ messages in thread
From: Thibaut VARENE @ 2007-03-13  0:09 UTC (permalink / raw)
  To: michael chang; +Cc: Con Kolivas, ck list, linux kernel mailing list

On 3/12/07, michael chang <thenewme91@gmail.com> wrote:

> Considering the concepts put out by projects such as BOINC and
> SETI@Home, I wouldn't be thoroughly surprised by this ideology,
> although I do question the particular way this test case is being run.

If Con actually implements SCHED_IDLEPRIO in RSDL, life is good even
in that case.

> This seems to me like he's saying that there has to be a mechanism
> (outside of nice) that can be used to treat processes that "I" want to
> be interactive all special-like. It feels like something that would
> have been said in the design of what the scheduler was in -ck and is
> currently in vanilla.

Exactly. This drives us again toward the fact that different workloads
might benefit from different schedulers (e.g. RSDL is cool for server
loads, the previous staircase did an excellent job on the desktop, etc.) and
thus that having a choice of schedulers might be something that would
satisfy (some) people...

> To me, that fundamentally clashes with the design behind RSDL. That
> said, I could be wrong -- Con appears to have something that could be
> very promising up his sleeve that could come out sooner or later. Once
> he's written it, of course. In any case, RSDL seems very promising,
> for the most part.

It certainly is. "Negative" feedback can be a good thing too, as it
helps improve it anyway. It's nonetheless true that it's practically
impossible to satisfy 100% of use cases with a single design, so
choices will have to be made.

HTH

T-Bone

-- 
Thibaut VARENE
http://www.parisc-linux.org/~varenet/

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 23:43                                   ` David Lang
@ 2007-03-13  2:23                                     ` Lee Revell
  2007-03-13  6:00                                       ` David Lang
  0 siblings, 1 reply; 91+ messages in thread
From: Lee Revell @ 2007-03-13  2:23 UTC (permalink / raw)
  To: David Lang
  Cc: Mike Galbraith, Con Kolivas, Ingo Molnar,
	linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On 3/12/07, David Lang <david.lang@digitalinsight.com> wrote:
> the problem comes when this isn't enough. if you have several CPU hogs on a
> system, and they are all around the same priority level, how can the scheduler
> know which one needs the CPU the most for good interactivity?
>
> in some cases you may be able to directly detect that your high-priority process
> is waiting for another one (tracing pipes and local sockets for example), but
> what if you are waiting for several of them? (think a multimedia desktop waiting
> for the sound card, CDRom, hard drive, and video all at once) which one needs
> the extra CPU the most?

I'm not an expert in this area by any means but after reading this
thread the OSX solution of simply telling the kernel "I'm the GUI,
schedule me accordingly" looks increasingly attractive.  Why make the
kernel guess when we can just be explicit?

Does anyone know of a UNIX-like system that has managed to solve this
problem without hooking the GUI into the scheduler?

Lee

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 15:26                           ` Linus Torvalds
                                               ` (2 preceding siblings ...)
  2007-03-12 20:36                             ` Mike Galbraith
@ 2007-03-13  4:17                             ` Kyle Moffett
  2007-03-13  8:09                             ` Ingo Molnar
  4 siblings, 0 replies; 91+ messages in thread
From: Kyle Moffett @ 2007-03-13  4:17 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Mike Galbraith, Con Kolivas, Ingo Molnar,
	linux kernel mailing list, ck list, Andrew Morton

On Mar 12, 2007, at 11:26:25, Linus Torvalds wrote:
> So "good fairness" really should involve some notion of "work done  
> for others". It's just not very easy to do..

Maybe extend UNIX sockets to add another passable object type analogous
to SCM_RIGHTS, except in this case "SCM_CPUTIME".  You call SCM_CPUTIME
with a time value in monotonic real-time nanoseconds (a duration) and a
value out of 100 indicating what percentage of your timeslices to give
to the process (for the specified duration).  The receiving process
would be informed of the estimated total number of nanoseconds of
timeslice that it will be given, based on the priority of the processes
(maybe it could prioritize requests?).  The X libraries could then
properly "pass" CPU time to the X server to help with rendering their
requests, and the X server could give priority to tasks which give up
more CPU time than is needed to render their data, and penalize those
which use more than they give.  Even if you don't patch the X server
initially, you could at least patch the X clients to give up CPU to the
X server to promote interactivity.
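
(No SCM_CPUTIME exists anywhere today; the sketch below only shows the
existing SCM_RIGHTS ancillary-data pattern over an AF_UNIX socket that such
a message type would mimic -- the hypothetical donation payload lives only
in the comment.)

/* Existing pattern: pass one file descriptor with SCM_RIGHTS.  A
 * hypothetical SCM_CPUTIME would use the same cmsg machinery, but carry
 * a {duration_ns, percent} payload instead of an fd. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd)
{
	char dummy = 'x';
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	union {
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;	/* ensures correct cmsg alignment */
	} u;
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = u.buf;
	msg.msg_controllen = sizeof(u.buf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;	/* the only passable object type today */
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}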

Cheers,
Kyle Moffett


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 22:51                                   ` Con Kolivas
@ 2007-03-13  5:10                                     ` Mike Galbraith
  2007-03-13  5:53                                       ` Con Kolivas
  2007-03-13  8:18                                       ` Ingo Molnar
  2007-03-16 16:42                                     ` Pavel Machek
  1 sibling, 2 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13  5:10 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 09:51 +1100, Con Kolivas wrote:
> On 13/03/07, Mike Galbraith <efault@gmx.de> wrote:

> > As soon as your cpu is fully utilized, fairness loses or interactivity
> > loses.  Pick one.
> 
> That's not true unless you refuse to prioritise your tasks
> accordingly. Let's take this discussion in a different direction. You
> already nice your lame processes. Why? You already have the concept
> that you are prioritising things to normal or background tasks. You
> say so yourself that lame is a background task. Stating the bleedingly
> obvious, the unix way of prioritising things is via nice. You already
> do that. So moving on from that...

Sure.  If a user wants to do anything interactive, they can indeed nice
19 the rest of their box before they start.

> Your test case you ask "how can I maximise cpu usage". Well you know
> the answer already. You run two threads. I won't dispute that.
> 
> The debate seems to be centered on whether two tasks that are niced +5
> or to a higher value is background. In my opinion, nice 5 is not
> background, but relatively less cpu. You already are savvy enough to
> be using two threads and nicing them. All I ask you to do when using
> RSDL is to change your expectations slightly and your settings from
> nice 5 to nice 10 or 15 or even 19. Why is that so offensive to you?

It's not "offensive" to me, it is a behavioral regression.  The
situation as we speak is that you can run cpu intensive tasks while
watching eye-candy.  With RSDL, you can't, you feel the non-interactive
load instantly.  Doesn't the fact that you're asking me to lower my
expectations tell you that I just might have a point?

> Please don't pick "5. None of the above". Please try to work with me on this.

I'm not trying to be pig-headed.  I'm of the opinion that fairness is
great... until you strictly enforce it wrt interactive tasks.  

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  5:10                                     ` Mike Galbraith
@ 2007-03-13  5:53                                       ` Con Kolivas
  2007-03-13  6:08                                         ` [ck] " Rodney Gordon II
                                                           ` (3 more replies)
  2007-03-13  8:18                                       ` Ingo Molnar
  1 sibling, 4 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-13  5:53 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tuesday 13 March 2007 16:10, Mike Galbraith wrote:
> On Tue, 2007-03-13 at 09:51 +1100, Con Kolivas wrote:
> > On 13/03/07, Mike Galbraith <efault@gmx.de> wrote:
> > > As soon as your cpu is fully utilized, fairness loses or interactivity
> > > loses.  Pick one.
> >
> > That's not true unless you refuse to prioritise your tasks
> > accordingly. Let's take this discussion in a different direction. You
> > already nice your lame processes. Why? You already have the concept
> > that you are prioritising things to normal or background tasks. You
> > say so yourself that lame is a background task. Stating the bleedingly
> > obvious, the unix way of prioritising things is via nice. You already
> > do that. So moving on from that...
>
> Sure.  If a user wants to do anything interactive, they can indeed nice
> 19 the rest of their box before they start.
>
> > Your test case you ask "how can I maximise cpu usage". Well you know
> > the answer already. You run two threads. I won't dispute that.
> >
> > The debate seems to be centered on whether two tasks that are niced +5
> > or to a higher value is background. In my opinion, nice 5 is not
> > background, but relatively less cpu. You already are savvy enough to
> > be using two threads and nicing them. All I ask you to do when using
> > RSDL is to change your expectations slightly and your settings from
> > nice 5 to nice 10 or 15 or even 19. Why is that so offensive to you?
>
> It's not "offensive" to me, it is a behavioral regression.  The
> situation as we speak is that you can run cpu intensive tasks while
> watching eye-candy.  With RSDL, you can't, you feel the non-interactive
> load instantly.  Doesn't the fact that you're asking me to lower my
> expectations tell you that I just might have a point?

Yet looking at the mainline scheduler code, nice 5 tasks are also supposed to 
get 75% cpu compared to nice 0 tasks, however I cannot seem to get 75% cpu 
with a fully cpu bound task in the presence of an interactive task. To me 
that means mainline is not living up to my expectations. What you're saying 
is your expectations are based on a false cpu expectation from nice 5. You 
can spin it both ways. It seems to me the only one that lives up to a defined 
expectation is to be fair. Anything else is at best vague, and at worst 
starvation prone.

> > Please don't pick "5. none of the above". Please try to work with me on
> > this.
>
> I'm not trying to be pig-headed.  I'm of the opinion that fairness is
> great... until you strictly enforce it wrt interactive tasks.

How about answering my question then since I offered you numerous combinations 
of ways to tackle the problem? The simplest one doesn't even need code, it 
just needs you to alter the nice value that you're already setting.

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  2:23                                     ` Lee Revell
@ 2007-03-13  6:00                                       ` David Lang
  0 siblings, 0 replies; 91+ messages in thread
From: David Lang @ 2007-03-13  6:00 UTC (permalink / raw)
  To: Lee Revell
  Cc: Mike Galbraith, Con Kolivas, Ingo Molnar,
	linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Mon, 12 Mar 2007, Lee Revell wrote:

> On 3/12/07, David Lang <david.lang@digitalinsight.com> wrote:
>> the problem comes when this isn't enough. if you have several CPU hogs on a
>> system, and they are all around the same priority level, how can the 
>> scheduler
>> know which one needs the CPU the most for good interactivity?
>> 
>> in some cases you may be able to directly detect that your high-priority 
>> process
>> is waiting for another one (tracing pipes and local sockets for example), 
>> but
>> what if you are waiting for several of them? (think a multimedia desktop 
>> waiting
>> for the sound card, CDRom, hard drive, and video all at once) which one 
>> needs
>> the extra CPU the most?
>
> I'm not an expert in this area by any means but after reading this
> thread the OSX solution of simply telling the kernel "I'm the GUI,
> schedule me accordingly" looks increasingly attractive.  Why make the
> kernel guess when we can just be explicit?

this can solve the specific problem (and since 'nice' is the natural way to tell 
the kernel this, it's not even a one-shot solution).

however Linus is right, the real underlying problem is where the user is 
waiting on a server. if this issue could be solved then a lot of things would 
benefit.

Con, as a quick hack (probably a bad idea as I'm not a scheduling expert), if a 
program blocks on another program (via a pipe or socket) could you easily give 
the rest of the first program's timeslice to the second one, without making it 
lose its own?

I'm thinking that doing the dumb thing and just throwing a bit more CPU at the 
thing you are waiting for may work. (assuming that the server process actually 
does something useful with the extra CPU time it gets)

as far as latencies go, it would be like turning every process on the system 
into a cpu hog.
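
to make the donation idea a bit more concrete, here's a tiny bit of untested 
pseudocode (the struct and hook below are invented for illustration and are 
not against any real scheduler code):

struct task {
	int time_slice;		/* the task's own remaining ticks */
	int donated_slice;	/* extra ticks received from blocked clients */
};

/* hypothetical hook, called when 'client' is about to sleep on a pipe or
 * AF_UNIX socket whose other end belongs to 'server' */
static void donate_remaining_slice(struct task *client, struct task *server)
{
	/* hand the server whatever the client had left... */
	server->donated_slice += client->time_slice;
	client->time_slice = 0;
	/* ...and have the scheduler burn donated_slice before time_slice,
	 * so the server never loses its own allocation */
}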

David Lang

> Does anyone know of a UNIX-like system that has managed to solve this
> problem without hooking the GUI into the scheduler?
>
> Lee
>

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  5:53                                       ` Con Kolivas
@ 2007-03-13  6:08                                         ` Rodney Gordon II
  2007-03-13  6:17                                         ` Mike Galbraith
                                                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 91+ messages in thread
From: Rodney Gordon II @ 2007-03-13  6:08 UTC (permalink / raw)
  To: ck
  Cc: Con Kolivas, Mike Galbraith, Linus Torvalds,
	linux kernel mailing list, Andrew Morton

On Tuesday 13 March 2007 00:53, Con Kolivas wrote:
> On Tuesday 13 March 2007 16:10, Mike Galbraith wrote:
> > On Tue, 2007-03-13 at 09:51 +1100, Con Kolivas wrote:
> > > On 13/03/07, Mike Galbraith <efault@gmx.de> wrote:
> > > > As soon as your cpu is fully utilized, fairness loses or
> > > > interactivity loses.  Pick one.
> > >
> > > That's not true unless you refuse to prioritise your tasks
> > > accordingly. Let's take this discussion in a different direction. You
> > > already nice your lame processes. Why? You already have the concept
> > > that you are prioritising things to normal or background tasks. You
> > > say so yourself that lame is a background task. Stating the bleedingly
> > > obvious, the unix way of prioritising things is via nice. You already
> > > do that. So moving on from that...
> >
> > Sure.  If a user wants to do anything interactive, they can indeed nice
> > 19 the rest of their box before they start.
> >
> > > Your test case you ask "how can I maximise cpu usage". Well you know
> > > the answer already. You run two threads. I won't dispute that.
> > >
> > > The debate seems to be centered on whether two tasks that are niced +5
> > > or to a higher value is background. In my opinion, nice 5 is not
> > > background, but relatively less cpu. You already are savvy enough to
> > > be using two threads and nicing them. All I ask you to do when using
> > > RSDL is to change your expectations slightly and your settings from
> > > nice 5 to nice 10 or 15 or even 19. Why is that so offensive to you?
> >
> > It's not "offensive" to me, it is a behavioral regression.  The
> > situation as we speak is that you can run cpu intensive tasks while
> > watching eye-candy.  With RSDL, you can't, you feel the non-interactive
> > load instantly.  Doesn't the fact that you're asking me to lower my
> > expectations tell you that I just might have a point?

I barely feel any non-interactive load. See below.

>
> Yet looking at the mainline scheduler code, nice 5 tasks are also supposed
> to get 75% cpu compared to nice 0 tasks, however I cannot seem to get 75%
> cpu with a fully cpu bound task in the presence of an interactive task. To
> me that means mainline is not living up to my expectations. What you're
> saying is your expectations are based on a false cpu expectation from nice
> 5. You can spin it both ways. It seems to me the only one that lives up to
> a defined expectation is to be fair. Anything else is at best vague, and at
> worst starvation prone.
>
> > > Please don't pick "5. none of the above". Please try to work with me on
> > > this.
> >
> > I'm not trying to be pig-headed.  I'm of the opinion that fairness is
> > great... until you strictly enforce it wrt interactive tasks.
>
> How about answering my question then since I offered you numerous
> combinations of ways to tackle the problem? The simplest one doesn't even
> need code, it just needs you to alter the nice value that you're already
> setting.

Also, just to chime in, I am doing a large project converting over 250GB of 
FLAC audio to MP3 via lame for my archive.

I am using 2.6.20.2-rsdl0.30, and I have 2 processes of flac decoding/lame 
encoding running simultaneously from a perl script I hacked up on my P-D 830. 
These processes are both nice'd to 19.

I have almost no degradation in latency in my usage of X (which is at nice 0), 
if that matters at all. Please try what Con is suggesting by adjusting your 
nice level, and see if that helps you at all.

These are just useless arguments; the time is better spent on coding and fixing 
real problems than on a flamewar over whether nice 5 is good enough or not.

Con's rsdl implements what ingosched was supposed to do, wrt the niceness 
levels. Perhaps, Mike, you are used to the impression ingosched gave you with 
nice +5, but try something else as Con suggested... +10, +15, hell, whatever. 
Is that so hard?

My 2c,
-r

-- 
Rodney "meff" Gordon II -*- meff@pobox.com
Systems Administrator / Coder Geek -*- Open yourself to OpenSource

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 21:38                                 ` michael chang
  2007-03-13  0:09                                   ` Thibaut VARENE
@ 2007-03-13  6:08                                   ` Mike Galbraith
  2007-03-13  6:16                                     ` Con Kolivas
  1 sibling, 1 reply; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13  6:08 UTC (permalink / raw)
  To: michael chang
  Cc: Con Kolivas, ck list, Linus Torvalds, linux kernel mailing list,
	Andrew Morton

On Mon, 2007-03-12 at 17:38 -0400, michael chang wrote:

> Perhaps, Mike Galbraith, do you feel that it should be possible to use
> the CPU at 100% for some task and still maintain excellent
> interactivity?

Within reason, yes.  Defining "reason" is difficult.  As we speak, this
is possible to a much greater degree than with RSDL.  Before anybody
pipes in, yes, I'm very much aware of the down side of the interactivity
estimator, I've waged bloody battles with it, and have the t-shirt :)

> That said, I haven't run the test case in particular yet, although I
> will see if I can get the time to do so soon. In any case, I
> personally do have a few qualms about this test case being run on HT
> virtual cores:

Virtual or physical cores has nothing to do with the interactivity
regression I noticed.  Two nice 0 tasks which combined used 50% of my
box can no longer share that box with two nice 5 tasks and receive the
50% they need to perform.  That's it. From there, we wandered off into a
discussion on the relative merit and pitfalls of fairness.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for  2.6.21-rc3-mm2
  2007-03-13  6:08                                   ` Mike Galbraith
@ 2007-03-13  6:16                                     ` Con Kolivas
  2007-03-13  6:30                                       ` Mike Galbraith
  0 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-13  6:16 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: michael chang, ck list, Linus Torvalds,
	linux kernel mailing list, Andrew Morton

On Tuesday 13 March 2007 17:08, Mike Galbraith wrote:
> Virtual or physical cores has nothing to do with the interactivity
> regression I noticed.  Two nice 0 tasks which combined used 50% of my
> box can no longer share that box with two nice 5 tasks and receive the
> 50% they need to perform.  That's it. From there, we wandered off into a
> discussion on the relative merit and pitfalls of fairness.

And again, with X in its current implementation it is NOT like two nice 0 
tasks at all; it is like one nice 0 task. This is being fixed in the X design 
as we speak.

> 	-Mike

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  5:53                                       ` Con Kolivas
  2007-03-13  6:08                                         ` [ck] " Rodney Gordon II
@ 2007-03-13  6:17                                         ` Mike Galbraith
  2007-03-13  7:53                                         ` Mike Galbraith
  2007-03-13  8:22                                         ` Ingo Molnar
  3 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13  6:17 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 16:53 +1100, Con Kolivas wrote:
> On Tuesday 13 March 2007 16:10, Mike Galbraith wrote:

> > I'm not trying to be pig-headed.  I'm of the opinion that fairness is
> > great... until you strictly enforce it wrt interactive tasks.
> 
> How about answering my question then since I offered you numerous combinations 
> of ways to tackle the problem? The simplest one doesn't even need code, it 
> just needs you to alter the nice value that you're already setting.

Hey, you specifically asked me to not choose 5 :)  (I mentioned 5
earlier in the thread anyway, so no sense in repeating myself)

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for  2.6.21-rc3-mm2
  2007-03-13  6:16                                     ` Con Kolivas
@ 2007-03-13  6:30                                       ` Mike Galbraith
  0 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13  6:30 UTC (permalink / raw)
  To: Con Kolivas
  Cc: michael chang, ck list, Linus Torvalds,
	linux kernel mailing list, Andrew Morton

On Tue, 2007-03-13 at 17:16 +1100, Con Kolivas wrote:
> On Tuesday 13 March 2007 17:08, Mike Galbraith wrote:
> > Virtual or physical cores has nothing to do with the interactivity
> > regression I noticed.  Two nice 0 tasks which combined used 50% of my
> > box can no longer share that box with two nice 5 tasks and receive the
> > 50% they need to perform.  That's it. From there, we wandered off into a
> > discussion on the relative merit and pitfalls of fairness.
> 
> And again, with X in its current implementation it is NOT like two nice 0 
> tasks at all; it is like one nice 0 task. This is being fixed in the X design 
> as we speak.

Shrug.  I don't live then, I live now.  I have expressed my concerns,
and will now switch from talk back to listen mode.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  5:53                                       ` Con Kolivas
  2007-03-13  6:08                                         ` [ck] " Rodney Gordon II
  2007-03-13  6:17                                         ` Mike Galbraith
@ 2007-03-13  7:53                                         ` Mike Galbraith
  2007-03-13  8:22                                         ` Ingo Molnar
  3 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13  7:53 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 16:53 +1100, Con Kolivas wrote:
> On Tuesday 13 March 2007 16:10, Mike Galbraith wrote:

> > It's not "offensive" to me, it is a behavioral regression.  The
> > situation as we speak is that you can run cpu intensive tasks while
> > watching eye-candy.  With RSDL, you can't, you feel the non-interactive
> > load instantly.  Doesn't the fact that you're asking me to lower my
> > expectations tell you that I just might have a point?
> 
> Yet looking at the mainline scheduler code, nice 5 tasks are also supposed to 
> get 75% cpu compared to nice 0 tasks, however I cannot seem to get 75% cpu 
> with a fully cpu bound task in the presence of an interactive task.

(One more comment before I go.  You can then have the last word this
time, promise :)

Because the interactivity logic, which was put there to do precisely
this, is doing its job?

>  To me 
> that means mainline is not living up to my expectations. What you're saying 
> is your expectations are based on a false cpu expectation from nice 5. You 
> can spin it both ways.

Talk about spin: you turn an example of the current scheduler working
properly into a negative attribute, and attempt to discredit me with it.

The floor is yours.  No reply will be forthcoming.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 15:26                           ` Linus Torvalds
                                               ` (3 preceding siblings ...)
  2007-03-13  4:17                             ` Kyle Moffett
@ 2007-03-13  8:09                             ` Ingo Molnar
  4 siblings, 0 replies; 91+ messages in thread
From: Ingo Molnar @ 2007-03-13  8:09 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Mike Galbraith, Con Kolivas, linux kernel mailing list, ck list,
	Andrew Morton


* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> > It has been said that "perfection is the enemy of good".  The two 
> > interactive tasks receiving 40% cpu while two niced background jobs 
> > receive 60% may well be perfect, but it's damn sure not good.
> 
> Well, the real problem is really "server that works on behalf of 
> somebody else".

i think Mike's testcase was even simpler than that: two plain CPU hogs 
on nice +5 stole much more CPU time with Con's new interactivity code 
than they did with the current interactivity code. I'd agree with Mike 
that a phenomenon like that needs to be fixed.

/less/ interactivity we can do easily in the current scheduler: just 
remove various bits here and there. The RSDL promise is that it gives us 
/more/ interactivity (with 'interactivity designed in', etc.), which in 
Mike's testcase does not seem to be the case.

> And the problem is that a lot of clients actually end up doing *more* 
> in the X server than they do themselves directly.

yeah. It's a hard case because X is not always a _clear_ interactive 
task - still the current interactivity code handles it quite well.

but Mike's scenario wasn't even that complex. It wasn't even a hard case 
of X being starved by _other_ interactive tasks running on the same nice 
level. Mike's test-scenario was about two plain nice +5 CPU hogs 
starving nice +0 interactive tasks more than the current scheduler does, 
and this is really not an area where we want to see any regression. Con, 
could you work on this area a bit more?

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  5:10                                     ` Mike Galbraith
  2007-03-13  5:53                                       ` Con Kolivas
@ 2007-03-13  8:18                                       ` Ingo Molnar
  2007-03-13  8:22                                         ` Mike Galbraith
                                                           ` (3 more replies)
  1 sibling, 4 replies; 91+ messages in thread
From: Ingo Molnar @ 2007-03-13  8:18 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton


* Mike Galbraith <efault@gmx.de> wrote:

> [...] The situation as we speak is that you can run cpu intensive 
> tasks while watching eye-candy.  With RSDL, you can't, you feel the 
> non-interactive load instantly. [...]

i have to agree with Mike that this is a material regression that cannot 
be talked around.

Con, we want RSDL to /improve/ interactivity. Having new scheduler 
interactivity logic that behaves /worse/ in the presence of CPU hogs, 
which CPU hogs are even reniced to +5, than the current interactivity 
code, is i think a non-starter. Could you try to fix this, please? Good 
interactivity in the presence of CPU hogs (be them default nice level or 
nice +5) is _the_ most important scheduler interactivity metric. 
Anything else is really secondary.

	Ingo

ps. please be nice to each other - both of you are long-time
    scheduler contributors who did lots of cool stuff :-)

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  8:18                                       ` Ingo Molnar
@ 2007-03-13  8:22                                         ` Mike Galbraith
  2007-03-13  9:21                                         ` Con Kolivas
                                                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13  8:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Con Kolivas, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 09:18 +0100, Ingo Molnar wrote:

> ps. please be nice to each other - both of you are long-time
>     scheduler contributors who did lots of cool stuff :-)

It's no big deal; Con and I just seem to be oil and water.  He'll have
to be oil, because water is already taken.  *evaporate* :)


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  5:53                                       ` Con Kolivas
                                                           ` (2 preceding siblings ...)
  2007-03-13  7:53                                         ` Mike Galbraith
@ 2007-03-13  8:22                                         ` Ingo Molnar
  3 siblings, 0 replies; 91+ messages in thread
From: Ingo Molnar @ 2007-03-13  8:22 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton


* Con Kolivas <kernel@kolivas.org> wrote:

> > It's not "offensive" to me, it is a behavioral regression.  The 
> > situation as we speak is that you can run cpu intensive tasks while 
> > watching eye-candy.  With RSDL, you can't, you feel the 
> > non-interactive load instantly.  Doesn't the fact that you're asking 
> > me to lower my expectations tell you that I just might have a point?
> 
> Yet looking at the mainline scheduler code, nice 5 tasks are also 
> supposed to get 75% cpu compared to nice 0 tasks, however I cannot 
> seem to get 75% cpu with a fully cpu bound task in the presence of an 
> interactive task. [...]

i'm sorry, but your argument seems to be negated. We of course have no 
problem with interactive tasks stealing CPU time from CPU hogs. The 
situation Mike found is _the other direction_: that /CPU hogs/ stole 
from interactive tasks. That's bad and needs to be fixed. Please?

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  8:18                                       ` Ingo Molnar
  2007-03-13  8:22                                         ` Mike Galbraith
@ 2007-03-13  9:21                                         ` Con Kolivas
  2007-03-13  9:29                                           ` Ingo Molnar
  2007-03-13  9:31                                           ` [ck] " Con Kolivas
  2007-03-13  9:33                                         ` Mike Galbraith
  2007-03-13 15:15                                         ` David Schwartz
  3 siblings, 2 replies; 91+ messages in thread
From: Con Kolivas @ 2007-03-13  9:21 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tuesday 13 March 2007 19:18, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
> > [...] The situation as we speak is that you can run cpu intensive
> > tasks while watching eye-candy.  With RSDL, you can't, you feel the
> > non-interactive load instantly. [...]
>
> i have to agree with Mike that this is a material regression that cannot
> be talked around.
>
> Con, we want RSDL to /improve/ interactivity. Having new scheduler
> interactivity logic that behaves /worse/ in the presence of CPU hogs,
> which CPU hogs are even reniced to +5, than the current interactivity
> code, is i think a non-starter. Could you try to fix this, please? Good
> interactivity in the presence of CPU hogs (be them default nice level or
> nice +5) is _the_ most important scheduler interactivity metric.
> Anything else is really secondary.

Well I guess you must have missed where I asked him if he would be happy if I 
changed +5 metrics to do whatever he wanted and he refused to answer me. That 
would easily fit within that scheme. Any percentage of nice value he chose. I 
suggest 50% of nice 0. Heck I can even increase it if he likes. All I asked 
for was an answer as to whether that would satisfy his criterion.

> 	Ingo
>
> ps. please be nice to each other - both of you are long-time
>     scheduler contributors who did lots of cool stuff :-)

I have been civil. Only one email crossed the line on my part and I apologise.

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:21                                         ` Con Kolivas
@ 2007-03-13  9:29                                           ` Ingo Molnar
  2007-03-13  9:41                                             ` Con Kolivas
  2007-03-13  9:31                                           ` [ck] " Con Kolivas
  1 sibling, 1 reply; 91+ messages in thread
From: Ingo Molnar @ 2007-03-13  9:29 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton


* Con Kolivas <kernel@kolivas.org> wrote:

> Well I guess you must have missed where I asked him if he would be 
> happy if I changed +5 metrics to do whatever he wanted and he refused 
> to answer me. [...]

I'd say let's keep nice levels out of this completely for now - while 
they should work _too_, it's easy because the scheduler has the 'nice' 
information. It's the basic behavior of CPU hogs that matters most.

So the question is: if all tasks are on the same nice level, how does, 
in Mike's test scenario, RSDL behave relative to the current 
interactivity code?

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:21                                         ` Con Kolivas
  2007-03-13  9:29                                           ` Ingo Molnar
@ 2007-03-13  9:31                                           ` Con Kolivas
  2007-03-13 10:24                                             ` Xavier Bestel
  1 sibling, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-13  9:31 UTC (permalink / raw)
  To: ck
  Cc: Ingo Molnar, Andrew Morton, Mike Galbraith, Linus Torvalds,
	linux kernel mailing list

On Tuesday 13 March 2007 20:21, Con Kolivas wrote:
> On Tuesday 13 March 2007 19:18, Ingo Molnar wrote:
> > * Mike Galbraith <efault@gmx.de> wrote:
> > > [...] The situation as we speak is that you can run cpu intensive
> > > tasks while watching eye-candy.  With RSDL, you can't, you feel the
> > > non-interactive load instantly. [...]
> >
> > i have to agree with Mike that this is a material regression that cannot
> > be talked around.
> >
> > Con, we want RSDL to /improve/ interactivity. Having new scheduler
> > interactivity logic that behaves /worse/ in the presence of CPU hogs,
> > which CPU hogs are even reniced to +5, than the current interactivity
> > code, is i think a non-starter. Could you try to fix this, please? Good
> > interactivity in the presence of CPU hogs (be them default nice level or
> > nice +5) is _the_ most important scheduler interactivity metric.
> > Anything else is really secondary.
>
> Well I guess you must have missed where I asked him if he would be happy if
> I changed +5 metrics to do whatever he wanted and he refused to answer me.
> That would easily fit within that scheme. Any percentage of nice value he
> chose. I suggest 50% of nice 0. Heck I can even increase it if he likes.
> All I asked for was an answer as to whether that would satisfy his
> criterion.

It seems Mike has chosen to go silent, so I'll guess on his part. 

nice on my debian etch seems to choose nice +10 without arguments, contrary to 
a previous discussion that said 4 was the default. However 4 is a good value 
to use as a base of sorts.

What I propose is as a proportion of nice 0:
nice 4 1/2
nice 8 1/4
nice 12 1/8
nice 16 1/16
nice 20 1/32 (of course nice 20 doesn't exist)

and we can do the opposite in the other direction
nice -4 2
nice -8 4
nice -12 8
nice -16 16
nice -20 32

Assuming no further discussion is forthcoming, I'll implement that along with 
Al's suggestion for staggering the latencies better with nice differences, 
since the two are changing the same thing.
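
For what it's worth, the table above is just share = 2^(-nice/4) relative to 
nice 0. A throwaway userspace sketch of that mapping (nothing to do with the 
actual RSDL patch, just to show the numbers; build with -lm):

#include <stdio.h>
#include <math.h>

/* proportion of cpu relative to a nice 0 task: halves every 4 nice levels
 * up, doubles every 4 nice levels down, i.e. 2^(-nice/4) */
static double share_vs_nice0(int nice)
{
	return pow(2.0, -nice / 4.0);
}

int main(void)
{
	int nice;

	/* step by 4 to reproduce the table above; nice 20 doesn't exist */
	for (nice = -20; nice <= 20; nice += 4)
		printf("nice %3d -> %g of a nice 0 task\n",
		       nice, share_vs_nice0(nice));
	return 0;
}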

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  8:18                                       ` Ingo Molnar
  2007-03-13  8:22                                         ` Mike Galbraith
  2007-03-13  9:21                                         ` Con Kolivas
@ 2007-03-13  9:33                                         ` Mike Galbraith
  2007-03-13  9:39                                           ` Ingo Molnar
  2007-03-13 14:17                                           ` Matt Mackall
  2007-03-13 15:15                                         ` David Schwartz
  3 siblings, 2 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13  9:33 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Con Kolivas, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 09:18 +0100, Ingo Molnar wrote:

> Con, we want RSDL to /improve/ interactivity. Having new scheduler 
> interactivity logic that behaves /worse/ in the presence of CPU hogs, 
> which CPU hogs are even reniced to +5, than the current interactivity 
> code, is i think a non-starter. Could you try to fix this, please? Good 
> interactivity in the presence of CPU hogs (be them default nice level or 
> nice +5) is _the_ most important scheduler interactivity metric. 
> Anything else is really secondary.

I just retested with the encoders at nice 0, and the x/gforce combo is
terrible.  Funny thing though, x/gforce isn't as badly affected with a
kernel build.  Any build is quite noticeable, but even at -j8, the effect
doesn't seem to be (very brief test warning applies) as bad as with only
the two encoders running.  That seems quite odd.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:33                                         ` Mike Galbraith
@ 2007-03-13  9:39                                           ` Ingo Molnar
  2007-03-13 10:06                                             ` Con Kolivas
  2007-03-13 14:17                                           ` Matt Mackall
  1 sibling, 1 reply; 91+ messages in thread
From: Ingo Molnar @ 2007-03-13  9:39 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton


* Mike Galbraith <efault@gmx.de> wrote:

> I just retested with the encoders at nice 0, and the x/gforce combo is 
> terrible. [...]

ok. So nice levels had nothing to do with it - it's some other 
regression somewhere. How does the vanilla scheduler cope with exactly the 
same workload? I.e. could you describe the 'delta' difference in 
behavior - because the delta is what we are interested in mostly, the 
'absolute' behavior alone is not sufficient. Something like:

 - on scheduler foo, under this workload, the CPU hogs steal 70% CPU 
   time and the resulting desktop experience is 'choppy': mouse pointer 
   is laggy and audio skips.

 - on scheduler bar, under this workload, the CPU hogs are at 40% 
   CPU time and the desktop experience is smooth.

things like that - we really need to be able to see the delta.

> [...]  Funny thing though, x/gforce isn't as badly affected with a 
> kernel build.  Any build is quite noticable, but even at -j8, the 
> effect doen't seem to be (very brief test warning applies) as bad as 
> with only the two encoders running.  That seems quite odd.

likewise, how does the RSDL kernel build behavior compare to the vanilla 
scheduler's behavior? (what happens in one that doesn't happen in the 
other, etc.)

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:29                                           ` Ingo Molnar
@ 2007-03-13  9:41                                             ` Con Kolivas
  2007-03-13 10:50                                               ` Bill Huey
  0 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-13  9:41 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tuesday 13 March 2007 20:29, Ingo Molnar wrote:
> * Con Kolivas <kernel@kolivas.org> wrote:
> > Well I guess you must have missed where I asked him if he would be
> > happy if I changed +5 metrics to do whatever he wanted and he refused
> > to answer me. [...]
>
> I'd say let's keep nice levels out of this completely for now - while
> they should work _too_, it's easy because the scheduler has the 'nice'
> information. It's the basic behavior of CPU hogs that matters most.
>
> So the question is: if all tasks are on the same nice level, how does,
> in Mike's test scenario, RSDL behave relative to the current
> interactivity code?

If everything is run at nice 0? (this was not the test case but that's what 
you've asked for).

We have:
X + GForce contribute a load of 1
lame x 2 threads contribute a load of 2

In RSDL
X + GForce will get 33% cpu
lame will get 66% cpu

In mainline
X + GForce gets a fluctuating percentage somewhere between 35~45% as far as I 
can see on UP.
lame gets the rest

The only way to get the same behaviour on RSDL without hacking an 
interactivity estimator, priority boost cpu misproportionator onto it is to 
either -nice X or +nice lame.

> 	Ingo

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:39                                           ` Ingo Molnar
@ 2007-03-13 10:06                                             ` Con Kolivas
  2007-03-13 11:23                                               ` Mike Galbraith
  0 siblings, 1 reply; 91+ messages in thread
From: Con Kolivas @ 2007-03-13 10:06 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tuesday 13 March 2007 20:39, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
> > I just retested with the encoders at nice 0, and the x/gforce combo is
> > terrible. [...]
>
> ok. So nice levels had nothing to do with it - it's some other
> regression somewhere. How does the vanilla scheduler cope with the
> exactly same workload? I.e. could you describe the 'delta' difference in
> behavior - because the delta is what we are interested in mostly, the
> 'absolute' behavior alone is not sufficient. Something like:
>
>  - on scheduler foo, under this workload, the CPU hogs steal 70% CPU
>    time and the resulting desktop experience is 'choppy': mouse pointer
>    is laggy and audio skips.
>
>  - on scheduler bar, under this workload, the CPU hogs are at 40%
>    CPU time and the desktop experience is smooth.
>
> things like that - we really need to be able to see the delta.

I only find a slowdown, no choppiness, no audio stutter (it would be extremely 
hard to make audio stutter in this design without i/o starvation or something 
along those lines). The difference in cpu percentage I've already 
given in the previous email. The graphics driver does feature in this test 
case as well so others' mileage may vary. Mike said it was terrible.

> > [...]  Funny thing though, x/gforce isn't as badly affected with a
> > kernel build.  Any build is quite noticable, but even at -j8, the
> > effect doen't seem to be (very brief test warning applies) as bad as
> > with only the two encoders running.  That seems quite odd.
>
> likewise, how does the RSDL kernel build behavior compare to the vanilla
> scheduler's behavior? (what happens in one that doesnt happen in the
> other, etc.)

Kernel compiles seem similar till the jobs get above about 3, where rsdl gets 
slower but stays smooth. Audio is basically unaffected either way.

Don't forget all the rest of the cases people have posted.

-- 
-ck

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:31                                           ` [ck] " Con Kolivas
@ 2007-03-13 10:24                                             ` Xavier Bestel
  2007-03-13 23:19                                               ` Sanjoy Mahajan
  0 siblings, 1 reply; 91+ messages in thread
From: Xavier Bestel @ 2007-03-13 10:24 UTC (permalink / raw)
  To: Con Kolivas
  Cc: ck, Ingo Molnar, Andrew Morton, Mike Galbraith, Linus Torvalds,
	linux kernel mailing list

On Tue, 2007-03-13 at 20:31 +1100, Con Kolivas wrote:
> nice on my debian etch seems to choose nice +10 without arguments contrary to 
> a previous discussion that said 4 was the default. However 4 is a good value 
> to use as a base of sorts.

I don't see why. nice uses +10 by default on all Linux distros, and even
on Solaris and HP/UX. So I suspect that if Mike just used "nice lame"
instead of "nice +5 lame", he would have got what he wanted.

	Xav



^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:41                                             ` Con Kolivas
@ 2007-03-13 10:50                                               ` Bill Huey
  0 siblings, 0 replies; 91+ messages in thread
From: Bill Huey @ 2007-03-13 10:50 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, Mike Galbraith, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton, Bill Huey (hui)

On Tue, Mar 13, 2007 at 08:41:05PM +1100, Con Kolivas wrote:
> On Tuesday 13 March 2007 20:29, Ingo Molnar wrote:
> > So the question is: if all tasks are on the same nice level, how does,
> > in Mike's test scenario, RSDL behave relative to the current
> > interactivity code?
... 
> The only way to get the same behaviour on RSDL without hacking an 
> interactivity estimator, priority boost cpu misproportionator onto it is to 
> either -nice X or +nice lame.

Hello Ingo,

After talking to Con over IRC (and if I can summarize it), he's wondering if
properly nicing those tasks, as previously mentioned in user emails, would solve
this potential user-reported regression or whether something additional is needed.
It seems like folks are happy with the results once the nice tweaking is done.
This is a huge behavior change to the scheduler, after all (just thinking out loud).

bill


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 10:06                                             ` Con Kolivas
@ 2007-03-13 11:23                                               ` Mike Galbraith
  2007-03-13 11:41                                                 ` Serge Belyshev
  0 siblings, 1 reply; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13 11:23 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Ingo Molnar, linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

On Tue, 2007-03-13 at 21:06 +1100, Con Kolivas wrote:
> On Tuesday 13 March 2007 20:39, Ingo Molnar wrote:
> > * Mike Galbraith <efault@gmx.de> wrote:
> > > I just retested with the encoders at nice 0, and the x/gforce combo is
> > > terrible. [...]
> >
> > ok. So nice levels had nothing to do with it - it's some other
> > regression somewhere. How does the vanilla scheduler cope with the
> > exactly same workload? I.e. could you describe the 'delta' difference in
> > behavior - because the delta is what we are interested in mostly, the
> > 'absolute' behavior alone is not sufficient. Something like:
> >
> >  - on scheduler foo, under this workload, the CPU hogs steal 70% CPU
> >    time and the resulting desktop experience is 'choppy': mouse pointer
> >    is laggy and audio skips.
> >
> >  - on scheduler bar, under this workload, the CPU hogs are at 40%
> >    CPU time and the desktop experience is smooth.
> >
> > things like that - we really need to be able to see the delta.
> 
> I only find a slowdown, no choppiness, no audio stutter (it would be extremely 
> hard to make audio stutter in this design without i/o starvation or something 
> along those lines). The number difference in cpu percentage I've already 
> given on the previous email. The graphics driver does feature in this test 
> case as well so others' mileage may vary. Mike said it was terrible.

My first test run with lame at nice 0 was truly horrid, but has _not_
repeated, so disregard that as an anomaly.  For the most part, it is as
you say: things just get slower with load, any load.  I definitely am
seeing lurchiness which is not present in mainline.  No audio problems
with either kernel.

> > > [...]  Funny thing though, x/gforce isn't as badly affected with a
> > > kernel build.  Any build is quite noticable, but even at -j8, the
> > > effect doen't seem to be (very brief test warning applies) as bad as
> > > with only the two encoders running.  That seems quite odd.
> >
> > likewise, how does the RSDL kernel build behavior compare to the vanilla
> > scheduler's behavior? (what happens in one that doesnt happen in the
> > other, etc.)
> 
> Kernel compiles seem similar till the jobs get above about 3 where rsdl gets 
> slower but still smooth. Audio is basically unaffected either way.

It seems to be a plain linear slowdown.  The lurchiness I'm experiencing
varies in intensity, and is impossible to quantify.  I see neither
lurchiness nor slowdown in mainline through -j8.

> Don't forget all the rest of the cases people have posted.

Absolutely, all test results count.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 11:23                                               ` Mike Galbraith
@ 2007-03-13 11:41                                                 ` Serge Belyshev
  2007-03-13 11:46                                                   ` Mike Galbraith
  2007-03-13 15:36                                                   ` John Stoffel
  0 siblings, 2 replies; 91+ messages in thread
From: Serge Belyshev @ 2007-03-13 11:41 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

Mike Galbraith <efault@gmx.de> writes:

[snip]
> It seems to be a plain linear slowdown.  The lurchiness I'm experiencing
> varies in intensity, and is impossible to quantify.  I see neither
> lurchiness nor slowdown in mainline through -j8.
>
Whaa? make -j8 on mainline makes my desktop box completely useless.

Please reconsider your statement.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 11:41                                                 ` Serge Belyshev
@ 2007-03-13 11:46                                                   ` Mike Galbraith
  2007-03-13 15:36                                                   ` John Stoffel
  1 sibling, 0 replies; 91+ messages in thread
From: Mike Galbraith @ 2007-03-13 11:46 UTC (permalink / raw)
  To: Serge Belyshev
  Cc: Con Kolivas, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tue, 2007-03-13 at 14:41 +0300, Serge Belyshev wrote:
> Mike Galbraith <efault@gmx.de> writes:
> 
> [snip]
> > It seems to be a plain linear slowdown.  The lurchiness I'm experiencing
> > varies in intensity, and is impossible to quantify.  I see neither
> > lurchiness nor slowdown in mainline through -j8.
> >
> Whaa? make -j8 on mainline makes my desktop box completely useless.
> 
> Please reconsider your statement.

I'll do no such thing, and don't appreciate the insinuation.

	-Mike


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  9:33                                         ` Mike Galbraith
  2007-03-13  9:39                                           ` Ingo Molnar
@ 2007-03-13 14:17                                           ` Matt Mackall
  1 sibling, 0 replies; 91+ messages in thread
From: Matt Mackall @ 2007-03-13 14:17 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Ingo Molnar, Con Kolivas, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

On Tue, Mar 13, 2007 at 10:33:18AM +0100, Mike Galbraith wrote:
> On Tue, 2007-03-13 at 09:18 +0100, Ingo Molnar wrote:
> 
> > Con, we want RSDL to /improve/ interactivity. Having new scheduler 
> > interactivity logic that behaves /worse/ in the presence of CPU hogs, 
> > which CPU hogs are even reniced to +5, than the current interactivity 
> > code, is i think a non-starter. Could you try to fix this, please? Good 
> > interactivity in the presence of CPU hogs (be them default nice level or 
> > nice +5) is _the_ most important scheduler interactivity metric. 
> > Anything else is really secondary.
> 
> I just retested with the encoders at nice 0, and the x/gforce combo is
> terrible.  Funny thing though, x/gforce isn't as badly affected with a
> kernel build.  Any build is quite noticable, but even at -j8, the effect
> doen't seem to be (very brief test warning applies) as bad as with only
> the two encoders running.  That seems quite odd.

Is gforce calling sched_yield?

Can you try testing with some simpler loads, like these:

memload:
#!/usr/bin/python
a = "a" * 16 * 1024 * 1024
while 1:
    b = a[1:] + "b"
    a = b[1:] + "c"

execload:
#!/bin/sh
exec ./execload

forkload:
#!/bin/sh
./forkload&

pipeload:
#!/usr/bin/python
import os
pi, po = os.pipe()
if os.fork():
  while 1:
    os.write(po, "A" * 4096)
else:
  while 1:
    os.read(pi, 4096)

-- 
Mathematics is the supreme nostalgia of our time.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* RE: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13  8:18                                       ` Ingo Molnar
                                                           ` (2 preceding siblings ...)
  2007-03-13  9:33                                         ` Mike Galbraith
@ 2007-03-13 15:15                                         ` David Schwartz
  2007-03-13 17:59                                           ` Jeremy Fitzhardinge
  3 siblings, 1 reply; 91+ messages in thread
From: David Schwartz @ 2007-03-13 15:15 UTC (permalink / raw)
  To: Linux-Kernel@Vger. Kernel. Org


> * Mike Galbraith <efault@gmx.de> wrote:
>
> > [...] The situation as we speak is that you can run cpu intensive
> > tasks while watching eye-candy.  With RSDL, you can't, you feel the
> > non-interactive load instantly. [...]
>
> i have to agree with Mike that this is a material regression that cannot
> be talked around.

I don't know what else you can do when the argument is that behavior that is
wrong is what you actually want. The regression is not that the scheduler
doesn't do what it was asked to do or even that it isn't more faithful to
what it was told to do than the scheduler it replaces. The regression is
that the scheduler didn't do what Mike wanted it to do, even though he
didn't ask it to do that.

I would argue this is progression, not regression. The new scheduler is
fairer than the old one and fairness is good even though it sometimes hurts
some tasks.

> Con, we want RSDL to /improve/ interactivity.

Not when the interactivity was the result of unfairness.

> Having new scheduler
> interactivity logic that behaves /worse/ in the presence of CPU hogs,
> which CPU hogs are even reniced to +5, than the current interactivity
> code, is i think a non-starter. Could you try to fix this, please?

If you did this, it would mean that all the space between the significant
level of unfairness you want in this case and pure fairness would have to
fit in five nice levels. That just seems like poor granularity.

> Good
> interactivity in the presence of CPU hogs (be them default nice level or
> nice +5) is _the_ most important scheduler interactivity metric.
> Anything else is really secondary.

Good interactivity for tasks that aren't themselves CPU hogs. A task should
get low latency if and only if it's yielding the CPU voluntarily most of the
time. If it's not, it can only get better interactivity at the cost of
fairness, and you have to *ask* for that. (Common sense says you can't give
a task *more* CPU because it yields the CPU a lot. And how else do you
determine interactivity other than nice level?)

This scheduler will not give you greater interactivity at the cost of
fairness unless you really ask for it. I think that's a good thing, though I
do agree it might take some getting used to.

I'm not saying it is impossible to make RSDL better at handling this
particular job. I'm saying the "regression" may be the scheduler doing what
it was asked to do more faithfully than the current scheduler and the right
fix (at least in the longer term) is to ask for what you really want.

DS



^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 11:41                                                 ` Serge Belyshev
  2007-03-13 11:46                                                   ` Mike Galbraith
@ 2007-03-13 15:36                                                   ` John Stoffel
  1 sibling, 0 replies; 91+ messages in thread
From: John Stoffel @ 2007-03-13 15:36 UTC (permalink / raw)
  To: Serge Belyshev
  Cc: Mike Galbraith, Con Kolivas, Ingo Molnar,
	linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton

>>>>> "Serge" == Serge Belyshev <belyshev@depni.sinp.msu.ru> writes:

Serge> Mike Galbraith <efault@gmx.de> writes:
Serge> [snip]
>> It seems to be a plain linear slowdown.  The lurchiness I'm experiencing
>> varies in intensity, and is impossible to quantify.  I see neither
>> lurchiness nor slowdown in mainline through -j8.
>> 

Serge> Whaa? make -j8 on mainline makes my desktop box completely
Serge> useless.

I tried a make -j5 on my Dual processor PIII Xeon box.  It was pretty
slow.  Firefox was ok scrolling with keyboard, but the scrollbar was
jerky.  I also had MP3s playing at the same time and it worked just
fine.  No drops that I heard. 

This is 2.6.21-rc3 patched with RSDL.  

To me, that large a load on my system was just pushing it too hard.
Especially since firefox seems to have become a total CPU hog in its own
right lately.  Or is that because I have a Matrox G450 as my video
card and it's not handling GPU rendering in the card, but in the OS
these days?  I dunno...

I guess I need to reboot into the default scheduler and see how it
goes.  

Serge> Please reconsider your statement.

This is not a nice statement on your part.  Mike has been doing a
great job testing.  He and Con don't seem to communicate well at
times, but hey, they're still talking and working together, even with
friction.  I'm happy to see all this testing going on!


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 19:06                             ` Xavier Bestel
@ 2007-03-13 17:21                               ` Valdis.Kletnieks
  0 siblings, 0 replies; 91+ messages in thread
From: Valdis.Kletnieks @ 2007-03-13 17:21 UTC (permalink / raw)
  To: Xavier Bestel
  Cc: Con Kolivas, Mike Galbraith, Ingo Molnar,
	linux kernel mailing list, ck list, Linus Torvalds,
	Andrew Morton


On Mon, 12 Mar 2007 20:06:43 BST, Xavier Bestel said:
> On Tuesday 13 March 2007 at 05:49 +1100, Con Kolivas wrote:
> > Again I think your test is not a valid testcase. Why use two threads for your 
> > encoding with one cpu? Is that what other dedicated desktop OSs would do?
> 
> One thought occurred to me (shit happens, sometimes): as your scheduler
> is "strictly fair", won't that enable trivial DoS by just letting a
> user fork a multitude of CPU-intensive processes?

Fork bombs are the reason that 'ulimit -u' exists. I don't see this scheduler
as being significantly more DoS'able via that route than previous schedulers.
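
For reference, 'ulimit -u' maps to RLIMIT_NPROC, so the same cap can be set 
from C as well; a minimal sketch, with the limit of 256 picked arbitrarily 
for the example:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	/* cap this user at 256 processes; once the limit is hit, further
	 * fork() calls fail with EAGAIN, which is what blunts a fork bomb */
	struct rlimit rl = { .rlim_cur = 256, .rlim_max = 256 };

	if (setrlimit(RLIMIT_NPROC, &rl) != 0)
		perror("setrlimit");
	return 0;
}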


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 15:15                                         ` David Schwartz
@ 2007-03-13 17:59                                           ` Jeremy Fitzhardinge
  2007-03-13 19:58                                             ` David Schwartz
  0 siblings, 1 reply; 91+ messages in thread
From: Jeremy Fitzhardinge @ 2007-03-13 17:59 UTC (permalink / raw)
  To: davids; +Cc: Linux-Kernel@Vger. Kernel. Org

David Schwartz wrote:
> Good interactivity for tasks that aren't themselves CPU hogs. A task should
> get low latency if and only if it's yielding the CPU voluntarily most of the
> time. If it's not, it can only get better interactivity at the cost of
> fairness, and you have to *ask* for that. (Common sense says you can't give
> a task *more* CPU because it yields the CPU a lot. And how else do you
> determine interactivity other than nice level?)
>   

There's a distinction between giving it more cpu and giving it higher
priority: the important part about having high priority is getting low
latency access to the cpu when it's needed.

> This scheduler will not give you greater interactivity at the cost of
> fairness unless you really ask for it. I think that's a good thing, though I
> do agree it might take some getting used to.
>   

This really seems like the wrong approach to me.  The implication here
and in other mails is that fairness is an inherently good thing which
should obviously take preference over any other property.

It's a nice simple stance, and it's relatively easy to code up and test
to see that it's working, but it doesn't really give people what they want.

The old unix-style dynamic priority scheme was designed to give
interactive processes high priorities, by using the observation that
"interactive" means "spends a lot of time blocked waiting for input". 
That model of interactive is way too simple now, and the attempts to try
and find an equivalent heuristic have been flawed and led to - in some
cases - wildly bad behaviours.  I'm guessing the emphasis on "fairness"
is in reaction to this, which is fair enough.

But saying that the user needs to explicitly hold the scheduler's hand
and nice everything to tell it how to schedule seems to be an abdication
of duty, an admission of failure.  We can't expect users to finesse all
their processes with nice, and it would be a bad user interface to ask
them to do so. 

And if someone/distro *does* go to all the effort of managing how to get
all the processes at the right nice levels, you have this big legacy
problem where you're now stuck keeping all those nice values meaningful
as you continue to develop the scheduler.  It's bad enough to make them
do the work in the first place, but it's worse if they need to make it a
kernel version dependent function.

    J

^ permalink raw reply	[flat|nested] 91+ messages in thread

* RE: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 17:59                                           ` Jeremy Fitzhardinge
@ 2007-03-13 19:58                                             ` David Schwartz
  2007-03-13 20:10                                               ` Jeremy Fitzhardinge
  2007-03-13 20:27                                               ` Bill Huey
  0 siblings, 2 replies; 91+ messages in thread
From: David Schwartz @ 2007-03-13 19:58 UTC (permalink / raw)
  To: jeremy; +Cc: Linux-Kernel@Vger. Kernel. Org


> There's a distinction between giving it more cpu and giving it higher
> priority: the important part about having high priority is getting low
> latency access to the cpu when it's needed.

I agree. Tasks that voluntarily relinquish their timeslices should get lower
latency compared to other processes at the same static priority.

> This really seems like the wrong approach to me.  The implication here
> and in other mails is that fairness is an inherently good thing which
> should obviously take preference over any other property.

Yes, that is the implication. The alternative to fairness is arbitrary
unfairness. "Rational unfairness" is a form of fairness.

> The old unix-style dynamic priority scheme was designed to give
> interactive processes high priorities, by using the observation that
> "interactive" means "spends a lot of time blocked waiting for input".
> That model of interactive is way too simple now, and the attempts to try
> and find an equivalent heuristic have been flawed and led to - in some
> cases - wildly bad behaviours.  I'm guessing the emphasis on "fairness"
> is in reaction to this, which is fair enough.

I don't think it makes sense for the scheduler to look for some hint that
the user would prefer a task to get more CPU and try to give it more. That's
what 'nice' is for.

> But saying that the user needs to explicitly hold the scheduler's hand
> and nice everything to tell it how to schedule seems to be an abdication
> of duty, an admission of failure.  We can't expect users to finesse all
> their processes with nice, and it would be a bad user interface to ask
> them to do so.

Then you will always get cases where the scheduler does not do what the user
wants because the scheduler does not *know* what the user wants. You always
have to tell a computer what you want it to do, and the best it can do is
faithfully follow your request.

I think it's completely irrational to ask for a scheduler that automatically
gives more CPU time to CPU hogs.

> And if someone/distro *does* go to all the effort of managing how to get
> all the processes at the right nice levels, you have this big legacy
> problem where you're now stuck keeping all those nice values meaningful
> as you continue to develop the scheduler.  It's bad enough to make them
> do the work in the first place, but it's worse if they need to make it a
> kernel version dependent function.

I agree. I'm not claiming to have the perfect solution. Let's not let the
perfect be the enemy of the good though.

DS



^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 19:58                                             ` David Schwartz
@ 2007-03-13 20:10                                               ` Jeremy Fitzhardinge
  2007-03-13 20:35                                                 ` Bill Huey
  2007-03-13 20:27                                               ` Bill Huey
  1 sibling, 1 reply; 91+ messages in thread
From: Jeremy Fitzhardinge @ 2007-03-13 20:10 UTC (permalink / raw)
  To: davids; +Cc: Linux-Kernel@Vger. Kernel. Org

David Schwartz wrote:
>> There's a distinction between giving it more cpu and giving it higher
>> priority: the important part about having high priority is getting low
>> latency access to the cpu when it's needed.
>>     
>
> I agree. Tasks that voluntarily relinquish their timeslices should get lower
> latency compared to other processes at the same static priority.
>
>   
>> This really seems like the wrong approach to me.  The implication here
>> and in other mails is that fairness is an inherently good thing which
>> should obviously take preference over any other property.
>>     
>
> Yes, that is the implication. The alternative to fairness is arbitrary
> unfairness. "Rational unfairness" is a form of fairness.
>
>   
>> The old unix-style dynamic priority scheme was designed to give
>> interactive processes high priorities, by using the observation that
>> "interactive" means "spends a lot of time blocked waiting for input".
>> That model of interactive is way too simple now, and the attempts to try
>> and find an equivalent heuristic have been flawed and led to - in some
>> cases - wildly bad behaviours.  I'm guessing the emphasis on "fairness"
>> is in reaction to this, which is fair enough.
>>     
>
> I don't think it makes sense for the scheduler to look for some hint that
> the user would prefer a task to get more CPU and try to give it more. That's
> what 'nice' is for.
>
>   
>> But saying that the user needs to explicitly hold the scheduler's hand
>> and nice everything to tell it how to schedule seems to be an abdication
>> of duty, an admission of failure.  We can't expect users to finesse all
>> their processes with nice, and it would be a bad user interface to ask
>> them to do so.
>>     
>
> Then you will always get cases where the scheduler does not do what the user
> wants because the scheduler does not *know* what the user wants. You always
> have to tell a computer what you want it to do, and the best it can do is
> faithfully follow your request.
>   

Hm, well.  The general preference has been for the kernel to do a
good-enough job on getting the common cases right without tuning, and
then only add knobs for the really tricky cases it can't do well.  But
the impression I'm getting here is that you often get sucky behaviours
without tuning.

> I think it's completely irrational to ask for a scheduler that automatically
> gives more CPU time to CPU hogs.
>   

Well, it doesn't have to.  It could give good low latency with short
timeslices to things which appear to be interactive.  If the interactive
program doesn't make good use of its low latency, then it will suck. 
But that's largely independent of how much overall CPU you give it.

> I agree. I'm not claiming to have the perfect solution. Let's not let the
> perfect be the enemy of the good though.
>   

For all its faults, the current scheduler mostly does a good job without
much tuning - I normally only use "nice" to run cpu-bound things without
jacking the cpu speed up.  Certainly my normal interactive use of
compiz vs make -j4 on a dual-core generally gets pretty good
results.  I plan on testing the new scheduler soon though.

    J

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 19:58                                             ` David Schwartz
  2007-03-13 20:10                                               ` Jeremy Fitzhardinge
@ 2007-03-13 20:27                                               ` Bill Huey
  1 sibling, 0 replies; 91+ messages in thread
From: Bill Huey @ 2007-03-13 20:27 UTC (permalink / raw)
  To: David Schwartz
  Cc: jeremy, Linux-Kernel@Vger. Kernel. Org, Bill Huey (hui), Con Kolivas

On Tue, Mar 13, 2007 at 12:58:01PM -0700, David Schwartz wrote:
> > But saying that the user needs to explicitly hold the scheduler's hand
> > and nice everything to tell it how to schedule seems to be an abdication
> > of duty, an admission of failure.  We can't expect users to finesse all
> > their processes with nice, and it would be a bad user interface to ask
> > them to do so.
> 
> Then you will always get cases where the scheduler does not do what the user
> wants because the scheduler does not *know* what the user wants. You always
> have to tell a computer what you want it to do, and the best it can do is
> faithfully follow your request.
> 
> I think it's completely irrational to ask for a scheduler that automatically
> gives more CPU time to CPU hogs.

SGI machines had an interactive term in their scheduler as well as a
traditional nice priority. It might be useful for Con to consider
this as an extension for problematic (badly hacked) processes like X.

Nice as a control mechanism is rather coarse, yet overly strict, because of
the sophistication of his scheduler. Having an additional term (control knob)
would be nice for a scheduler that (correct me if I'm wrong, Con):

1) has rudimentary bandwidth control for a group of runnable processes
2) has a basic deadline mechanism

The "nice" term is only an indirect way of controlling his scheduler and
think and this kind of imprecise tweeking being done with various apps is an
indicator of how lacking it is as a control term in the scheduler. It would
be good to have some kind of coherent and direct control over the knobs that
are (1) and (2).

Schedulers like this have superior control over these properties and they
should be fully exploited with terms in addition to "nice".

Item (1) is subject to a static "weight" multiplication in relation to other
runnable tasks. It also might be useful to make a part of that term a bit
dynamic to get some kind of interactivity control back. It's a matter of
testing, tweaking, etc., and it is not easy for apps that don't have a
direct thread context to control, like a thread-unaware X system.
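
To make that a bit more concrete, here is a purely hypothetical sketch of
what such an interface could look like. Nothing like this struct or the
sched_sethint() call exists in RSDL or mainline; the names are invented
just to illustrate exposing (1) and (2) as explicit per-task knobs rather
than inferring everything from nice:

/* Hypothetical only: illustrates direct control terms beyond nice. */
struct sched_hint {
        int nice;               /* existing coarse priority knob        */
        int cpu_weight;         /* (1) bandwidth share vs. peer tasks   */
        int deadline_ms;        /* (2) latency target in ms, 0 = none   */
};

/* Imagined usage, e.g. for an X server that wants low latency but no
 * extra long-term bandwidth:
 *
 *      struct sched_hint hint = { .nice = 0, .cpu_weight = 100,
 *                                 .deadline_ms = 10 };
 *      sched_sethint(getpid(), &hint);
 */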

> > And if someone/distro *does* go to all the effort of managing how to get
> > all the processes at the right nice levels, you have this big legacy
> > problem where you're now stuck keeping all those nice values meaningful
> > as you continue to develop the scheduler.  It's bad enough to make them
> > do the work in the first place, but it's worse if they need to make it a
> > kernel version dependent function.
> 
> I agree. I'm not claiming to have the perfect solution. Let's not let the
> perfect be the enemy of the good though.

I hope this was useful.

bill


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 20:10                                               ` Jeremy Fitzhardinge
@ 2007-03-13 20:35                                                 ` Bill Huey
  0 siblings, 0 replies; 91+ messages in thread
From: Bill Huey @ 2007-03-13 20:35 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: davids, Linux-Kernel@Vger. Kernel. Org, Con Kolivas, Bill Huey (hui)

On Tue, Mar 13, 2007 at 01:10:40PM -0700, Jeremy Fitzhardinge wrote:
> David Schwartz wrote:
> Hm, well.  The general preference has been for the kernel to do a
> good-enough job on getting the common cases right without tuning, and
> then only add knobs for the really tricky cases it can't do well.  But
> the impression I'm getting here is that you often get sucky behaviours
> without tuning.

Well, you get strict behaviors as expected for this scheduler. 

> > I think it's completely irrational to ask for a scheduler that automatically
> > gives more CPU time to CPU hogs.
> >   
> 
> Well, it doesn't have to.  It could give good low latency with short
> timeslices to things which appear to be interactive.  If the interactive
> program doesn't make good use of its low latency, then it will suck. 
> But that's largely independent of how much overall CPU you give it.

This is way beyond what SCHED_OTHER should do. It can't predict the universe.
Much of the interactivity estimator borders on magic. It just happens to
"be a good fit" for hacky apps as well, almost by accident.

> > I agree. I'm not claiming to have the perfect solution. Let's not let the
> > perfect be the enemy of the good though.
> 
> For all its faults, the current scheduler mostly does a good job without
> much tuning - I normally only use "nice" to run cpu-bound things without
> jacking the cpu speed up.  Certainly my normal interactive use of
> compiz vs make -j4 on a dual-core generally gets pretty good
> results.  I plan on testing the new scheduler soon though.

We can do MUCH better in the long run with something like Con's scheduler.
His approach shouldn't be dismissed because it's running into a relatively
few minor snags, largely the fault of scheduling opaque applications. It's
precise enough that it can also be loosened up a bit with additional
control terms (previous email).

It might be good to think about that a bit to see if a scheme like this can
be made more adaptable to the environment it serves. You'd then have both
precisely bounded control over CPU usage and enough flexibility for the
bursty needs of certain apps.

bill


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [ck] Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-13 10:24                                             ` Xavier Bestel
@ 2007-03-13 23:19                                               ` Sanjoy Mahajan
  0 siblings, 0 replies; 91+ messages in thread
From: Sanjoy Mahajan @ 2007-03-13 23:19 UTC (permalink / raw)
  To: Xavier Bestel
  Cc: Con Kolivas, ck, Ingo Molnar, Andrew Morton, Mike Galbraith,
	Linus Torvalds, linux kernel mailing list

> a previous discussion that said 4 was the default...I don't see
> why. nice uses +10 by default on all linux distros...So I suspect
> that if Mike just used "nice lame" instead of "nice +5 lame", he
> would have got what he wanted.

tcsh, and probably csh, has a builtin 'nice' with default +4.  So

  tcsh% nice ps -l

will show a process with nice +4.  If you tell it not to use the builtin,

  tcsh% \nice ps -l

then it uses /usr/bin/nice and you get +10.  bash doesn't have a nice
builtin, so it always uses /usr/bin/nice and you get +10 by default.
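
If anyone wants to double-check which default they actually got, this tiny
C program prints the nice value it inherited (just a convenience; 'ps -l'
shows the same thing).  Run it under the builtin nice from tcsh and under
/usr/bin/nice from bash to see the +4 vs +10 difference described above:

#include <errno.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
        /* getpriority() can legitimately return -1, so clear errno first. */
        errno = 0;
        int prio = getpriority(PRIO_PROCESS, 0);

        if (prio == -1 && errno != 0) {
                perror("getpriority");
                return 1;
        }
        printf("pid %d is running at nice %d\n", (int)getpid(), prio);
        return 0;
}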

-Sanjoy

`Not all those who wander are lost.' (J.R.R. Tolkien)

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 16:38                   ` Kasper Sandberg
@ 2007-03-14  2:25                     ` Valdis.Kletnieks
  2007-03-14  3:25                       ` Gabriel C
  0 siblings, 1 reply; 91+ messages in thread
From: Valdis.Kletnieks @ 2007-03-14  2:25 UTC (permalink / raw)
  To: Kasper Sandberg
  Cc: Con Kolivas, Xavier Bestel, Mike Galbraith, Ingo Molnar,
	linux kernel mailing list, ck list, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 258 bytes --]

On Mon, 12 Mar 2007 17:38:38 BST, Kasper Sandberg said:
> with latest xorg, xlib will be using xcb internally,

Out of curiosity, when is this "latest" Xorg going to escape to distros,
and is it far enough along that beta testers can gather usable numbers?


[-- Attachment #2: Type: application/pgp-signature, Size: 226 bytes --]

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-14  2:25                     ` Valdis.Kletnieks
@ 2007-03-14  3:25                       ` Gabriel C
  2007-03-14  9:44                         ` Xavier Bestel
  0 siblings, 1 reply; 91+ messages in thread
From: Gabriel C @ 2007-03-14  3:25 UTC (permalink / raw)
  To: Valdis.Kletnieks
  Cc: Kasper Sandberg, Con Kolivas, Xavier Bestel, Mike Galbraith,
	Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

Valdis.Kletnieks@vt.edu wrote:
> On Mon, 12 Mar 2007 17:38:38 BST, Kasper Sandberg said:
>   
>> with latest xorg, xlib will be using xcb internally,
>>     
>
> Out of curiosity, when is this "latest" Xorg going to escape to distros,
>   

It already has .. Xorg 7.2+ ships libx11 built with xcb enabled..


> and is it far enough along that beta testers can gather usable numbers?
>
>   


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-14  3:25                       ` Gabriel C
@ 2007-03-14  9:44                         ` Xavier Bestel
  0 siblings, 0 replies; 91+ messages in thread
From: Xavier Bestel @ 2007-03-14  9:44 UTC (permalink / raw)
  To: Gabriel C
  Cc: Valdis.Kletnieks, Kasper Sandberg, Con Kolivas, Mike Galbraith,
	Ingo Molnar, linux kernel mailing list, ck list, Andrew Morton

On Wed, 2007-03-14 at 04:25 +0100, Gabriel C wrote:
> Valdis.Kletnieks@vt.edu wrote:
> > On Mon, 12 Mar 2007 17:38:38 BST, Kasper Sandberg said:
> >   
> >> with latest xorg, xlib will be using xcb internally,
> >>     
> >
> > Out of curiosity, when is this "latest" Xorg going to escape to distros,
> >   
> 
> Already is .. Xorg 7.2+ libx11 build with xcb enabled..

I think the true improvement will come when toolkits (GTK+ & Qt) are
ported to xcb.

	Xav



^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
  2007-03-12 22:51                                   ` Con Kolivas
  2007-03-13  5:10                                     ` Mike Galbraith
@ 2007-03-16 16:42                                     ` Pavel Machek
  1 sibling, 0 replies; 91+ messages in thread
From: Pavel Machek @ 2007-03-16 16:42 UTC (permalink / raw)
  To: Con Kolivas
  Cc: Mike Galbraith, Ingo Molnar, linux kernel mailing list, ck list,
	Linus Torvalds, Andrew Morton

Hi!

> > > > Killing the known corner case starvation scenarios is wonderful, but
> > > > let's not just pretend that interactive tasks don't have any special
> > > > requirements.
> > >
> > > Now you're really making a stretch of things. Where on earth did I say
> > > that interactive tasks don't have special requirements? It's a
> > > fundamental feature of this scheduler that I go to great pains to get
> > > them as low latency as possible and their fair share of cpu despite
> > > having a completely fair cpu distribution.
> >
> > As soon as your cpu is fully utilized, fairness loses or interactivity
> > loses.  Pick one.
> 
> That's not true unless you refuse to prioritise your tasks accordingly.
> Let's take this discussion in a different direction. You already nice
> your lame processes. Why? You already have the concept that you are
> prioritising things to normal or background tasks. You say so yourself
> that lame is a background task. Stating the bleedingly obvious, the unix
> way of prioritising things is via nice. You already do that. So moving
> on from that...
> 
> Your test case you ask "how can I maximise cpu usage". Well you know the
> answer already. You run two threads. I won't dispute that.
> 
> The debate seems to be centered on whether two tasks that are niced +5
> or to a higher value is background. In my opinion, nice 5 is not
> background, but relatively less cpu. You already are savvy enough to be
> using two threads and nicing them. All I ask you to do when using RSDL
> is to change your expectations slightly and your settings from nice 5
> to nice 10 or 15 or even 19. Why is that so offensive to you? nice 5 is
> 75% the cpu of nice 0. nice 10 is 50%, nice 15 is 25%, nice

Hmm, I'd certainly expect nice to be stronger. nice 5 should be 50% or
less of nice 0... You'll not even notice nice 2 if that is 90%...

And I guess it nicely solves the problem in this thread, too...
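
For what it's worth, the numbers Con quotes (nice 0 = 100%, 5 = 75%,
10 = 50%, 15 = 25%) look like a simple linear step of about 5% per nice
level relative to a competing nice 0 task. A back-of-the-envelope sketch
of that reading (my interpretation of the email, not code taken from the
RSDL patch):

#include <stdio.h>

/* Guess at the mapping implied by the quoted numbers: each nice level
 * costs roughly 5% of the cpu a competing nice-0 task would get.  This
 * is an interpretation of the email, not the actual RSDL code. */
static int share_vs_nice0(int nice)
{
        return (20 - nice) * 5;         /* percent, for nice 0..19 */
}

int main(void)
{
        int n;

        for (n = 0; n <= 19; n++)
                printf("nice %2d -> ~%3d%% of a nice 0 task\n",
                       n, share_vs_nice0(n));
        return 0;
}
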
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
@ 2007-03-12 19:53 Al Boldi
  0 siblings, 0 replies; 91+ messages in thread
From: Al Boldi @ 2007-03-12 19:53 UTC (permalink / raw)
  To: linux-kernel

Xavier Bestel wrote:
> On Tuesday 13 March 2007 at 05:49 +1100, Con Kolivas wrote:
> > Again I think your test is not a valid testcase. Why use two threads for
> > your encoding with one cpu? Is that what other dedicated desktop OSs
> > would do?
>
> as your scheduler
> is "strictly fair", won't that enable trivial DoS by just letting an
> user fork a multitude of CPU-intensive processes ?

I don't think we have to worry about DoS with this scheduler.
Note the load average; it happily wraps.

top - 22:34:02 up 8 min,  0 users,  load average: 615.32, 746.30, 929.15
Tasks: 3379 total, 3336 running,  43 sleeping,   0 stopped,   0 zombie
Cpu(s):   8.8% user,  21.1% system,  10.0% nice,  56.5% idle,   3.6% IO-wait
Mem:    499480k total,   414640k used,    84840k free,     1956k buffers
Swap:  1020088k total,        0k used,  1020088k free,     8296k cached

  PID  PR  NI  VIRT  RES  SHR SWAP nFLT nDRT WCHAN     S %CPU    TIME+  Command 
  614  39   0  3668 2712  720  956    5    0 rest_init R  1.9   0:09.70 top     
  845  23   0  3056 1780  932 1276    0    0 wait      S  0.0   0:01.96 sh      
    1  20   0  1440  504  444  936   14    0 rest_init S  0.0   0:00.76 init    
 4180  28   0  3664 2640  652 1024    0    0 rest_init R  3.5   0:00.40 top     
  863  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.30 ping    
  929  39  19  1584  488  412 1096    0    0 rest_init R  0.2   0:00.29 ping    
  871  39  19  1584  488  412 1096    0    0 rest_init R  0.2   0:00.28 ping    
  904  39  19  1584  488  412 1096    0    0 rest_init R  0.2   0:00.28 ping    
  946  39  19  1584  488  412 1096    0    0 rest_init R  0.2   0:00.28 ping    
  947  39  19  1584  488  412 1096    0    0 rest_init R  0.2   0:00.28 ping    
  972  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.28 ping    
  862  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  881  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  901  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  915  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  923  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  926  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  958  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  967  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  987  39  19  1584  488  412 1096    0    0 rest_init R  0.0   0:00.27 ping    
  994  39  19  1584  488  412 1096    0    0 rest_init R  0.2   0:00.27 ping    


Thanks!

--
Al


^ permalink raw reply	[flat|nested] 91+ messages in thread

end of thread, other threads:[~2007-03-16 16:43 UTC | newest]

Thread overview: 91+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-03-11  3:57 [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2 Con Kolivas
2007-03-11 11:39 ` Mike Galbraith
2007-03-11 11:48   ` Con Kolivas
2007-03-11 12:08     ` Mike Galbraith
2007-03-11 12:10   ` Ingo Molnar
2007-03-11 12:20     ` Mike Galbraith
2007-03-11 21:18       ` Mike Galbraith
2007-03-12  7:22     ` Mike Galbraith
2007-03-12  7:48       ` Con Kolivas
2007-03-12  8:29         ` Con Kolivas
2007-03-12  8:55           ` Mike Galbraith
2007-03-12  9:22             ` Con Kolivas
2007-03-12  9:38               ` Mike Galbraith
2007-03-12 10:27                 ` Con Kolivas
2007-03-12 10:57                   ` Mike Galbraith
2007-03-12 11:08                     ` Ingo Molnar
2007-03-12 11:23                       ` Con Kolivas
2007-03-12 13:48                         ` Theodore Tso
2007-03-12 18:09                           ` Con Kolivas
2007-03-12 14:34                         ` Mike Galbraith
2007-03-12 15:26                           ` Linus Torvalds
2007-03-12 18:10                             ` Con Kolivas
2007-03-12 19:36                             ` Peter Zijlstra
2007-03-12 20:36                             ` Mike Galbraith
2007-03-13  4:17                             ` Kyle Moffett
2007-03-13  8:09                             ` Ingo Molnar
2007-03-12 18:49                           ` Con Kolivas
2007-03-12 19:06                             ` Xavier Bestel
2007-03-13 17:21                               ` Valdis.Kletnieks
2007-03-12 20:11                             ` Mike Galbraith
2007-03-12 20:38                               ` Con Kolivas
2007-03-12 20:45                                 ` Mike Galbraith
2007-03-12 22:51                                   ` Con Kolivas
2007-03-13  5:10                                     ` Mike Galbraith
2007-03-13  5:53                                       ` Con Kolivas
2007-03-13  6:08                                         ` [ck] " Rodney Gordon II
2007-03-13  6:17                                         ` Mike Galbraith
2007-03-13  7:53                                         ` Mike Galbraith
2007-03-13  8:22                                         ` Ingo Molnar
2007-03-13  8:18                                       ` Ingo Molnar
2007-03-13  8:22                                         ` Mike Galbraith
2007-03-13  9:21                                         ` Con Kolivas
2007-03-13  9:29                                           ` Ingo Molnar
2007-03-13  9:41                                             ` Con Kolivas
2007-03-13 10:50                                               ` Bill Huey
2007-03-13  9:31                                           ` [ck] " Con Kolivas
2007-03-13 10:24                                             ` Xavier Bestel
2007-03-13 23:19                                               ` Sanjoy Mahajan
2007-03-13  9:33                                         ` Mike Galbraith
2007-03-13  9:39                                           ` Ingo Molnar
2007-03-13 10:06                                             ` Con Kolivas
2007-03-13 11:23                                               ` Mike Galbraith
2007-03-13 11:41                                                 ` Serge Belyshev
2007-03-13 11:46                                                   ` Mike Galbraith
2007-03-13 15:36                                                   ` John Stoffel
2007-03-13 14:17                                           ` Matt Mackall
2007-03-13 15:15                                         ` David Schwartz
2007-03-13 17:59                                           ` Jeremy Fitzhardinge
2007-03-13 19:58                                             ` David Schwartz
2007-03-13 20:10                                               ` Jeremy Fitzhardinge
2007-03-13 20:35                                                 ` Bill Huey
2007-03-13 20:27                                               ` Bill Huey
2007-03-16 16:42                                     ` Pavel Machek
2007-03-12 23:43                                   ` David Lang
2007-03-13  2:23                                     ` Lee Revell
2007-03-13  6:00                                       ` David Lang
2007-03-12 21:34                                 ` [ck] " jos poortvliet
2007-03-12 21:38                                 ` michael chang
2007-03-13  0:09                                   ` Thibaut VARENE
2007-03-13  6:08                                   ` Mike Galbraith
2007-03-13  6:16                                     ` Con Kolivas
2007-03-13  6:30                                       ` Mike Galbraith
2007-03-12 20:42                               ` Peter Zijlstra
2007-03-12 21:05                               ` Serge Belyshev
2007-03-12 21:41                                 ` Mike Galbraith
2007-03-12 11:25                       ` Mike Galbraith
2007-03-12  9:38               ` Xavier Bestel
2007-03-12 10:34                 ` Con Kolivas
2007-03-12 16:38                   ` Kasper Sandberg
2007-03-14  2:25                     ` Valdis.Kletnieks
2007-03-14  3:25                       ` Gabriel C
2007-03-14  9:44                         ` Xavier Bestel
2007-03-12  8:44         ` Mike Galbraith
2007-03-11 14:32   ` Gene Heskett
2007-03-12  6:58     ` Radoslaw Szkodzinski
2007-03-12 11:16       ` Gene Heskett
2007-03-12 11:49         ` Gene Heskett
2007-03-12 11:58           ` Con Kolivas
2007-03-12 16:38             ` Gene Heskett
2007-03-12 18:34               ` Gene Heskett
2007-03-12 19:53 Al Boldi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).