linux-kernel.vger.kernel.org archive mirror
* [SHED] Questions.
@ 2003-08-31 10:07 Ian Kumlien
  2003-08-31 10:17 ` Nick Piggin
  2003-08-31 18:53 ` Robert Love
  0 siblings, 2 replies; 36+ messages in thread
From: Ian Kumlien @ 2003-08-31 10:07 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ian Kumlien

[-- Attachment #1: Type: text/plain, Size: 1132 bytes --]

Hi, 

I'll risk sounding like a moron again =)

I still wonder about the counter-intuitive quantum values for
processes... (or timeslices, if you will)

Why not use small quantum values for high-priority processes and long
ones for low-priority processes, since the high-priority processes will
preempt the low-priority ones anyway? And for a server working under
load with only a few processes (assuming they are all low priority),
this would lessen the context switches.

And a system with "interactive load" as well would, as I said, preempt
the lower-priority processes. But this could also cause a problem...
IMHO there should be a "min quantum value" so that processes can't
preempt a process that was just scheduled (I don't know if this is
implemented already, though).

IMHO this would also make it easy to get the right priority for
high-priority processes, since the quantum value is smaller and if you
use it all up you get demoted.
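To make the idea concrete, the inverse mapping I have in mind could look
something like this toy sketch (MIN_QUANT, MAX_QUANT, and all the values
here are invented for illustration, not from any patch):

```c
/* Hypothetical inverse mapping: higher priority (lower numeric value,
 * 0 = highest of 40 levels) gets a SHORTER quantum, lower priority a
 * LONGER one.  Quanta in milliseconds; all constants invented. */
#define MIN_QUANT    10
#define MAX_QUANT   200
#define PRIO_LEVELS  40

static int proposed_quantum(int prio)   /* prio: 0 (highest) .. 39 (lowest) */
{
        return MIN_QUANT +
               (MAX_QUANT - MIN_QUANT) * prio / (PRIO_LEVELS - 1);
}
```

So the highest priority would get the shortest slice and be demoted
quickly if it ever used all of it.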

Anyways, I've been wondering about the inverted values in the scheduler,
and for a mixed load/server load I don't see the benefit... =P

PS. Do not forget to CC me since I'm not on this list...
DS.

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 10:07 [SHED] Questions Ian Kumlien
@ 2003-08-31 10:17 ` Nick Piggin
  2003-08-31 10:24   ` Ian Kumlien
  2003-08-31 18:53 ` Robert Love
  1 sibling, 1 reply; 36+ messages in thread
From: Nick Piggin @ 2003-08-31 10:17 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: linux-kernel



Ian Kumlien wrote:

>Hi, 
>
>I'll risk sounding like a moron again =)
>
>I still wonder about the counter-intuitive quantum values for
>processes... (or timeslices, if you will)
>
>Why not use small quantum values for high-priority processes and long
>ones for low-priority processes, since the high-priority processes will
>preempt the low-priority ones anyway? And for a server working under
>load with only a few processes (assuming they are all low priority),
>this would lessen the context switches.
>
>And a system with "interactive load" as well would, as I said, preempt
>the lower-priority processes. But this could also cause a problem...
>IMHO there should be a "min quantum value" so that processes can't
>preempt a process that was just scheduled (I don't know if this is
>implemented already, though).
>
>IMHO this would also make it easy to get the right priority for
>high-priority processes, since the quantum value is smaller and if you
>use it all up you get demoted.
>
>Anyways, I've been wondering about the inverted values in the scheduler,
>and for a mixed load/server load I don't see the benefit... =P
>
>PS. Do not forget to CC me since I'm not on this list...
>DS.
>

Search for "Nick's scheduler policy" ;)



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 10:17 ` Nick Piggin
@ 2003-08-31 10:24   ` Ian Kumlien
  2003-08-31 10:41     ` Nick Piggin
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Kumlien @ 2003-08-31 10:24 UTC (permalink / raw)
  To: Nick Piggin; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 351 bytes --]

On Sun, 2003-08-31 at 12:17, Nick Piggin wrote:
> Search for "Nick's scheduler policy" ;)

Heh, yeah, I have been following your and Con's work via
marc.theaimsgroup.com. =)

But wouldn't Ingo's off-the-shelf stuff work better with the quantum
values like that?

And is the preempt min quantum in there?

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 10:24   ` Ian Kumlien
@ 2003-08-31 10:41     ` Nick Piggin
  2003-08-31 10:46       ` Nick Piggin
       [not found]       ` <1062326980.9959.65.camel@big.pomac.com>
  0 siblings, 2 replies; 36+ messages in thread
From: Nick Piggin @ 2003-08-31 10:41 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: linux-kernel



Ian Kumlien wrote:

>On Sun, 2003-08-31 at 12:17, Nick Piggin wrote:
>
>>Search for "Nick's scheduler policy" ;)
>>
>
>Heh, yeah, I have been following your and Con's work via
>marc.theaimsgroup.com. =)
>

Well, my patch does almost exactly what you describe.

>
>But wouldn't Ingo's off-the-shelf stuff work better with the quantum
>values like that?
>

That means more complexity and behaviour that is more difficult
to trace. The interactivity stuff is already a monster to tune.

>
>And is the preempt min quantum in there?
>

No. If you do that, you'll either break the priority concept very
badly, or you'll break it a little bit and turn the scheduler into
an O(n) one.



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 10:41     ` Nick Piggin
@ 2003-08-31 10:46       ` Nick Piggin
       [not found]       ` <1062326980.9959.65.camel@big.pomac.com>
  1 sibling, 0 replies; 36+ messages in thread
From: Nick Piggin @ 2003-08-31 10:46 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Ian Kumlien, linux-kernel



Nick Piggin wrote:

>
>
> Ian Kumlien wrote:
>
>
>>
>> And is the preempt min quantum in there?
>>
>
> No. If you do that, you'll either break the priority concept very
> badly, or you'll break it a little bit and turn the scheduler into
> an O(n) one.


Well, I guess you could just break it a little bit without it being O(n).


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
       [not found]         ` <3F51D4A4.4090501@cyberone.com.au>
@ 2003-08-31 11:08           ` Ian Kumlien
  2003-08-31 11:31             ` Nick Piggin
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Kumlien @ 2003-08-31 11:08 UTC (permalink / raw)
  To: Nick Piggin; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2519 bytes --]

[Forgot to CC LKML last time, so I didn't remove the old text]

On Sun, 2003-08-31 at 12:57, Nick Piggin wrote:
> Ian Kumlien wrote:
> >On Sun, 2003-08-31 at 12:41, Nick Piggin wrote:
> >>Ian Kumlien wrote:
> >>>On Sun, 2003-08-31 at 12:17, Nick Piggin wrote:

> >>>>Search for "Nick's scheduler policy" ;)

> >>>Heh, yeah, i have been following your and con's work via
> >>>marc.theaimsgroup.com. =)

> >>Well, my patch does almost exactly what you describe.

> >Yes, I know =)... You and Con should team up =)

> Heh, well we discuss stuff sometimes, but we disagree on things.
> Which is a good thing because now our eggs are in two baskets.

Yes, but sometimes it feels like a merger would be better... As long as
the proper quantum usage prevails =)

> >>>But wouldn't ingos off the shelf stuff work better with the quantum
> >>>values like that?

> >>That means more complexity and behaviour that is more difficult
> >>to trace. The interactivity stuff is already a monster to tune.

> >Oh, humm, how much did you change btw? =))

> Yeah quite a lot. Lots included removing the interactivity stuff.

Humm, yeah, that should work automatically with the "used the full
quantum" thing, if that's still in, that is... =)

> >>>And is the preempt min quantum in there?

> >>No. If you do that, you'll either break the priority concept very
> >>badly, or you'll break it a little bit and turn the scheduler into
> >>an O(n) one.

> >>Well I guess you could just break it a little bit without it being
> >>O(n)

> >Well, I just thought since each context switch/reschedule is costly...
> >Having something that prevents a freshly scheduled process from being
> >forced off before it can actually do something would be useful.

> Yeah it is, but the process will still take a lot of the penalty,
> and if it is using a lot of CPU in context switching, then it will
> get a lower priority anyway. Possibly there could be a very small
> additional penalty per context switch, but so far it hasn't been
> a big problem AFAIK.

Well, my idea was more... The highest priority gets MIN_QUANT and a
preempt can't happen faster than MIN_QUANT or so..
If I remember correctly, 2.6 spends much more time doing the actual
context switches (not time / context switch, but the amount during this
period). The new 1000 HZ thingy doesn't have to have that effect...

And since too many context switches are inefficient IMHO, some standoffs
would be good =)

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 11:08           ` Ian Kumlien
@ 2003-08-31 11:31             ` Nick Piggin
  2003-08-31 11:43               ` Ian Kumlien
  0 siblings, 1 reply; 36+ messages in thread
From: Nick Piggin @ 2003-08-31 11:31 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: linux-kernel



Ian Kumlien wrote:

>[Forgot to CC LKML last time, so I didn't remove the old text]
>
>On Sun, 2003-08-31 at 12:57, Nick Piggin wrote:
>
>
>>Heh, well we discuss stuff sometimes, but we disagree on things.
>>Which is a good thing because now our eggs are in two baskets.
>>
>
>Yes, but sometimes it feels like a merger would be better... As long as
>the proper quantum usage prevails =)
>

Nope. They're going in different directions. We'd slow each other down.

>
>>Yeah quite a lot. Lots included removing the interactivity stuff.
>>
>
>Humm, yeah, that should work automatically with the "used the full
>quantum" thing, if that's still in, that is... =)
>

You've lost me here.
My stuff is the opposite of what the interactivity stuff is trying
to do. The interactivity stuff _does_ kind of implement variable
timeslices in the form of requeueing stuff. I think it would be a
nightmare for them to put my variable timeslices on top of that and
then get it all to work properly.

>
>>Yeah it is, but the process will still take a lot of the penalty,
>>and if it is using a lot of CPU in context switching, then it will
>>get a lower priority anyway. Possibly there could be a very small
>>additional penalty per context switch, but so far it hasn't been
>>a big problem AFAIK.
>>
>
>Well, my idea was more... The highest priority gets MIN_QUANT and a
>preempt can't happen faster than MIN_QUANT or so..
>

My idea is to try to make it as simple as possible, and no
simpler (as a great man put it!). So more is less if you
know what I mean.

I think this is going against how the scheduler (and UNIX
schedulers in general) have generally behaved. It's very likely
that you'd be better off fixing your app / other broken bit
of kernel code, though.

I don't know... maybe...

>
>If i remember correctly, 2.6 spends much more time doing the actual
>context switches (not time / context switch but amount during this
>period). The new 1000 HZ thingy doesn't have to have that effect...
>
>And since too many context switches are inefficient IMHO, some standoffs
>would be good =)
>

I'm not sure. I think the 1000HZ thing is mainly from timer interrupts.
The scheduler should be pretty well agnostic to the 100->1000 change,
other than having higher resolution. Increased context switches might
indicate something is not being scaled with HZ properly though.
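The kind of scaling bug I mean can be shown in two lines (names here are
illustrative, not a pointer at any actual culprit): any constant that is
really a wall-clock time but is stored in timer ticks has to be derived
from HZ.

```c
/* A wall-clock constant expressed in timer ticks must scale with HZ.
 * Written this way it stays ~10 ms at both HZ=100 and HZ=1000; a
 * hard-coded "10 ticks" would silently shrink from 100 ms to 10 ms
 * when HZ went from 100 to 1000.  Macro name is invented. */
#define TIMESLICE_TICKS(hz)     (10 * (hz) / 1000)
```

Anything that skipped the HZ division would effectively get a 10x
shorter interval on the new kernels, which would look exactly like
"more context switches".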

Yes context switches are inefficient. The tradeoff is vs scheduling
latency and there is no way around that.



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 11:31             ` Nick Piggin
@ 2003-08-31 11:43               ` Ian Kumlien
  0 siblings, 0 replies; 36+ messages in thread
From: Ian Kumlien @ 2003-08-31 11:43 UTC (permalink / raw)
  To: Nick Piggin; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 3579 bytes --]

On Sun, 2003-08-31 at 13:31, Nick Piggin wrote:
> Ian Kumlien wrote:
> 
> >[Forgot to CC LKML last time, so i didn't remove old text ]
> >
> >On Sun, 2003-08-31 at 12:57, Nick Piggin wrote:
> >
> >
> >>Heh, well we discuss stuff sometimes, but we disagree on things.
> >>Which is a good thing because now our eggs are in two baskets.
> >>
> >
> >Yes, but sometimes it feels like a merger would be better... As long as
> >the proper quantum usage prevails =)

> Nope. They're going in different directions. We'd slow each other down.

Okis.

> >>Yeah quite a lot. Lots included removing the interactivity stuff.
> >>
> >
> >Humm, yeah, that should work automatically with the "used the full
> >quantum" thing, if that's still in, that is... =)
> >
> 
> You've lost me here.
> My stuff is the opposite of what the interactivity stuff is trying
> to do. The interactivity stuff _does_ kind of implement variable
> timeslices in the form of re queueing stuff. I think it would be a
> nightmare for them to put my variable timeslices on top of that and
> then get it to all work properly.

Well, I don't know how your patch works (I forget =))... But AFAIR
Ingo's interactivity patches were about the amount of the quantum that
was used. And combining that with high = small and low = large would
automatically balance the priorities accordingly.

> >>Yeah it is, but the process will still take a lot of the penalty,
> >>and if it is using a lot of CPU in context switching, then it will
> >>get a lower priority anyway. Possibly there could be a very small
> >>additional penalty per context switch, but so far it hasn't been
> >>a big problem AFAIK.
> >>
> >
> >Well, my idea was more... The highest priority gets MIN_QUANT and a
> >preempt can't happen faster than MIN_QUANT or so..
> >
> 
> My idea is to try to make it as simple as possible, and no
> simpler (as a great man put it!). So more is less if you
> know what I mean.

Yup =)

> I think this is going against how the scheduler (and UNIX
> schedulers in general) have generally behaved. It's very likely
> that you'd be better off fixing your app / other broken bit
> of kernel code, though.
> 
> I don't know... maybe...

Humm, I thought more in the direction of:
Preempt prior to MIN_QUANT being used -> put it on the runqueue as the
next process to be scheduled, change the running task's timeslice ->
continue with the current task.

(Make the current task's timeslice appear as used.)
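A minimal sketch of that stand-off check (all names here -- MIN_QUANT,
the task struct, the policy itself -- are invented for illustration,
not from any actual patch):

```c
#include <stdbool.h>

#define MIN_QUANT 5     /* hypothetical stand-off, in scheduler ticks */

struct toy_task {
        int prio;       /* lower number = higher priority */
        int ran_ticks;  /* ticks run since last being scheduled */
};

/* A newly woken higher-priority task preempts immediately only if the
 * current task has already had MIN_QUANT ticks on the CPU; otherwise
 * the waker would merely be queued to run next. */
static bool should_preempt_now(const struct toy_task *curr,
                               const struct toy_task *waker)
{
        if (waker->prio >= curr->prio)
                return false;                   /* not higher priority */
        return curr->ran_ticks >= MIN_QUANT;    /* stand-off served? */
}
```

So the preemption isn't lost, just deferred until the current task has
had its guaranteed minimum on the CPU.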

> >If i remember correctly, 2.6 spends much more time doing the actual
> >context switches (not time / context switch but amount during this
> >period). The new 1000 HZ thingy doesn't have to have that effect...
> >
> >And since too many context switches are inefficient IMHO, some standoffs
> >would be good =)
> >
> 
> I'm not sure. I think the 1000HZ thing is mainly from timer interrupts.
> The scheduler should be pretty well agnostic to the 100->1000 change,
> other than having higher resolution. Increased context switches might
> indicate something is not being scaled with HZ properly though.

Hummm, I don't know, but AFAIR the scheduler uses that timing as well...

> Yes context switches are inefficient. The tradeoff is vs scheduling
> latency and there is no way around that.

Thus, letting preempt force tasks off the CPU before MIN_QUANT has been
used is bad.. =)

A standoff also might fix the "child preempting parent on fork" problem
that Con patched, AFAIR.
(I don't know if you have the same problem, though...)

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 10:07 [SHED] Questions Ian Kumlien
  2003-08-31 10:17 ` Nick Piggin
@ 2003-08-31 18:53 ` Robert Love
  2003-08-31 19:31   ` Ian Kumlien
  1 sibling, 1 reply; 36+ messages in thread
From: Robert Love @ 2003-08-31 18:53 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: linux-kernel

On Sun, 2003-08-31 at 06:07, Ian Kumlien wrote:

> Why not use small quantum values for high-priority processes and long
> ones for low-priority processes, since the high-priority processes will
> preempt the low-priority ones anyway? And for a server working under
> load with only a few processes (assuming they are all low priority),
> this would lessen the context switches.

The rationale behind giving high priority processes a large timeslice is
two-fold:

(1) if they are interactive, then they won't actually use it all (this
is the point you are making). But,

(2) Having a large timeslice ensures that they have a high probability
of having available timeslice when they _do_ need it.

So, yes, interactive processes can get by with a small timeslice,
because that is by definition all they need.  But they do need to run
often (i.e., as I think you have mentioned in your last email,
interactive processes are "run often for short periods"), so the large
timeslice ensures that they never expire.

A counterargument might be that the large timeslice is a detriment to
other high priority processes.  But the thinking is that, by definition,
interactive processes won't use all of the timeslice.  And thus will not
hog the CPU.  If they do, the interactivity estimator will quickly bring
them down.

That is the rationale in the current scheduler, anyhow.  Nick's current
work is interesting, and a bit different.
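For reference, the priority-to-timeslice mapping in the stock 2.6
scheduler goes roughly like this (a simplified sketch in the spirit of
BASE_TIMESLICE in kernel/sched.c, with the HZ=1000 millisecond values
baked in; treat it as an approximation, not the exact kernel code):

```c
#define MIN_TIMESLICE    10     /* ms */
#define MAX_TIMESLICE   200     /* ms */
#define MAX_PRIO        140
#define MAX_USER_PRIO    40

/* static_prio runs 100 (nice -20, highest) .. 139 (nice +19, lowest):
 * the HIGHEST priority gets the LARGEST slice, i.e. the inverse of
 * the proposal at the top of this thread. */
static int base_timeslice(int static_prio)
{
        return MIN_TIMESLICE + (MAX_TIMESLICE - MIN_TIMESLICE) *
               (MAX_PRIO - 1 - static_prio) / (MAX_USER_PRIO - 1);
}
```

So nice -20 gets ~200 ms and nice +19 gets ~10 ms, linearly in between.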

> And a system with "interactive load" as well would, as I said, preempt
> the lower-priority processes. But this could also cause a problem...
> IMHO there should be a "min quantum value" so that processes can't
> preempt a process that was just scheduled (I don't know if this is
> implemented already, though).

I don't think this is a good idea.  I see your intention, but we have
priorities for a reason.

	Robert Love



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 18:53 ` Robert Love
@ 2003-08-31 19:31   ` Ian Kumlien
  2003-08-31 19:51     ` Robert Love
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Kumlien @ 2003-08-31 19:31 UTC (permalink / raw)
  To: Robert Love; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 3028 bytes --]

On Sun, 2003-08-31 at 20:53, Robert Love wrote:
> On Sun, 2003-08-31 at 06:07, Ian Kumlien wrote:
> 
> > Why not use small quantum values for high-priority processes and long
> > ones for low-priority processes, since the high-priority processes will
> > preempt the low-priority ones anyway? And for a server working under
> > load with only a few processes (assuming they are all low priority),
> > this would lessen the context switches.
> 
> The rationale behind giving high priority processes a large timeslice is
> two-fold:
> 
> (1) if they are interactive, then they won't actually use it all (this
> is the point you are making). But,
> 
> (2) Having a large timeslice ensures that they have a high probability
> of having available timeslice when they _do_ need it.

Since they would still have a high priority, and preempt is there... it
should be back on the CPU pretty quickly.

> So, yes, interactive processes can get by with a small timeslice,
> because that is by-definition all they need.  But they do need to run
> often (i.e., as I think you have mentioned in your last email,
> interactive processes are "run often for short periods"), so the large
> timeslice ensures that they are never expired.

But it also creates problems for when an interactive process becomes a
CPU hog. Like this, the detection should be faster, but it should be
slowed down somewhat.

> A counterargument might be that the large timeslice is a detriment to
> other high priority processes.  But the thinking is that, by definition,
> interactive processes won't use all of the timeslice.  And thus will not
> hog the CPU.  If they do, the interactivity estimator will quickly bring
> them down.

But hogs would instead cause context-switch hell and lessen the
throughput on server loads...

> That is the rationale in the current scheduler, anyhow.  Nick's current
> work is interesting, and a bit different.

Yes, saner IMHO =)

> > And a system with "interactive load" as well would, as I said, preempt
> > the lower-priority processes. But this could also cause a problem...
> > IMHO there should be a "min quantum value" so that processes can't
> > preempt a process that was just scheduled (I don't know if this is
> > implemented already, though).
> 
> I don't think this is a good idea.  I see your intention, but we have
> priorities for a reason.

I don't see how priorities would be questioned... Since all I say is
that a task that gets preempted should have a guaranteed time on the CPU
so that we don't waste cycles doing context switches all the time.

I can see that Ingo's current scheduler is good from a desktop
standpoint, but having it that way is not warranted when preempt comes
into the picture (if I correctly understand its workings)...
With preempt I actually see no reason for the priority inversion.. And
to answer someone who mailed about this before: "Yes, it does seem to be
slower than my Amigas, esp. the ones that use Executive...".
(That feedback scheduler rocks =))

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 19:31   ` Ian Kumlien
@ 2003-08-31 19:51     ` Robert Love
  2003-08-31 22:41       ` Ian Kumlien
  0 siblings, 1 reply; 36+ messages in thread
From: Robert Love @ 2003-08-31 19:51 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: linux-kernel

On Sun, 2003-08-31 at 15:31, Ian Kumlien wrote:

> Since they would have a high pri still, and preempt is there... it
> should be back on the cpu pretty quick.

Ah, but no!  You assume we do not have an expired list and round robin
scheduling.

Once a task exhausts its timeslice, it cannot run until all other tasks
exhaust their timeslice.  If this were not the case, high priority tasks
could monopolize the system.

> But, it also creates problems for when a interactive process becomes a
> cpu hog. Like this the detection should be faster, but should be slowed
> down somewhat.

I agree, although I do think it responds fairly quickly.  But, regardless,
this is why I am interested in Nick's work.  The interactivity estimator
can never be perfect.

> But, hogs would instead cause a context switch hell and lessen the
> throughput on server loads...

Hm, why?

> I don't see how priorities would be questioned... Since, all i say is
> that a task that gets preempted should have a guaranteed time on the cpu
> so that we don't waste cycles doing context switches all the time. 

But latency is important.

	Robert Love



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 19:51     ` Robert Love
@ 2003-08-31 22:41       ` Ian Kumlien
  2003-08-31 23:41         ` Robert Love
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Kumlien @ 2003-08-31 22:41 UTC (permalink / raw)
  To: Robert Love; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2200 bytes --]

On Sun, 2003-08-31 at 21:51, Robert Love wrote:
> On Sun, 2003-08-31 at 15:31, Ian Kumlien wrote:
> 
> > Since they would still have a high priority, and preempt is there... it
> > should be back on the CPU pretty quickly.
> 
> Ah, but no!  You assume we do not have an expired list and round robin
> scheduling.

Hummm, I assume that a high-priority process can preempt a low-priority
process... The rest sounds sane to me =). Please tell me what I'm
missing.. =)

> Once a task exhausts its timeslice, it cannot run until all other tasks
> exhaust their timeslice.  If this were not the case, high priority tasks
> could monopolize the system.

All other? Including sleeping?... How many tasks can be assumed to run
on the CPU at a time?....

Should preempt send the new quantum value to all "low pri, high quantum"
processes?

Damn, that's a tough cookie. I still think that the priority inversion
is bad. I don't know enough about this to actually provide a solution...
Anyone else that has a viewpoint?

> > But it also creates problems for when an interactive process becomes a
> > CPU hog. Like this, the detection should be faster, but it should be
> > slowed down somewhat.
> 
> I agree, although I do think it responds fairly quick.  But, regardless,
> this is why I am interested in Nick's work.  The interactivity estimator
> can never be perfect.

Hummm, the skips in xmms tell me that something is bad..
(esp. since it works perfectly on the previous scheduler)

> > But hogs would instead cause context-switch hell and lessen the
> > throughput on server loads...
> 
> Hm, why?

Since it's rescheduled after a short runtime, or might be.
From someone's mail I saw (AFAIR), there were many more context switches
in 2.6 than in 2.4. And each reschedule consumes time and cycles.

> > I don't see how priorities would be questioned... Since all I say is
> > that a task that gets preempted should have a guaranteed time on the CPU
> > so that we don't waste cycles doing context switches all the time.
> 
> But latency is important.

Oh yes, but OTOH, if you are really keen on the latency then you'll do
real-time =)

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 22:41       ` Ian Kumlien
@ 2003-08-31 23:41         ` Robert Love
  2003-09-01  0:00           ` Ian Kumlien
  2003-09-01 15:07           ` Daniel Phillips
  0 siblings, 2 replies; 36+ messages in thread
From: Robert Love @ 2003-08-31 23:41 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: linux-kernel

On Sun, 2003-08-31 at 18:41, Ian Kumlien wrote:

> Hummm, I assume that a high-priority process can preempt a low-priority
> process... The rest sounds sane to me =). Please tell me what I'm missing.. =)

No no.  The rule is "the highest priority process with timeslice
remaining runs" not just "the highest priority process runs."

Otherwise, timeslice wouldn't matter much!

When a process exhausts its timeslice, it is moved to the "expired"
list.  When all currently running tasks expire their timeslice, the
scheduler begins servicing from the "expired" list (which then becomes
the "active" list, and the old active list becomes the expired).

This implies that a high-priority task which has exhausted its timeslice
will not be allowed to run again until _all_ other runnable tasks
exhaust their timeslice (this ignores the reinsertion into the active
array of interactive tasks, but that is an optimization that just
complicates this discussion).

If timeslices did not play a role, then high priority tasks would always
monopolize the system.

This is a classic priority-based round-robin scheduler.
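A deliberately tiny model of that rule (not the real runqueue code: one
flat array instead of per-priority queues in two arrays, and invented
names throughout): the highest-priority task *with timeslice remaining*
runs, and only when everyone has expired do the arrays "swap".

```c
#define NTASKS 3

struct toy_task {
        int prio;        /* lower number = higher priority */
        int slice_left;  /* remaining timeslice, in ticks */
        int full_slice;  /* timeslice granted after an array swap */
};

/* Pick the highest-priority unexpired task; when all have expired,
 * model the active/expired array swap by refreshing every timeslice,
 * then pick again. */
static int pick_next(struct toy_task t[NTASKS])
{
        int i, best = -1;

        for (i = 0; i < NTASKS; i++)
                if (t[i].slice_left > 0 &&
                    (best < 0 || t[i].prio < t[best].prio))
                        best = i;

        if (best < 0) {
                for (i = 0; i < NTASKS; i++)
                        t[i].slice_left = t[i].full_slice;
                return pick_next(t);
        }
        return best;
}
```

Note how an expired high-priority task is simply invisible to the pick
until the swap, which is exactly why it cannot monopolize the system.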

> > Once a task exhausts its timeslice, it cannot run until all other tasks
> > exhaust their timeslice.  If this were not the case, high priority tasks
> > could monopolize the system.
> 
> All other? including sleeping?... How many tasks can be assumed to run
> on the cpu at a time?....

I wasn't clear: all other _runnable_ tasks.

Once a task "expires" (exhausts its timeslice), it will not run again
until all other tasks, even those of a lower priority, exhaust their
timeslice.

This is a major difference between normal tasks and real-time tasks.

> Should preempt send the new quantum value to all "low pri, high quantum"
> processes?

I don't follow this?

> Damn, that's a tough cookie. I still think that the priority inversion
> is bad. I don't know enough about this to actually provide a solution...
> Anyone else that has a viewpoint?

Priority inversion is bad, but the priority inversion in this case is
intended.  Higher priority tasks cannot starve lower ones.  It is a
classic Unix philosophy that 'all tasks make some forward progress'.

If you need to guarantee that a task always runs when runnable, you want
real-time.

If you just want to give a scheduling boost, to ensure greater
runnability, lower latency, and larger timeslices... nice values
suffice.

> Hummm, the skips in xmms tell me that something is bad..
> (esp. since it works perfectly on the previous scheduler)

A lot of this is just the interactivity estimator making the wrong
estimate.

> Since it's rescheduled after a short runtime, or might be.
> From someone's mail I saw (AFAIR), there were many more context switches
> in 2.6 than in 2.4. And each reschedule consumes time and cycles.

Context switches (as in process to process changes) should be about the
same?

Interrupt frequency has gone up in x86 (1000 vs 100).  Maybe that is
what they are seeing.

> Oh yes, but OTOH, if you are really keen on the latency then you'll do
> real-time =)

Agreed.  But at the same time, not every "interactive" task should be
real-time.  In fact, nearly all should not.  I do not want my text
editor or mailer to be RT, for example.

They just need a scheduling boost.

	Robert Love



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-08-31 23:41         ` Robert Love
@ 2003-09-01  0:00           ` Ian Kumlien
  2003-09-01  2:50             ` Con Kolivas
                               ` (2 more replies)
  2003-09-01 15:07           ` Daniel Phillips
  1 sibling, 3 replies; 36+ messages in thread
From: Ian Kumlien @ 2003-09-01  0:00 UTC (permalink / raw)
  To: Robert Love; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 5995 bytes --]

On Mon, 2003-09-01 at 01:41, Robert Love wrote:
> On Sun, 2003-08-31 at 18:41, Ian Kumlien wrote:
> 
> > Hummm, I assume that a high-priority process can preempt a low-priority
> > process... The rest sounds sane to me =). Please tell me what I'm missing.. =)
> 
> No no.  The rule is "the highest priority process with timeslice
> remaining runs" not just "the highest priority process runs."

Then I'm beginning to agree with the time unit... A large timeslice, but
in units, for high-priority tasks... So that a high-priority task can
run (if needed) 2 or 3 times / timeslice.

More like MAX_QUANT_ON_QUEUE/MIN_QUANT or so... 

> Otherwise, timeslice wouldn't matter much!

Just sounds odd to me.

> When a process exhausts its timeslice, it is moved to the "expired"
> list.  When all currently running tasks expire their timeslice, the
> scheduler begins servicing from the "expired" list (which then becomes
> the "active" list, and the old active list becomes the expired).

Ok, good solution

> This implies that a high priority, which has exhausted its timeslice,
> will not be allowed to run again until _all_ other runnable tasks
> exhaust their timeslice (this ignores the reinsertion into the active
> array of interactive tasks, but that is an optimization that just
> complicates this discussion).

So it's penalised by being in the corner for one go? Or just priority
penalised? (Sounds like it could get a corner from what you wrote... Or
is it time for bed.)

> If timeslices did not play a role, then high priority tasks would always
> monopolize the system.

> This is a classic priority-based round-robin scheduler.

> > > Once a task exhausts its timeslice, it cannot run until all other tasks
> > > exhaust their timeslice.  If this were not the case, high priority tasks
> > > could monopolize the system.
> > 
> > All other? Including sleeping?... How many tasks can be assumed to run
> > on the CPU at a time?....
> 
> I wasn't clear: all other _runnable_ tasks.

Yes, but how many runnable tasks would you have on a system in one go,
while maintaining interactivity...
(I.e., what amount would the scheduler actually have to deal with..)

> Once a task "expires" (exhausts its timeslice), it will not run again
> until all other tasks, even those of a lower priority, exhaust their
> timeslice.

Yeah, it seems like my idea would need several run queues with different
timeslices to make up for it.

> This is a major difference between normal tasks and real-time tasks.
> 
> > Should preempt send the new quantum value to all "low pri, high quantum"
> > processes?
> 
> I don't follow this?

Never mind, bad idea, sucky thing... =P

> > Damn thats a tough cookie, i still think that the priority inversion is
> > bad. Don't know enough about this to actually provide a solution... 
> > Any one else that has a view point?
> 
> Priority inversion is bad, but the priority inversion in this case is
> intended.  Higher priority tasks cannot starve lower ones.  It is a
> classic Unix philosophy that 'all tasks make some forward progress'

Yes, like the feedback scheduler... 

> If you need to guarantee that a task always runs when runnable, you want
> real-time.

... yes... =)

> If you just want to give a scheduling boost, to ensure greater
> runnability, lower latency, and larger timeslices... nice values
> suffice.

nicevalues/pri is always the best way imho.

> > Hummm, the skips in xmms tells me that something is bad.. 
> > (esp since it works perfectly on the previous scheduler)
> 
> A lot of this is just the interactivity estimator making the wrong
> estimate.

Yes, but... when you come from AmigaOS and have used Executive, things
like this are disconcerting. Executive is a scheduler addition for
AmigaOS that offers many schedulers to choose from, one of which is the
original feedback scheduler. While a feedback scheduler consumes some
CPU, it still allows you to play MP3s while surfing the net on a 50 MHz
68060. Hearing about 500 MHz machines that skip is somewhat... odd.

And afair it has no real interactivity estimator. 

(If you are interested you can always search for Executive on aminet..
It has several scheduler policies including those that work great on
small machines (25mhz or so))

> > Since it's rescheduled after a short runtime or, might be.
> > From someones mail i saw (afair), there was much more context switches
> > in 2.6 than in 2.4. And each schedule consumes time and cycles.
> 
> Context switches (as in process to process changes) should be about the
> same?

Apparently they are not... I should have saved the link...

> Interrupt frequency has gone up in x86 (1000 vs 100).  Maybe that is
> what they are seeing.

I dunno, i didn't pay that much attention and i can't find it now =P

> > Oh yes, but otoh, if you are really keen on the latency then you'll do
> > realtime =)
> 
> Agreed.  But at the same time, not every "interactive" task should be
> real-time.  In fact, nearly all should not.  I do not want my text
> editor or mailer to be RT, for example.

Well, there is latency and there is latency. To take the AmigaOS
example: Voyager, a web browser for AmigaOS, uses MUI (a fully dynamic
GUI with weighted/prioritized sections) and renders images. It's
responsive even on a 40 MHz 68040 using Executive with the feedback
scheduler.

500 MHz is a lot of horsepower when it comes to playing MP3s and
scheduling. It feels like something is wrong when I see all these
discussions, but I most certainly don't know enough to even begin to
understand it. I only tried to point out the thing I thought was really
wrong, but you do have a point with the runqueues and timeslices =P

> They just need a scheduling boost.

imho, that shouldn't really be needed... =P
(although executive apparently had a pri boost for active window... I
doubt that i ran with it though... Been a while =))

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-01  0:00           ` Ian Kumlien
@ 2003-09-01  2:50             ` Con Kolivas
  2003-09-01 15:58               ` Antonio Vargas
  2003-09-01 22:19               ` Ian Kumlien
  2003-09-01  4:03             ` Robert Love
  2003-09-01 14:21             ` Antonio Vargas
  2 siblings, 2 replies; 36+ messages in thread
From: Con Kolivas @ 2003-09-01  2:50 UTC (permalink / raw)
  To: Ian Kumlien, Robert Love; +Cc: linux-kernel

On Mon, 1 Sep 2003 10:00, Ian Kumlien wrote:
> On Mon, 2003-09-01 at 01:41, Robert Love wrote:
> > This implies that a high priority task, which has exhausted its timeslice,
> > will not be allowed to run again until _all_ other runnable tasks
> > exhaust their timeslice (this ignores the reinsertion into the active
> > array of interactive tasks, but that is an optimization that just
> > complicates this discussion).
>
> So it's penalised by being in the corner for one go? or just pri
> penalised (sounds like it could get a corner from what you wrote... Or
> is it time for bed).

Please read my RFC 
(http://marc.theaimsgroup.com/?l=linux-kernel&m=106178160825835&w=2) which 
has this extensively explained. If this were the case after one timeslice, 
then dragging a window in X at load of say 32 would be impossible; the window 
would move for 0.1 second, stand still for 3.2 seconds then move for another 
0.1 second.

> > > Damn thats a tough cookie, i still think that the priority inversion is
> > > bad. Don't know enough about this to actually provide a solution...
> > > Any one else that has a view point?
> >
> > Priority inversion is bad, but the priority inversion in this case is
> > intended.  Higher priority tasks cannot starve lower ones.  It is a
> > classic Unix philosophy that 'all tasks make some forward progress'
>
> Yes, like the feedback scheduler...

Priority inversion to some extent will exist in any scheduler design that has 
priorities. There are solutions available but they incur a performance 
penalty elsewhere (some people are currently experimenting). The inversion 
problems inherent in my earlier patches are largely gone with the duration 
and severity of inversion being either equal to or smaller than the instances 
that occur in the vanilla scheduler. Nick's approach may work around it 
differently but documentation is hard to find (hint Nick*).

> > > Hummm, the skips in xmms tells me that something is bad..
> > > (esp since it works perfectly on the previous scheduler)
> >
> > A lot of this is just the interactivity estimator making the wrong
> > estimate.
>
> Yes, But... When you come from AmigaOS, and have used Executive...
> things like this are disconcerting. Executive is a scheduler addition
> for amigaos that has many schedulers to choose from. One of which is the
> original feedback scheduler. While a feedback scheduler consumes some
> cpu it still allows you to play mp3's while surfing the net on a 50 mhz
> 68060. Hearing about 500mhz machines that skip is somewhat.. odd.

That is the result of trying to make them as high-throughput machines as
possible. Xmms skipping is basically killed off as a problem in both
Nick's and my patches. If it still remains, it is almost certainly a
disk I/O problem (no DMA) or hitting swap.

> Well, there is latency and there is latency. To take the AmigaOS
> example. Voyager, a webbrowser for AmigaOS uses MUI (a fully dynamic gui
> with weighted(prioritized) sections) and renders images. It's responsive
> even on a 40mhz 68040 using Executive with the feedback scheduler.

Multiple processors to do different tasks on amigas kinda helped there...

> 500 mhz is a lot of horsepower when it comes to playing mp3's and
> scheduling.. It feels like something is wrong when i see all these
> discussions but i most certainly don't know enough to even begin to
> understand it. I only tried to show the thing i thought was really wrong
> but you do have a point with the runqueues and timeslices =P

Things are _never ever ever ever_ as simple as they appear on the surface.

Cheers,
Con



* Re: [SHED] Questions.
  2003-09-01  0:00           ` Ian Kumlien
  2003-09-01  2:50             ` Con Kolivas
@ 2003-09-01  4:03             ` Robert Love
  2003-09-01  5:07               ` Con Kolivas
  2003-09-01 22:24               ` Ian Kumlien
  2003-09-01 14:21             ` Antonio Vargas
  2 siblings, 2 replies; 36+ messages in thread
From: Robert Love @ 2003-09-01  4:03 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: linux-kernel

On Sun, 2003-08-31 at 20:00, Ian Kumlien wrote:

> Then i'm beginning to agree with the time unit... Large timeslice but in
> units for high pri tasks... So that high pri can run (if needed) 2 or 3
> times / timeslice.

Exactly.

> > This implies that a high priority task, which has exhausted its timeslice,
> > will not be allowed to run again until _all_ other runnable tasks
> > exhaust their timeslice (this ignores the reinsertion into the active
> > array of interactive tasks, but that is an optimization that just
> > complicates this discussion).
> 
> So it's penalised by being in the corner for one go? or just pri
> penalised (sounds like it could get a corner from what you wrote... Or
> is it time for bed).

Not penalized... all tasks go through the same thing.

Look at it like this.  Assume we have:

	Task A, B, and C at priority 10 (the highest)
	Task D at priority 5
	Tasks E and F at priority 0 (the lowest)

We run them in that order: A, B, C, D, E, then F.  And repeat. 
(Actually, within a given priority, the tasks are run round-robin in any
nonspecific order.. effectively first-come, first-served scheduling).

If [any task] has exhausted its timeslice, it will not run until the
remaining tasks exhaust their timeslice.  Once all tasks have expired,
we start over.

So one complete scheduling rotation takes the sum of the timeslices of
tasks A, B, C, D, E, and F, and that only when they are all 100% CPU
bound.

> Yes, but how many runnable tasks would you have on a system in one go,
> while maintaining interactivity...
> (Ie, what amount would the scheduler actually have to deal with..)

Most systems only have a handful (1-2) of tasks that are actually
running at any moment.

For example:

	$ ps aux|awk '{print $8}'|grep R|wc -l
	     4
	$ ps aux|wc -l
	     87

But Unix is designed for timesharing among many interactive tasks.  It
works.  The problem faced today in 2.6 is juggling throughput versus
latency in the scheduler, with the interactivity estimator.

	Robert Love




* Re: [SHED] Questions.
  2003-09-01  4:03             ` Robert Love
@ 2003-09-01  5:07               ` Con Kolivas
  2003-09-01  5:55                 ` Robert Love
  2003-09-01 22:24               ` Ian Kumlien
  1 sibling, 1 reply; 36+ messages in thread
From: Con Kolivas @ 2003-09-01  5:07 UTC (permalink / raw)
  To: Robert Love, Ian Kumlien; +Cc: linux-kernel

On Mon, 1 Sep 2003 14:03, Robert Love wrote:
> Look at it like this.  Assume we have:
>
> 	Task A, B, and C at priority 10 (the highest)
> 	Task D at priority 5
> 	Tasks E and F at priority 0 (the lowest)
>
> We run them in that order: A, B, C, D, E, then F.  And repeat.
> (Actually, within a given priority, the tasks are run round-robin in any
> nonspecific order.. effectively first-come, first-served scheduling).
>
> If [any task] has exhausted its timeslice, it will not run until the
> remaining tasks exhaust their timeslice.  Once all tasks have expired,
> we start over.

I hate to keep butting in and saying this but this is not quite what happens. 
If a task is considered interactive (a priority boost of 2 or more) and it 
uses up a full timeslice then it is checked to see if a starvation limit has 
been exceeded by the tasks on the expired array. If it hasn't exceeded the 
limit, the interactive task will be rescheduled again ahead of everything 
else, i.e. if A is the only task still considered interactive after using up
its timeslice the first time, it will go

A,B,C,A 
before anything else

and if nothing else is interactive it can even go
A,B,C,A,A,A
etc until A is not considered interactive (boost lost) or the starvation limit 
is exceeded.

This is not just with my patches; this is Ingo's design.

Con



* Re: [SHED] Questions.
  2003-09-01  5:07               ` Con Kolivas
@ 2003-09-01  5:55                 ` Robert Love
  0 siblings, 0 replies; 36+ messages in thread
From: Robert Love @ 2003-09-01  5:55 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Ian Kumlien, linux-kernel

On Mon, 2003-09-01 at 01:07, Con Kolivas wrote:

> I hate to keep butting in and saying this but this is not quite what happens. 
> If a task is considered interactive (a priority boost of 2 or more) and it 
> uses up a full timeslice then it is checked to see if a starvation limit has 
> been exceeded by the tasks on the expired array. If it hasn't exceeded the 
> limit, the interactive task will be rescheduled again ahead of everything 
> else. ie if A is the only task still considered interactive after using up 
> its timeslice the first time, it will go

I know this.  I mentioned earlier that what I was saying ignores the
interactive-task reinsertion optimization.

I am trying to explain things in general.

	Robert Love




* Re: [SHED] Questions.
  2003-09-01 15:07           ` Daniel Phillips
@ 2003-09-01 14:16             ` Antonio Vargas
  2003-09-01 23:03             ` Ian Kumlien
  1 sibling, 0 replies; 36+ messages in thread
From: Antonio Vargas @ 2003-09-01 14:16 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Robert Love, Ian Kumlien, linux-kernel

On Mon, Sep 01, 2003 at 05:07:19PM +0200, Daniel Phillips wrote:
> On Monday 01 September 2003 01:41, Robert Love wrote:
> > Once a task "expires" (exhausts its timeslice), it will not run again
> > until all other tasks, even those of a lower priority, exhaust their
> > timeslice.
> >
> > ...
> >
> > Priority inversion is bad, but the priority inversion in this case is
> > intended.  Higher priority tasks cannot starve lower ones.  It is a
> > classic Unix philosophy that 'all tasks make some forward progress'
> 
> So if I have 1000 low priority tasks and one high priority task, all CPU 
> bound, the high priority task gets 0.1% CPU.  This is not the desirable or 
> expected behaviour.
> 
> My conclusion is, the strategy of expiring the whole active array before any 
> expired tasks are allowed to run again is incorrect.  Instead, each active 
> list should be refreshed from the expired list individually.  This does not 

AFAIK, this could be implemented with a "list swap" operation, taking
the list heads for one priority and exchanging them between expired and
active, or perhaps more properly, taking the expired list and adding it
to the end of the active one. This would be O(1) since the task list for
each priority is doubly-linked and thus has a "last element" pointer on
the list header.

> affect the desirable O(1) scheduling property.  To prevent low priority 
> starvation, the high-to-low scan should be elaborated to skip some runnable, 
> high priority tasks occasionally in a *controlled* way.

Perhaps this could be done with a random but skewed proportion, similar
to the way you select the level to insert at in the "skip list"
data structure.
 
> IMHO, this minor change will provide a more solid, predictable base for Con 
> and Nick's dynamic priority and dynamic timeslice experiments.
> 
> Regards,
> 
> Daniel
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

-- 
winden/network

1. Given a program, it always has at least one bug.
2. Given several lines of code, they can always be shortened to fewer lines.
3. By induction, every program can be
   reduced to one line that does not work.


* Re: [SHED] Questions.
  2003-09-01  0:00           ` Ian Kumlien
  2003-09-01  2:50             ` Con Kolivas
  2003-09-01  4:03             ` Robert Love
@ 2003-09-01 14:21             ` Antonio Vargas
  2003-09-01 19:36               ` Geert Uytterhoeven
  2003-09-01 22:49               ` Ian Kumlien
  2 siblings, 2 replies; 36+ messages in thread
From: Antonio Vargas @ 2003-09-01 14:21 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: Robert Love, linux-kernel, geert

On Mon, Sep 01, 2003 at 02:00:09AM +0200, Ian Kumlien wrote:
> On Mon, 2003-09-01 at 01:41, Robert Love wrote:
> > On Sun, 2003-08-31 at 18:41, Ian Kumlien wrote:
> > 
> > > hummm, I assume that a high pri process can preempt a low pri process...
> > > The rest sounds sane to me =), Please tell me what i'm missing.. =)
> > 
> > No no.  The rule is "the highest priority process with timeslice
> > remaining runs" not just "the highest priority process runs."
> 
> Then i'm beginning to agree with the time unit... Large timeslice but in
> units for high pri tasks... So that high pri can run (if needed) 2 or 3
> times / timeslice.
> 
> More like MAX_QUANT_ON_QUEUE/MIN_QUANT or so... 
> 
> > Otherwise, timeslice wouldn't matter much!
> 
> Just sounds odd to me.
> 
> > When a process exhausts its timeslice, it is moved to the "expired"
> > list.  When all currently running tasks expire their timeslice, the
> > scheduler begins servicing from the "expired" list (which then becomes
> > the "active" list, and the old active list becomes the expired).
> 
> Ok, good solution
> 
> > This implies that a high priority task, which has exhausted its timeslice,
> > will not be allowed to run again until _all_ other runnable tasks
> > exhaust their timeslice (this ignores the reinsertion into the active
> > array of interactive tasks, but that is an optimization that just
> > complicates this discussion).
> 
> So it's penalised by being in the corner for one go? or just pri
> penalised (sounds like it could get a corner from what you wrote... Or
> is it time for bed).
> 
> > If timeslices did not play a role, then high priority tasks would always
> > monopolize the system.

This happened on Amiga.
 
> > This is a classic priority-based round-robin scheduler.
> 
> > > > Once a task exhausts its timeslice, it cannot run until all other tasks
> > > > exhaust their timeslice.  If this were not the case, high priority tasks
> > > > could monopolize the system.
> > > 
> > > All other? including sleeping?... How many tasks can be assumed to run
> > > on the cpu at a time?....
> > 
> > I wasn't clear: all other _runnable_ tasks.
> 
> Yes, but how many runnable tasks would you have on a system in one go,
> while maintaining interactivity...
> (Ie, what amount would the scheduler actually have to deal with..)
> 
> > Once a task "expires" (exhausts its timeslice), it will not run again
> > until all other tasks, even those of a lower priority, exhaust their
> > timeslice.
> 
> Yeah, it seems like my idea would need several run queues with diff
> timeslices to make up.
> 
> > This is a major difference between normal tasks and real-time tasks.
> > 
> > > Should preempt send the new quantum value to all "low pri, high quantum"
> > > processes?
> > 
> > I don't follow this?
> 
> Never mind, bad idea, sucky thing... =P
> 
> > > Damn thats a tough cookie, i still think that the priority inversion is
> > > bad. Don't know enough about this to actually provide a solution... 
> > > Any one else that has a view point?
> > 
> > Priority inversion is bad, but the priority inversion in this case is
> > intended.  Higher priority tasks cannot starve lower ones.  It is a
> > classic Unix philosophy that 'all tasks make some forward progress'
> 
> Yes, like the feedback scheduler... 
> 
> > If you need to guarantee that a task always runs when runnable, you want
> > real-time.
> 
> ... yes... =)
> 
> > If you just want to give a scheduling boost, to ensure greater
> > runnability, lower latency, and larger timeslices... nice values
> > suffice.
> 
> nicevalues/pri is always the best way imho.
> 
> > > Hummm, the skips in xmms tells me that something is bad.. 
> > > (esp since it works perfectly on the previous scheduler)
> > 
> > A lot of this is just the interactivity estimator making the wrong
> > estimate.
> 
> Yes, But... When you come from AmigaOS, and have used Executive...
> things like this are disconcerting. Executive is a scheduler addition
> for amigaos that has many schedulers to choose from. One of which is the
> original feedback scheduler. While a feedback scheduler consumes some
> cpu it still allows you to play mp3's while surfing the net on a 50 mhz
> 68060. Hearing about 500mhz machines that skip is somewhat.. odd.

Ian, I came from Amiga to Linux many moons ago, and their targets are
very different... on Amiga, the mouse pointer is drawn as a hardware
sprite (same as on a C64 or an arcade machine), and the mouse
movement counters are handled in hardware too, so your mouse pointer
can't _EVER_ get laggy.

The sound system is very different: on Amiga you ask the system to
call back to you when audio needs replenishing, and anyway you could
boost the player's priority so that multichannel or MP3 playing got
more priority than other tasks. I recall playing MP3s on a 68030/50
and having to boost the player's priority so that it would not skip.
As you probably know, 68060 machines, even at the same MHz, have about
8x the raw calculation power.

So I also feel bad about Linux when my audio skips on my 900 MHz
machine and I see reports that it does the same on 2400 MHz ones, but I
can understand that the general design and target are not the same...
Amiga was _designed_, both software- and hardware-wise, for realtime,
while Unix, and thus Linux, is designed for multiuser timesharing.

All that said, having an MP3 decoder as a kernel module reading from
mlocked RAM would be a great way to get Amiga-like music playback ;)

Geert, perhaps you could tell us how linux music playing feels
for a desktop m68k machine? 

[ I'm CCing you since you are the only one from the m68k port    
  which I can see posting on a regular basis.]

> And afair it has no real interactivity estimator. 
> 
> (If you are interested you can always search for Executive on aminet..
> It has several scheduler policies including those that work great on
> small machines (25mhz or so))

For those with no Amiga background, the original Amiga task scheduler
can be assumed to work like linux' RR_REALTIME scheduler with 80ms
timeslices. Important system tasks such as filesystems, disks and input
device handlers each ran as its own task (shared memory microkernel
design) with adjusted priorities, and then all user-initiated tasks,
window manager included, ran with default priority zero.

[for more info, there are great discussions about Amiga-internals on
very early posts to linux-kernel / linux-activists from about 1991/1992
timeframe]

If the user or a program decided so, it could _always_ change a task
priority to upper or lower levels, which is what I did to my mp3 player
to avoid skips on my underpowered machine (mp3 playing used 85% CPU) ;).

"Executive" was an application which patched the Amiga scheduler and
hooked up a priority manager. By altering task priorities, it managed
to get the standard round-robin scheduler to behave like a feedback one.
(Executive was _G_R_E_A_T_ :)))

Executive was configured to never touch tasks with elevated priorities,
so in fact all user tasks would get the feedback scheduler but system
drivers such as keyboard input system would continue running as realtime
round-robin.
 
> > > Since it's rescheduled after a short runtime or, might be.
> > > From someones mail i saw (afair), there was much more context switches
> > > in 2.6 than in 2.4. And each schedule consumes time and cycles.
> > 
> > Context switches (as in process to process changes) should be about the
> > same?
> 
> Apparently they are not... I should have saved the link...
> 
> > Interrupt frequency has gone up in x86 (1000 vs 100).  Maybe that is
> > what they are seeing.
> 
> I dunno, i didn't pay that much attention and i can't find it now =P
> 
> > > Oh yes, but otoh, if you are really keen on the latency then you'll do
> > > realtime =)
> > 
> > Agreed.  But at the same time, not every "interactive" task should be
> > real-time.  In fact, nearly all should not.  I do not want my text
> > editor or mailer to be RT, for example.
> 
> Well, there is latency and there is latency. To take the AmigaOS
> example. Voyager, a webbrowser for AmigaOS uses MUI (a fully dynamic gui
> with weighted(prioritized) sections) and renders images. It's responsive
> even on a 40mhz 68040 using Executive with the feedback scheduler.
> 500 mhz is a lot of horsepower when it comes to playing mp3's and
> scheduling.. It feels like something is wrong when i see all these
> discussions but i most certainly don't know enough to even begin to
> understand it. I only tried to show the thing i thought was really wrong
> but you do have a point with the runqueues and timeslices =P
> 
> > They just need a scheduling boost.
> 
> imho, that shouldn't really be needed... =P
> (although executive apparently had a pri boost for active window... I
> doubt that i ran with it though... Been a while =))

Yes, it added +1 to the task which owned the active window
(this is also done in Windows, if I recall correctly). But even without
this "hack", both Executive-enabled and standard systems ran great.
 
Greets, Antonio.


* Re: [SHED] Questions.
  2003-08-31 23:41         ` Robert Love
  2003-09-01  0:00           ` Ian Kumlien
@ 2003-09-01 15:07           ` Daniel Phillips
  2003-09-01 14:16             ` Antonio Vargas
  2003-09-01 23:03             ` Ian Kumlien
  1 sibling, 2 replies; 36+ messages in thread
From: Daniel Phillips @ 2003-09-01 15:07 UTC (permalink / raw)
  To: Robert Love, Ian Kumlien; +Cc: linux-kernel

On Monday 01 September 2003 01:41, Robert Love wrote:
> Once a task "expires" (exhausts its timeslice), it will not run again
> until all other tasks, even those of a lower priority, exhaust their
> timeslice.
>
> ...
>
> Priority inversion is bad, but the priority inversion in this case is
> intended.  Higher priority tasks cannot starve lower ones.  It is a
> classic Unix philosophy that 'all tasks make some forward progress'

So if I have 1000 low priority tasks and one high priority task, all CPU 
bound, the high priority task gets 0.1% CPU.  This is not the desirable or 
expected behaviour.

My conclusion is, the strategy of expiring the whole active array before any 
expired tasks are allowed to run again is incorrect.  Instead, each active 
list should be refreshed from the expired list individually.  This does not 
affect the desirable O(1) scheduling property.  To prevent low priority 
starvation, the high-to-low scan should be elaborated to skip some runnable, 
high priority tasks occasionally in a *controlled* way.

IMHO, this minor change will provide a more solid, predictable base for Con 
and Nick's dynamic priority and dynamic timeslice experiments.

Regards,

Daniel



* Re: [SHED] Questions.
  2003-09-01  2:50             ` Con Kolivas
@ 2003-09-01 15:58               ` Antonio Vargas
  2003-09-01 22:19               ` Ian Kumlien
  1 sibling, 0 replies; 36+ messages in thread
From: Antonio Vargas @ 2003-09-01 15:58 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Ian Kumlien, Robert Love, linux-kernel

On Mon, Sep 01, 2003 at 12:50:48PM +1000, Con Kolivas wrote:
> On Mon, 1 Sep 2003 10:00, Ian Kumlien wrote:
> > On Mon, 2003-09-01 at 01:41, Robert Love wrote:

[ big snip ]
 
> > Well, there is latency and there is latency. To take the AmigaOS
> > example. Voyager, a webbrowser for AmigaOS uses MUI (a fully dynamic gui
> > with weighted(prioritized) sections) and renders images. It's responsive
> > even on a 40mhz 68040 using Executive with the feedback scheduler.
> 
> Multiple processors to do different tasks on amigas kinda helped there...

Amiga had just one multipurpose CPU. All other processors were
completely specialized. It was just that one of these, the blitter,
could be used as a generic "memcpy on steroids" processor, allowing
you to mix 3 sources with shifting and logical operations onto one
destination.
 
> > 500 mhz is a lot of horsepower when it comes to playing mp3's and
> > scheduling.. It feels like something is wrong when i see all these
> > discussions but i most certainly don't know enough to even begin to
> > understand it. I only tried to show the thing i thought was really wrong
> > but you do have a point with the runqueues and timeslices =P
> 
> Things are _never ever ever ever_ as simple as they appear on the surface.

This is SOOOO true :)
 
Greets, Antonio.


* Re: [SHED] Questions.
  2003-09-01 14:21             ` Antonio Vargas
@ 2003-09-01 19:36               ` Geert Uytterhoeven
  2003-09-01 22:49               ` Ian Kumlien
  1 sibling, 0 replies; 36+ messages in thread
From: Geert Uytterhoeven @ 2003-09-01 19:36 UTC (permalink / raw)
  To: Antonio Vargas; +Cc: Ian Kumlien, Robert Love, Linux Kernel Development

On Mon, 1 Sep 2003, Antonio Vargas wrote:
> On Mon, Sep 01, 2003 at 02:00:09AM +0200, Ian Kumlien wrote:
> > On Mon, 2003-09-01 at 01:41, Robert Love wrote:
> > > On Sun, 2003-08-31 at 18:41, Ian Kumlien wrote:
> Ian, I came from Amiga to Linux many moons ago, and their target are
> very different... on Amiga, the mouse pointer is drawn as a hardware
> sprite (same as an C64 or an arcade machine), and the mouse
> movement counters are handled in hardware too, so your mouse pointer
> can't _EVER_ get laggy.

That's not completely true...

The hardware keeps track of the mouse counter (but may overflow).
The mouse counters are checked periodically (in an interrupt, IIRC) and the
sprite position is updated.

So there's no real difference from a hardware point of view between Amiga
mouse/sprite hardware and PCs with PS/2 or serial mice and hardware cursors.
Both mouse counters and serial buffers can overflow, though ;-)

> Geert, perhaps you could tell us how linux music playing feels
> for a desktop m68k machine? 

Last time I tried it was worse than AmigaOS.

> [ I'm CCing you since you are the only one from the m68k port    
>   which I can see posting on a regular basis.]

What about Roman Zippel? And you can always try the linux-m68k list ;-)

> > > Agreed.  But at the same time, not every "interactive" task should be
> > > real-time.  In fact, nearly all should not.  I do not want my text
> > > editor or mailer to be RT, for example.
> > 
> > Well, there is latency and there is latency. To take the AmigaOS
> > example. Voyager, a webbrowser for AmigaOS uses MUI (a fully dynamic gui
> > with weighted(prioritized) sections) and renders images. It's responsive
> > even on a 40mhz 68040 using Executive with the feedback scheduler.

The biggest part of the responsiveness comes from the cheap context switching.
Not needing a MMU can have its advantages... Plus there's no demand paging or
swapping to wait for.

That's also the reason why you can have higher serial speed under AmigaOS than
under Linux/m68k: a serial port with a 1-byte buffer needs fast and cheap
context switching.

Gr{oetje,eeting}s,

						Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
							    -- Linus Torvalds



* Re: [SHED] Questions.
  2003-09-01  2:50             ` Con Kolivas
  2003-09-01 15:58               ` Antonio Vargas
@ 2003-09-01 22:19               ` Ian Kumlien
  1 sibling, 0 replies; 36+ messages in thread
From: Ian Kumlien @ 2003-09-01 22:19 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Robert Love, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 4699 bytes --]

On Mon, 2003-09-01 at 04:50, Con Kolivas wrote:
> On Mon, 1 Sep 2003 10:00, Ian Kumlien wrote:
> > On Mon, 2003-09-01 at 01:41, Robert Love wrote:
> > > This implies that a high priority, which has exhausted its timeslice,
> > > will not be allowed to run again until _all_ other runnable tasks
> > > exhaust their timeslice (this ignores the reinsertion into the active
> > > array of interactive tasks, but that is an optimization that just
> > > complicates this discussion).
> >
> > So it's penalised by being in the corner for one go? or just pri
> > penalised (sounds like it could get a corner from what you wrote... Or
> > is it time for bed).
> 
> Please read my RFC 
> (http://marc.theaimsgroup.com/?l=linux-kernel&m=106178160825835&w=2) which 
> has this extensively explained. If this were the case after one timeslice, 
> then dragging a window in X at load of say 32 would be impossible; the window 
> would move for 0.1 second, stand still for 3.2 seconds then move for another 
> 0.1 second.

That's nicely written, but it feels very complex...
The more I read it the more I like the "currency" implementation in
Executive (since I never understood exactly how feedback worked; it's
been a while since I read about it).

Just give all processes money to spend on cpu time... Set a high limit
and let em spend =)
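To make the "currency" idea concrete, here's a toy sketch (every name and constant is invented for illustration; this is not Executive's actual implementation):

```c
#include <assert.h>

#define NTASKS 4
#define REFILL 100 /* credits handed out each accounting period */

/* Each task carries a wallet of CPU credits; the scheduler picks the
 * richest runnable task and charges it for the time it runs. */
struct task {
    int credits;   /* remaining CPU money */
    int runnable;  /* 1 if ready to run */
};

/* Pick the runnable task with the most credits left. */
int pick_next(struct task *t, int n)
{
    int best = -1, i;
    for (i = 0; i < n; i++)
        if (t[i].runnable && (best < 0 || t[i].credits > t[best].credits))
            best = i;
    return best;
}

/* Charge a task for the ticks it just consumed. */
void charge(struct task *t, int ticks)
{
    t->credits -= ticks;
    if (t->credits < 0)
        t->credits = 0;
}

/* Periodic refill: everyone gets the same income, capped so long
 * sleepers can't hoard unbounded credit. */
void refill(struct task *t, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        t[i].credits += REFILL;
        if (t[i].credits > 2 * REFILL)
            t[i].credits = 2 * REFILL;
    }
}
```

Charging per tick drains a hog's balance, while a mostly-sleeping task stays near the cap and wins `pick_next()` as soon as it wakes, which is the low-latency property being discussed.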

> > > > Damn thats a tough cookie, i still think that the priority inversion is
> > > > bad. Don't know enough about this to actually provide a solution...
> > > > Any one else that has a view point?
> > >
> > > Priority inversion is bad, but the priority inversion in this case is
> > > intended.  Higher priority tasks cannot starve lower ones.  It is a
> > > classic Unix philosophy that 'all tasks make some forward progress'
> >
> > Yes, like the feedback scheduler...
> 
> Priority inversion to some extent will exist in any scheduler design that has 
> priorities. There are solutions available but they incur a performance 
> penalty elsewhere (some people are currently experimenting). The inversion 
> problems inherent in my earlier patches are largely gone with the duration 
> and severity of inversion being either equal to or smaller than the instances 
> that occur in the vanilla scheduler. Nick's approach may work around it 
> differently but documentation is hard to find (hint Nick*).

What I meant with priority inversion is that high-pri tasks should have
small timeslices and low-pri tasks large ones... Sorry if I was unclear.
(Maybe the same size timeslice but separated into timeunits.)
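For illustration, the two mappings side by side (constants invented, not the kernel's real tables):

```c
#define PRIO_MAX 39    /* nice-style priority: 0 = highest, 39 = lowest */
#define MIN_SLICE 10   /* milliseconds */
#define MAX_SLICE 200

/* 2.6-style: higher priority => longer slice. */
int slice_stock(int prio)
{
    return MAX_SLICE - (MAX_SLICE - MIN_SLICE) * prio / PRIO_MAX;
}

/* The inverted proposal: higher priority => shorter slice, relying on
 * preemption to keep high-pri tasks responsive while low-pri batch
 * work gets long, low-context-switch runs. */
int slice_inverted(int prio)
{
    return MIN_SLICE + (MAX_SLICE - MIN_SLICE) * prio / PRIO_MAX;
}
```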

> > > > Hummm, the skips in xmms tells me that something is bad..
> > > > (esp since it works perfectly on the previus scheduler)
> > >
> > > A lot of this is just the interactivity estimator making the wrong
> > > estimate.
> >
> > Yes, But... When you come from AmigaOS, and have used Executive...
> > things like this is dis concerning. Executive is a scheduler addition
> > for amigaos that has many schedulers to choose from. One of which is the
> > original feedback scheduler. While a feedback scheduler consumes some
> > cpu it still allows you to play mp3's while surfing the net on a 50 mhz
> > 68060. Hearing about 500mhz machines that skip is somewhat.. odd.
> 
> That's in an attempt to make them as high throughput machines as possible. 
> Xmms skipping is basically killed off as a problem in both Nick's and my 
> patches. If it still remains it is almost certainly a disk i/o problem (no 
> dma) or hitting swap memory.

Hmm, OK... The only desktop where I have switched between O(1) and the
common scheduler is my laptop... which sadly ran neither Nick's nor your
work... So I have no comparison with your or Nick's work.

> > Well, there is latency and there is latency. To take the AmigaOS
> > example. Voyager, a webbrowser for AmigaOS uses MUI (a fully dynamic gui
> > with weighted(prioritized) sections) and renders images. It's responsive
> > even on a 40mhz 68040 using Executive with the feedback scheduler.
> 
> Multiple processors to do different tasks on amigas kinda helped there...

Well, yes, but... Not when it comes to scheduling.

> > 500 mhz is a lot of horsepower when it comes to playing mp3's and
> > scheduling.. It feels like something is wrong when i see all these
> > discussions but i most certainly don't know enough to even begin to
> > understand it. I only tried to show the thing i thought was really wrong
> > but you do have a point with the runqueues and timeslices =P
> 
> Things are _never ever ever ever_ as simple as they appear on the surface.

If they were... Life would be boring =P

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-01  4:03             ` Robert Love
  2003-09-01  5:07               ` Con Kolivas
@ 2003-09-01 22:24               ` Ian Kumlien
  1 sibling, 0 replies; 36+ messages in thread
From: Ian Kumlien @ 2003-09-01 22:24 UTC (permalink / raw)
  To: Robert Love; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1255 bytes --]

On Mon, 2003-09-01 at 06:03, Robert Love wrote:
> On Sun, 2003-08-31 at 20:00, Ian Kumlien wrote:
> 
> > Then i'm beginning to agree with the time unit... Large timeslice but in
> > units for high pri tasks... So that high pri can run (if needed) 2 or 3
> > times / timeslice.
> 
> Exactly.
> 
> > > This implies that a high priority, which has exhausted its timeslice,
> > > will not be allowed to run again until _all_ other runnable tasks
> > > exhaust their timeslice (this ignores the reinsertion into the active
> > > array of interactive tasks, but that is an optimization that just
> > > complicates this discussion).
> > 
> > So it's penalised by being in the corner for one go? or just pri
> > penalised (sounds like it could get a corner from what you wrote... Or
> > is it time for bed).
> 
> Not penalized... all tasks go through the same thing.

Yeah, that part was unclear though. =)

[Snip: Thanks for the explanation i'll reply in Con's mail if needed ]

> But Unix is designed for timesharing among many interactive tasks.  It
> works.  The problem faced today in 2.6 is juggling throughput versus
> latency in the scheduler, with the interactivity estimator.

Yeah...

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-01 14:21             ` Antonio Vargas
  2003-09-01 19:36               ` Geert Uytterhoeven
@ 2003-09-01 22:49               ` Ian Kumlien
  1 sibling, 0 replies; 36+ messages in thread
From: Ian Kumlien @ 2003-09-01 22:49 UTC (permalink / raw)
  To: Antonio Vargas; +Cc: Robert Love, linux-kernel, geert

[-- Attachment #1: Type: text/plain, Size: 4857 bytes --]

On Mon, 2003-09-01 at 16:21, Antonio Vargas wrote:
> On Mon, Sep 01, 2003 at 02:00:09AM +0200, Ian Kumlien wrote:
> > Yes, But... When you come from AmigaOS, and have used Executive...
> > things like this is dis concerning. Executive is a scheduler addition
> > for amigaos that has many schedulers to choose from. One of which is the
> > original feedback scheduler. While a feedback scheduler consumes some
> > cpu it still allows you to play mp3's while surfing the net on a 50 mhz
> > 68060. Hearing about 500mhz machines that skip is somewhat.. odd.
> 
> Ian, I came from Amiga to Linux many moons ago, and their target are
> very different... on Amiga, the mouse pointer is drawn as a hardware
> sprite (same as an C64 or an arcade machine), and the mouse
> movement counters are handled in hardware too, so your mouse pointer
> can't _EVER_ get laggy.

I also joined Linux a long time ago... but the hardware pointer is not
the issue. I'm more referring to using cpublit and dragging the
scrollbar in a window... like, for example, Voyager.

(And, in this case, 50 MHz is the limitation, not the scheduler. With 10
times that power I bet you could do it all with the CPU, just like a
common PC today.)

> The sound system is very different, on Amiga you ask the system to
> callback to you when audio needs replenishing, and anyways you could
> boost the player priority so that multichanel or mp3 playing gets more
> priority than other tasks. I can recall playing mp3 on a 68030/50
> and having to boost the player's priority so that it would get no
> skipping. As you probably know 68060 machines, even if at the same mhz,
> have about 8x raw calculation power.

Eh, an 060 struggles with an MP3; it consumes most of the CPU, yet the
machine can still render a browser window (not as fast as it could
otherwise, but that's not my point).

> So, I also feel very bad about linux when my audio skips on my 900mhz
> machine and I see reports that it does the same on 2400mhz ones, but I
> can understand that the general design and target is not the same...
> Amiga was _designed_, both software and hardware wise, for realtime
> while Unix and thus Linux is designed for multiuser timesharing.

AmigaOS is low-latency but not real-time. And with Executive you run a
Unix scheduler (feedback).

> All that said, having a mp3 decoder as a kernel module reading from 
> mlocked ram would a great way to have Amiga-like music replaying ;)

Hmm, the last time I played an MP3 on my Amiga I decoded it on the
Delfina DSP =)
(Too bad no one has done that for the emu10k1...)

> Geert, perhaps you could tell us how linux music playing feels
> for a desktop m68k machine? 
> 
> [ I'm CCing you since you are the only one from the m68k port    
>   which I can see posting on a regular basis.]
> 
> > And afair it has no real interactivity estimator. 
> > 
> > (If you are interested you can always search for Executive on aminet..
> > It has several scheduler policies including those that work great on
> > small machines (25mhz or so))

> If the user or a program decided so, it could _always_ change a task
> priority to upper or lower levels, which is what I did to my mp3 player
> to avoid skips on my under powered machine (mp3 playing used 85%cpu) ;).

Heh, what CPU?

But a task raising its own priority can be a pain, since it could
dominate the machine on systems not running Executive.

> "Executive" was an application which patched the Amiga scheduler and
> hooked up a priority manager. By altering task' priorities, it managed
> to get the standard round-robin scheduler to behave like a feedback one.
> (Executive was _G_R_E_A_T_ :)))

Executive implements dynamic priorities in a fixed-priority OS, i.e. nice.
You select the range it should use to catch processes and where it
should place them. And Executive is still great, even though I haven't
used it for quite some time.

> Executive was configured to never touch tasks with elevated priorities,
> so in fact all user tasks would get the feedback scheduler but system
> drivers such as keyboard input system would continue running as realtime
> round-robin.

Hmm, input.device would never consume so much CPU that it would need
to be penalized.
 
> > imho, that shouldn't really be needed... =P
> > (although executive apparently had a pri boost for active window... I
> > doubt that i ran with it though... Been a while =))
> 
> Yes, it added +1 to the task which owner the active window
> (this is also used in Windows if I recall correctly). But even without
> this "hack", both executive-enabled and standard systems ran great.

Giving priority to the frontmost program is done in Windows and OS/2. I
disabled it in OS/2 Warp-something on a P120 and watched it crawl =P.

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-01 15:07           ` Daniel Phillips
  2003-09-01 14:16             ` Antonio Vargas
@ 2003-09-01 23:03             ` Ian Kumlien
  2003-09-02  0:04               ` Nick Piggin
  2003-09-02  0:23               ` Con Kolivas
  1 sibling, 2 replies; 36+ messages in thread
From: Ian Kumlien @ 2003-09-01 23:03 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: linux-kernel, Robert Love

[-- Attachment #1: Type: text/plain, Size: 1985 bytes --]

[Forgot CC to LKML and Robert Love, sorry ]

On Mon, 2003-09-01 at 17:07, Daniel Phillips wrote:
> On Monday 01 September 2003 01:41, Robert Love wrote:
> > Priority inversion is bad, but the priority inversion in this case is
> > intended.  Higher priority tasks cannot starve lower ones.  It is a
> > classic Unix philosophy that 'all tasks make some forward progress'
> 
> So if I have 1000 low priority tasks and one high priority task, all CPU 
> bound, the high priority task gets 0.1% CPU.  This is not the desirable or 
> expected behaviour.

> My conclusion is, the strategy of expiring the whole active array before any 
> expired tasks are allowed to run again is incorrect.  Instead, each active 
> list should be refreshed from the expired list individually.  This does not 
> affect the desirable O(1) scheduling property.  To prevent low priority 
> starvation, the high-to-low scan should be elaborated to skip some runnable, 
> high priority tasks occasionally in a *controlled* way.

I like this idea.
You could handle the priority starvation with an "old process" boost.
(I don't know which would be simpler, or if there is something even
simpler out there.)

This would ensure that all processes run sooner or later. Real CPU hogs
would run very seldom due to being starved, but would run when they get
the boost. On a loaded system this might be desirable, since most login
tools would be "normal" or "high pri" from the get-go.
(There might be a problem with locks, though.)

This should also work hand in hand with timeslice changes, imho, as well
as process preemption. If we assume that CPU hogs have work they want to
get done, let them do it for as long as possible. If something
"important" happens, it'll be preempted, right?
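A minimal sketch of such an "old process" boost (hypothetical names and thresholds, not from any posted patch):

```c
#define STARVE_LIMIT 1000 /* ticks before a waiter counts as starved */
#define BOOST 5           /* priority levels to credit (0 = highest prio) */

/* Each runnable task records the last tick at which it ran; anything
 * starved longer than the threshold gets a temporary priority boost
 * so it runs "sooner or later". */
struct stask {
    int prio;      /* static priority, 0 = highest */
    long last_ran; /* tick when it last got the CPU */
};

/* Effective priority used by the scheduler at tick `now`. */
int effective_prio(const struct stask *t, long now)
{
    int p = t->prio;
    if (now - t->last_ran > STARVE_LIMIT)
        p -= BOOST;      /* starved: bump it toward the front */
    return p < 0 ? 0 : p;
}
```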

> IMHO, this minor change will provide a more solid, predictable base for Con 
> and Nick's dynamic priority and dynamic timeslice experiments.

Most definitely.

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-01 23:03             ` Ian Kumlien
@ 2003-09-02  0:04               ` Nick Piggin
  2003-09-02  0:23               ` Con Kolivas
  1 sibling, 0 replies; 36+ messages in thread
From: Nick Piggin @ 2003-09-02  0:04 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: Daniel Phillips, linux-kernel, Robert Love

Ian Kumlien wrote:

>[Forgot CC to LKML and Robert Love, sorry ]
>
>On Mon, 2003-09-01 at 17:07, Daniel Phillips wrote:
>
>>On Monday 01 September 2003 01:41, Robert Love wrote:
>>
>>>Priority inversion is bad, but the priority inversion in this case is
>>>intended.  Higher priority tasks cannot starve lower ones.  It is a
>>>classic Unix philosophy that 'all tasks make some forward progress'
>>>
>>So if I have 1000 low priority tasks and one high priority task, all CPU 
>>bound, the high priority task gets 0.1% CPU.  This is not the desirable or 
>>expected behaviour.
>>

In my implementation, the high-prio guy gets 1.9% CPU and the others get
0.09%. However, in all implementations, the high priority one will of
course be allowed to preempt any of the others.

At this point you can safely abandon the consideration that a user might be
running KDE as well ;)

>
>>My conclusion is, the strategy of expiring the whole active array before any 
>>expired tasks are allowed to run again is incorrect.  Instead, each active 
>>list should be refreshed from the expired list individually.  This does not 
>>affect the desirable O(1) scheduling property.  To prevent low priority 
>>starvation, the high-to-low scan should be elaborated to skip some runnable, 
>>high priority tasks occasionally in a *controlled* way.
>>
>
>I like this idea.
>You could handle the priority starvation with a "old process" boost.
>(i don't know which would be simpler or if there is something even
>simpler out there)
>
>This would ensure that all processes are run sooner or later. Real
>cpuhogs would run very seldom due to being starved, but run when they
>get the boost. On a loaded system this might be desirable since most
>login tools would be "normal" or "high pri" from the get go.
>(there might be a problem with locks though)
>
>This should also work hand in hand with timeslice changes imho. Aswell
>as process preemption. If we assume that cpu hogs has work that they
>want to get done, let em do it for as long as possible. If something
>"important" happens, it'll be preempted right?
>

This is really just another variation on the idea of dynamic timeslices.
Mine does it explicitly. This idea and the interactivity idea do it
implicitly (not that that's bad).



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-01 23:03             ` Ian Kumlien
  2003-09-02  0:04               ` Nick Piggin
@ 2003-09-02  0:23               ` Con Kolivas
  2003-09-02 10:25                 ` Ian Kumlien
  2003-09-02 10:44                 ` Wes Janzen
  1 sibling, 2 replies; 36+ messages in thread
From: Con Kolivas @ 2003-09-02  0:23 UTC (permalink / raw)
  To: Ian Kumlien, Daniel Phillips; +Cc: linux-kernel, Robert Love

On Tue, 2 Sep 2003 09:03, Ian Kumlien wrote:
> On Mon, 2003-09-01 at 17:07, Daniel Phillips wrote:
> > IMHO, this minor change will provide a more solid, predictable base for
> > Con and Nick's dynamic priority and dynamic timeslice experiments.
>
> Most definitely.

No, the correct answer is maybe... after it's redesigned and put through 
lots of testing to ensure it doesn't create other regressions. I'm not saying 
it isn't correct, just that it's a major architectural change you're 
promoting. Now isn't the time for that.

Why not just wait till 2.6.10 and plop in a new scheduler, à la dropping in 
a new VM in 2.4.10... <sigh> 

The cpu scheduler simply isn't broken as the people on this mailing list seem 
to think it is. While my tweaks _look_ large, they're really just tweaking 
the way the numbers feed back into a basically unchanged design. All the 
incremental changes have been modifying the same small sections of sched.c 
over and over again. Nick's changes change the size of timeslices and the 
priority variation in a much more fundamental way but still use the basic 
architecture of the scheduler. 

Promoting a new scheduler design entirely is admirable and ultimately probably 
worth pursuing but not 2.6 stuff.

Con


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-02  0:23               ` Con Kolivas
@ 2003-09-02 10:25                 ` Ian Kumlien
  2003-09-02 11:08                   ` Nick Piggin
  2003-09-02 10:44                 ` Wes Janzen
  1 sibling, 1 reply; 36+ messages in thread
From: Ian Kumlien @ 2003-09-02 10:25 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Daniel Phillips, linux-kernel, Robert Love

[-- Attachment #1: Type: text/plain, Size: 2485 bytes --]

On Tue, 2003-09-02 at 02:23, Con Kolivas wrote:
> On Tue, 2 Sep 2003 09:03, Ian Kumlien wrote:
> > On Mon, 2003-09-01 at 17:07, Daniel Phillips wrote:
> > > IMHO, this minor change will provide a more solid, predictable base for
> > > Con and Nick's dynamic priority and dynamic timeslice experiments.
> >
> > Most definitely.
> 
> No, the correct answer is maybe... if after it's redesigned and put through 
> lots of testing to ensure it doesn't create other regressions. I'm not saying 
> it isn't correct, just that it's a major architectural change you're 
> promoting. Now isn't the time for that.
> 
> Why not just wait till 2.6.10 and plop in a new scheduler a'la dropping in a 
> new vm into 2.4.10... <sigh> 

Wouldn't a new scheduler be easier to test? And your patches change its
behavior quite a lot. Wouldn't they require the same testing?
(And Nick's, for that matter, which change more.)

> The cpu scheduler simply isn't broken as the people on this mailing list seem 
> to think it is. While my tweaks _look_ large, they're really just tweaking 
> the way the numbers feed back into a basically unchanged design. All the 
> incremental changes have been modifying the same small sections of sched.c 
> over and over again. Nick's changes change the size of timeslices and the 
> priority variation in a much more fundamental way but still use the basic 
> architecture of the scheduler. 

But can't this scheduler suffer from starvation if the run queue is
long enough? Either via that deadline or via processes not running...

Wouldn't a starved-process boost ensure that even hogs on a loaded
system got their share now and then?

You could say that the problem with the current scheduler is that it's
not allowed to starve anything; that's why we add stuff to give an
interactive bonus. But if it *was* allowed to starve, yet gave a bonus to
the starved processes, that would make most of the interactive detection
useless (yes, we still need the "didn't use their timeslice" bit, and
with a timeslice that gets smaller the higher the pri, we'd automagically
balance most processes).

(As usual my assumptions might be really wrong...)

> Promoting a new scheduler design entirely is admirable and ultimately probably 
> worth pursuing but not 2.6 stuff.

Well, discussing and creating it is one thing. Implementing it is
another. Accepting the patch is still up to the-powers-that-be(tm)

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-02  0:23               ` Con Kolivas
  2003-09-02 10:25                 ` Ian Kumlien
@ 2003-09-02 10:44                 ` Wes Janzen
  1 sibling, 0 replies; 36+ messages in thread
From: Wes Janzen @ 2003-09-02 10:44 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Ian Kumlien, Daniel Phillips, linux-kernel, Robert Love

Con Kolivas wrote:

> ...
>
>The cpu scheduler simply isn't broken as the people on this mailing list seem 
>to think it is. While my tweaks _look_ large, they're really just tweaking 
>the way the numbers feed back into a basically unchanged design.
>
>...
>

For what it's worth, I haven't had any problems with Con's O19int.  I've 
been trying to repeat a case of priority inversion I experienced with 
O18.1... but it seems to be cured (and that was really my only problem 
with it).  I was already getting fewer skips in XMMS with 
2.6.0-test3-mm2 than I did with 2.4.18 and the same sort of workload.  I 
can't really test xmms now that the ACPI changes have obliterated my 
chances of freeing IRQ 5 for my SB16 -- but from the improvements I feel 
in the last few patches from Con, I imagine that it wouldn't skip 
anymore.  (On a side note, I really miss xmms, where random means quite 
random, unlike my CD changer, which repeats the same songs in a 
four-hour block.)  I certainly am not running any sort of high-end 
machine with a K6-2 400 ;-)  The mouse might lag slightly for a few 
seconds when starting up a build, but as soon as the scheduler adjusts, 
I can't tell whether I have four builds running at the same time or just 
one.  In 2.4.18, it has a slow feeling that lets you know the system is 
loaded -- and it never goes away (well, until the compiling's done).

-Wes-


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-02 10:25                 ` Ian Kumlien
@ 2003-09-02 11:08                   ` Nick Piggin
  2003-09-02 17:22                     ` Ian Kumlien
  0 siblings, 1 reply; 36+ messages in thread
From: Nick Piggin @ 2003-09-02 11:08 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: Con Kolivas, Daniel Phillips, linux-kernel, Robert Love



Ian Kumlien wrote:

>On Tue, 2003-09-02 at 02:23, Con Kolivas wrote:
>
>>On Tue, 2 Sep 2003 09:03, Ian Kumlien wrote:
>>
>>>On Mon, 2003-09-01 at 17:07, Daniel Phillips wrote:
>>>
>>>>IMHO, this minor change will provide a more solid, predictable base for
>>>>Con and Nick's dynamic priority and dynamic timeslice experiments.
>>>>
>>>Most definitely.
>>>
>>No, the correct answer is maybe... if after it's redesigned and put through 
>>lots of testing to ensure it doesn't create other regressions. I'm not saying 
>>it isn't correct, just that it's a major architectural change you're 
>>promoting. Now isn't the time for that.
>>
>>Why not just wait till 2.6.10 and plop in a new scheduler a'la dropping in a 
>>new vm into 2.4.10... <sigh> 
>>
>
>Wouldn't a new scheduler be easier to test? And your patches changes
>it's behavior quite a lot. Wouldn't they require the same testing?
>(And Nicks for that mater, who changes more)
>

Well, a new scheduler needs the same testing as an old scheduler.
The difference is that less of it has been done.

>
>>The cpu scheduler simply isn't broken as the people on this mailing list seem 
>>to think it is. While my tweaks _look_ large, they're really just tweaking 
>>the way the numbers feed back into a basically unchanged design. All the 
>>incremental changes have been modifying the same small sections of sched.c 
>>over and over again. Nick's changes change the size of timeslices and the 
>>priority variation in a much more fundamental way but still use the basic 
>>architecture of the scheduler. 
>>
>
>But, can't this scheduler suffer from starvation? If the run queue is
>long enough? Either via that deadline or via processes not running...
>
>Wouldn't a starved process boost ensure that even hogs on a loaded
>system got their share now and then?
>
>You could say that the problem the current scheduler has is that it's
>not allowed to starve anything, thats why we add stuff to give
>interactive bonus. But if it *was* allowed to starve but gave bonus to
>the starved processes that would make most of the interactive detection
>useless (yes, we still need the "didn't use their timeslice" bit and
>with a timeslice that gets smaller the higher the pri we'd automagically
>balance most processes).
>
>(As usual my assumptions might be really wrong...)
>

First off, no general-purpose scheduler should allow starvation, at
least by your definition. The interactivity stuff, and even dynamic
priorities, allow short-term unfairness.

A CPU hog is not a starved process. It becomes a CPU hog because it is
running all the time. True, it mustn't be starved indefinitely by a lot
of higher priority processes. This is something Con's and my schedulers
both ensure.

Hmm... what else? The "didn't use their timeslice" thing is not
applicable: a new timeslice doesn't get handed out until the previous one
is used. The priorities thing is done based on how much sleeping the
process does.
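A toy model of that sleep-based estimate (illustrative constants and names, not sched.c's actual code):

```c
#define SLEEP_MAX 1000 /* cap on the tracked sleep average, in ticks */
#define BONUS_MAX 10   /* max priority levels of bonus */

/* A task's recent sleep time feeds an average, and the average buys a
 * bonus on top of the static priority. */
struct etask {
    int static_prio; /* nice-derived priority, 0 = highest */
    int sleep_avg;   /* 0..SLEEP_MAX, decays while running */
};

void account_sleep(struct etask *t, int slept_ticks)
{
    t->sleep_avg += slept_ticks;
    if (t->sleep_avg > SLEEP_MAX)
        t->sleep_avg = SLEEP_MAX;
}

void account_run(struct etask *t, int ran_ticks)
{
    t->sleep_avg -= ran_ticks;
    if (t->sleep_avg < 0)
        t->sleep_avg = 0;
}

/* Mostly-sleeping tasks earn up to BONUS_MAX levels of boost;
 * pure CPU hogs decay to no bonus at all. */
int dynamic_prio(const struct etask *t)
{
    int bonus = BONUS_MAX * t->sleep_avg / SLEEP_MAX;
    int p = t->static_prio - bonus;
    return p < 0 ? 0 : p;
}
```

The point of the sketch is that the estimate is earned by sleeping and spent by running, so no separate "didn't use its timeslice" signal is needed.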

It's funny: everyone seems to have very similar ideas that they are
expressing by describing different implementations they have in mind.



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-02 11:08                   ` Nick Piggin
@ 2003-09-02 17:22                     ` Ian Kumlien
  2003-09-02 23:49                       ` Nick Piggin
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Kumlien @ 2003-09-02 17:22 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Con Kolivas, Daniel Phillips, linux-kernel, Robert Love

[-- Attachment #1: Type: text/plain, Size: 1649 bytes --]

On Tue, 2003-09-02 at 13:08, Nick Piggin wrote:
> Ian Kumlien wrote:
> >You could say that the problem the current scheduler has is that it's
> >not allowed to starve anything, thats why we add stuff to give
> >interactive bonus. But if it *was* allowed to starve but gave bonus to
> >the starved processes that would make most of the interactive detection
> >useless (yes, we still need the "didn't use their timeslice" bit and
> >with a timeslice that gets smaller the higher the pri we'd automagically
> >balance most processes).
> >
> >(As usual my assumptions might be really wrong...)
> 
> First off, no general purpose scheduler should allow starvation depending
> on your definition. The interactivity stuff, and even dynamic priorities
> allow short term unfairness.

When you reach a certain load you *have to* allow starvation, i.e. you
can't work around it... All I'm saying is that if we have a more relaxed
method we might benefit from it.

> Hmm... what else? The "didn't use their timeslice" thing is not
> applicable: a new timeslice doesn't get handed out until the previous one
> is used. The priorities thing is done based on how much sleeping the
> process does.

And not the amount of CPU consumed by the app per go?

> Its funny, everyone seems to have very similar ideas that they are
> expressing by describing different implementations they have in mind.

Yes =), I'm mailing Con directly now as well, to save some unwanted
traffic here =). I just hope that we'll reach an agreement somewhere
about what's sane or not...

Mail me if you're interested as well.

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-02 17:22                     ` Ian Kumlien
@ 2003-09-02 23:49                       ` Nick Piggin
  2003-09-03 23:02                         ` Ian Kumlien
  0 siblings, 1 reply; 36+ messages in thread
From: Nick Piggin @ 2003-09-02 23:49 UTC (permalink / raw)
  To: Ian Kumlien; +Cc: Con Kolivas, Daniel Phillips, linux-kernel, Robert Love



Ian Kumlien wrote:

>On Tue, 2003-09-02 at 13:08, Nick Piggin wrote:
>
>>Ian Kumlien wrote:
>>
>>>You could say that the problem the current scheduler has is that it's
>>>not allowed to starve anything, that's why we add stuff to give
>>>interactive bonus. But if it *was* allowed to starve but gave bonus to
>>>the starved processes that would make most of the interactive detection
>>>useless (yes, we still need the "didn't use their timeslice" bit and
>>>with a timeslice that gets smaller the higher the pri we'd automagically
>>>balance most processes).
>>>
>>>(As usual my assumptions might be really wrong...)
>>>
>>First off, no general purpose scheduler should allow starvation depending
>>on your definition. The interactivity stuff, and even dynamic priorities
>>allow short term unfairness.
>>
>
>When you reach a certain load you *have to* allow starvation, i.e., you
>can't work around it... All I say is that if we had a more relaxed
>method we might benefit from it.
>

Depending on your definition. If 1000 processes each get 10ms of CPU
every 10000ms, I would not call that being starved. Maybe that's misleading.

>
>>Hmm... what else? The "didn't use their timeslice" thing is not
>>applicable: a new timeslice doesn't get handed out until the previous one
>>is used. The priorities thing is done based on how much sleeping the
>>process does.
>>
>
>And not the amount of CPU consumed by the app per go?
>

Well, yeah, in a way. Consuming CPU lowers priority; sleeping raises it.

>
>>It's funny, everyone seems to have very similar ideas that they are
>>expressing by describing different implementations they have in mind.
>>
>
>Yes =), I'm mailing Con directly now as well, to save some unwanted
>traffic here =). I just hope that we'll reach an agreement somewhere
>about what's sane or not...
>
>Mail me if you're interested as well.
>

OK CC me


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-02 23:49                       ` Nick Piggin
@ 2003-09-03 23:02                         ` Ian Kumlien
  2003-09-04  1:39                           ` Mike Fedyk
  0 siblings, 1 reply; 36+ messages in thread
From: Ian Kumlien @ 2003-09-03 23:02 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Con Kolivas, Daniel Phillips, linux-kernel, Robert Love

[-- Attachment #1: Type: text/plain, Size: 1070 bytes --]

On Wed, 2003-09-03 at 01:49, Nick Piggin wrote:
> Ian Kumlien wrote:
> >When you reach a certain load you *have to* allow starvation, i.e., you
> >can't work around it... All I say is that if we had a more relaxed
> >method we might benefit from it.

> Depending on your definition. If 1000 processes each get 10ms of CPU
> every 10000ms, I would not call that being starved. Maybe that's misleading.

[Sorry, I'm tired; I hope this comes out right]

What I'm thinking of is more of a: "Hey, hog, we didn't manage to get
you in on this 'system-wide schedule' (doing all the tasks before
restarting) due to heavy load, so we'll give you this boost so that you
can compete better next time".

> >And not the amount of CPU consumed by the app per go?

> Well, yeah, in a way. Consuming CPU lowers priority; sleeping raises it.

Thought so. And AFAIR it did use "timeslice usage" at one point, or has
that changed?


> >Mail me if you're interested as well.

> OK CC me

As soon as Con has me straight on a few things... =)

-- 
Ian Kumlien <pomac@vapor.com>

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [SHED] Questions.
  2003-09-03 23:02                         ` Ian Kumlien
@ 2003-09-04  1:39                           ` Mike Fedyk
  0 siblings, 0 replies; 36+ messages in thread
From: Mike Fedyk @ 2003-09-04  1:39 UTC (permalink / raw)
  To: Ian Kumlien
  Cc: Nick Piggin, Con Kolivas, Daniel Phillips, linux-kernel, Robert Love

On Thu, Sep 04, 2003 at 01:02:29AM +0200, Ian Kumlien wrote:
> On Wed, 2003-09-03 at 01:49, Nick Piggin wrote:
> > Ian Kumlien wrote:
> > >And not the amount of CPU consumed by the app per go?
> 
> > Well, yeah, in a way. Consuming CPU lowers priority; sleeping raises it.
> 
> Thought so. And AFAIR it did use "timeslice usage" at one point, or has
> that changed?

I think that's part of the interactivity estimator Nick removed in his
patches...

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2003-09-04  1:39 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-08-31 10:07 [SHED] Questions Ian Kumlien
2003-08-31 10:17 ` Nick Piggin
2003-08-31 10:24   ` Ian Kumlien
2003-08-31 10:41     ` Nick Piggin
2003-08-31 10:46       ` Nick Piggin
     [not found]       ` <1062326980.9959.65.camel@big.pomac.com>
     [not found]         ` <3F51D4A4.4090501@cyberone.com.au>
2003-08-31 11:08           ` Ian Kumlien
2003-08-31 11:31             ` Nick Piggin
2003-08-31 11:43               ` Ian Kumlien
2003-08-31 18:53 ` Robert Love
2003-08-31 19:31   ` Ian Kumlien
2003-08-31 19:51     ` Robert Love
2003-08-31 22:41       ` Ian Kumlien
2003-08-31 23:41         ` Robert Love
2003-09-01  0:00           ` Ian Kumlien
2003-09-01  2:50             ` Con Kolivas
2003-09-01 15:58               ` Antonio Vargas
2003-09-01 22:19               ` Ian Kumlien
2003-09-01  4:03             ` Robert Love
2003-09-01  5:07               ` Con Kolivas
2003-09-01  5:55                 ` Robert Love
2003-09-01 22:24               ` Ian Kumlien
2003-09-01 14:21             ` Antonio Vargas
2003-09-01 19:36               ` Geert Uytterhoeven
2003-09-01 22:49               ` Ian Kumlien
2003-09-01 15:07           ` Daniel Phillips
2003-09-01 14:16             ` Antonio Vargas
2003-09-01 23:03             ` Ian Kumlien
2003-09-02  0:04               ` Nick Piggin
2003-09-02  0:23               ` Con Kolivas
2003-09-02 10:25                 ` Ian Kumlien
2003-09-02 11:08                   ` Nick Piggin
2003-09-02 17:22                     ` Ian Kumlien
2003-09-02 23:49                       ` Nick Piggin
2003-09-03 23:02                         ` Ian Kumlien
2003-09-04  1:39                           ` Mike Fedyk
2003-09-02 10:44                 ` Wes Janzen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).