linux-kernel.vger.kernel.org archive mirror
From: Nick Piggin <piggin@cyberone.com.au>
To: Ian Kumlien <pomac@vapor.com>
Cc: Con Kolivas <kernel@kolivas.org>,
	Daniel Phillips <phillips@arcor.de>,
	linux-kernel@vger.kernel.org, Robert Love <rml@tech9.net>
Subject: Re: [SHED] Questions.
Date: Tue, 02 Sep 2003 21:08:59 +1000
Message-ID: <3F547A4B.7060309@cyberone.com.au>
In-Reply-To: <1062498307.5171.267.camel@big.pomac.com>



Ian Kumlien wrote:

>On Tue, 2003-09-02 at 02:23, Con Kolivas wrote:
>
>>On Tue, 2 Sep 2003 09:03, Ian Kumlien wrote:
>>
>>>On Mon, 2003-09-01 at 17:07, Daniel Phillips wrote:
>>>
>>>>IMHO, this minor change will provide a more solid, predictable base for
>>>>Con and Nick's dynamic priority and dynamic timeslice experiments.
>>>>
>>>Most definitely.
>>>
>>No, the correct answer is maybe... if, after it's redesigned, it's put through 
>>lots of testing to ensure it doesn't create other regressions. I'm not saying 
>>it isn't correct, just that it's a major architectural change you're 
>>promoting. Now isn't the time for that.
>>
>>Why not just wait till 2.6.10 and plop in a new scheduler, a la dropping in a 
>>new VM into 2.4.10... <sigh> 
>>
>
>Wouldn't a new scheduler be easier to test? And your patches change
>its behavior quite a lot. Wouldn't they require the same testing?
>(And Nick's, for that matter, which change even more.)
>

Well, a new scheduler needs the same testing as an old scheduler.
The difference is that less of it has been done.

>
>>The cpu scheduler simply isn't broken in the way people on this mailing list seem 
>>to think it is. While my tweaks _look_ large, they're really just tweaking 
>>the way the numbers feed back into a basically unchanged design. All the 
>>incremental changes have been modifying the same small sections of sched.c 
>>over and over again. Nick's changes alter the size of timeslices and the 
>>priority variation in a much more fundamental way, but still use the basic 
>>architecture of the scheduler. 
>>
>
>But can't this scheduler suffer from starvation if the run queue is
>long enough? Either via that deadline or via processes not running...
>
>Wouldn't a starved-process boost ensure that even hogs on a loaded
>system got their share now and then?
>
>You could say that the problem with the current scheduler is that it's
>not allowed to starve anything; that's why we add stuff to give an
>interactive bonus. But if it *was* allowed to starve, yet gave a bonus to
>the starved processes, that would make most of the interactive detection
>useless (yes, we still need the "didn't use their timeslice" bit, and
>with a timeslice that gets smaller the higher the priority, we'd
>automagically balance most processes).
>
>(As usual my assumptions might be really wrong...)
>

First off, no general-purpose scheduler should allow starvation, depending
on your definition. The interactivity stuff, and even dynamic priorities,
allow only short-term unfairness.

A CPU hog is not a starved process; it becomes a CPU hog because it is
running all the time. True, it mustn't be starved indefinitely by a lot
of higher-priority processes. That is something Con's scheduler and mine
both ensure.
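
To make that guarantee concrete, here is a toy user-space model of the kind
of starvation cap being described. The names echo the 2.6 O(1) scheduler
(expired_timestamp, STARVATION_LIMIT), but the constants, helper functions
and scaling are illustrative only, not the actual sched.c code: interactive
tasks may be requeued ahead of expired ones only until something has waited
longer than a load-scaled cap.

/*
 * starvation_cap.c -- toy user-space model of a starvation cap.  Names echo
 * the 2.6 O(1) scheduler, but constants and helpers are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

#define HZ                 1000UL      /* ticks per second (illustrative) */
#define STARVATION_LIMIT   (10 * HZ)   /* per-runnable-task waiting cap */

static unsigned long jiffies;          /* stand-in for the kernel tick counter */

struct runqueue {
	unsigned long nr_running;          /* how many tasks want the CPU */
	unsigned long expired_timestamp;   /* tick when the expired list first got a task */
};

/* True once something has sat on the expired list "too long" for this load. */
static bool expired_starving(const struct runqueue *rq)
{
	if (!rq->expired_timestamp)
		return false;
	return jiffies - rq->expired_timestamp > STARVATION_LIMIT * rq->nr_running;
}

/*
 * When a task burns through its timeslice: interactive tasks normally go
 * straight back onto the active list (short-term unfairness), but only while
 * nothing on the expired list is starving -- after that, everybody queues up
 * behind the waiters, so even the lowest-priority task runs eventually.
 */
static const char *requeue_decision(const struct runqueue *rq, bool interactive)
{
	if (interactive && !expired_starving(rq))
		return "active list (jumps the queue)";
	return "expired list (waits its turn)";
}

int main(void)
{
	struct runqueue rq = { .nr_running = 4, .expired_timestamp = HZ };

	/* With 4 runnable tasks the cap works out to 40 "seconds" here. */
	for (jiffies = HZ; jiffies <= 51 * HZ; jiffies += 10 * HZ)
		printf("t=%2lus  interactive task -> %s\n",
		       jiffies / HZ, requeue_decision(&rq, true));
	return 0;
}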

Hmm... what else? The "didn't use their timeslice" thing doesn't apply:
a new timeslice isn't handed out until the previous one has been used up.
The priority adjustment is based on how much sleeping the process does.
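
As a rough sketch of that sleep-based priority idea (the constants and the
dynamic_prio() helper below are made up for illustration; this is not the
real priority code in sched.c): tasks that have been sleeping a lot get
their dynamic priority pulled up, CPU hogs get pulled down, and the swing
is bounded so the static (nice-level) priority still dominates.

/*
 * sleep_bonus.c -- toy model of sleep-based dynamic priority.  Constants and
 * the dynamic_prio() helper are illustrative, not the actual sched.c code.
 */
#include <stdio.h>

#define MAX_SLEEP_AVG	1000	/* max milliseconds of credited sleep */
#define MAX_BONUS	10	/* total swing of the dynamic adjustment */

/*
 * Map "how much has this task been sleeping lately" to a priority tweak:
 * heavy sleepers get boosted, pure CPU hogs get penalised, and the swing is
 * capped.  Lower value means higher priority.
 */
static int dynamic_prio(int static_prio, int sleep_avg_ms)
{
	int bonus = sleep_avg_ms * MAX_BONUS / MAX_SLEEP_AVG - MAX_BONUS / 2;

	return static_prio - bonus;
}

int main(void)
{
	printf("cpu hog     (sleep_avg=   0): prio %d\n", dynamic_prio(120, 0));
	printf("mixed load  (sleep_avg= 500): prio %d\n", dynamic_prio(120, 500));
	printf("interactive (sleep_avg=1000): prio %d\n", dynamic_prio(120, 1000));
	return 0;
}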

It's funny: everyone seems to have very similar ideas, which they are
expressing by describing the different implementations they have in mind.



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Thread overview: 36+ messages
2003-08-31 10:07 [SHED] Questions Ian Kumlien
2003-08-31 10:17 ` Nick Piggin
2003-08-31 10:24   ` Ian Kumlien
2003-08-31 10:41     ` Nick Piggin
2003-08-31 10:46       ` Nick Piggin
     [not found]       ` <1062326980.9959.65.camel@big.pomac.com>
     [not found]         ` <3F51D4A4.4090501@cyberone.com.au>
2003-08-31 11:08           ` Ian Kumlien
2003-08-31 11:31             ` Nick Piggin
2003-08-31 11:43               ` Ian Kumlien
2003-08-31 18:53 ` Robert Love
2003-08-31 19:31   ` Ian Kumlien
2003-08-31 19:51     ` Robert Love
2003-08-31 22:41       ` Ian Kumlien
2003-08-31 23:41         ` Robert Love
2003-09-01  0:00           ` Ian Kumlien
2003-09-01  2:50             ` Con Kolivas
2003-09-01 15:58               ` Antonio Vargas
2003-09-01 22:19               ` Ian Kumlien
2003-09-01  4:03             ` Robert Love
2003-09-01  5:07               ` Con Kolivas
2003-09-01  5:55                 ` Robert Love
2003-09-01 22:24               ` Ian Kumlien
2003-09-01 14:21             ` Antonio Vargas
2003-09-01 19:36               ` Geert Uytterhoeven
2003-09-01 22:49               ` Ian Kumlien
2003-09-01 15:07           ` Daniel Phillips
2003-09-01 14:16             ` Antonio Vargas
2003-09-01 23:03             ` Ian Kumlien
2003-09-02  0:04               ` Nick Piggin
2003-09-02  0:23               ` Con Kolivas
2003-09-02 10:25                 ` Ian Kumlien
2003-09-02 11:08                   ` Nick Piggin [this message]
2003-09-02 17:22                     ` Ian Kumlien
2003-09-02 23:49                       ` Nick Piggin
2003-09-03 23:02                         ` Ian Kumlien
2003-09-04  1:39                           ` Mike Fedyk
2003-09-02 10:44                 ` Wes Janzen
