linux-kernel.vger.kernel.org archive mirror
From: Mike Galbraith <efault@gmx.de>
To: Con Kolivas <kernel@kolivas.org>
Cc: Voluspa <lista1@comhem.se>,
	linux kernel mailing list <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@osdl.org>
Subject: Re: [PATCH] O17int
Date: Thu, 21 Aug 2003 17:14:09 +0200	[thread overview]
Message-ID: <5.2.1.1.2.20030821154224.01990b48@pop.gmx.net> (raw)
In-Reply-To: <200308212146.14493.kernel@kolivas.org>

At 09:46 PM 8/21/2003 +1000, Con Kolivas wrote:
>On Thu, 21 Aug 2003 17:53, Mike Galbraith wrote:
> > At 03:26 PM 8/21/2003 +1000, Con Kolivas wrote:
> > >Unhappy with this latest O16.3-O17int patch, I'm withdrawing it and
> > >recommending nothing on top of O16.3 yet.
> > >
> > >More and more it just seems to be a band-aid for the priority-inverting spin
> > >on waiters, and it does seem to be a detriment to general interactivity.
> > >I can now reproduce some loss of interactive feel with O17.
> > >
> > >Something specific for the spin on waiters is required that doesn't affect
> > >general performance. The fact that I can reproduce the same starvation in
> > >vanilla 2.6.0-test3 but to a lesser extent means this is an intrinsic
> > > problem that needs a specific solution.
> >
> > I can see only one possible answer to this - never allow a normal task to
> > hold the cpu for long stretches (define) while there are other tasks
> > runnable.  (see attachment)
>
>I assume you mean the strace? That was the only attachment, and it just looks
>like shiploads of schedule() from the gettimeofday. Yes?

(no, ~2 seconds of X being awol)

> > I think the _easiest_ fix for this particular starvation (without tossing
> > baby out with bath water;) is to try to "down-shift" in schedule() when
> > next == prev.  This you can do very cheaply with a find_next_bit().  That
> > won't help the case where there are multiple tasks involved, but should fix
> > the most common case for dirt cheap.  (another simple alternative is to
> > globally "down-shift" periodically)
>
>Err, funny you should say that; that's what O17 did. But it hurt because it
>would never allow a task that used a full timeslice to be next==prev. The

If someone is marked for resched, it means we want to give someone else the 
cpu, right?  In this case at least, re-selecting blender is not the right 
thing to do.  Looks like he's burning rubber... going nowhere fast.
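
For the curious, a rough standalone sketch of the next==prev down-shift idea
(illustration only: the runqueue layout, the next_bit() helper and the task
names below are all invented for the example, not the real kernel structures;
the actual 2.6 O(1) scheduler searches a per-runqueue priority bitmap with
sched_find_first_bit()/find_next_bit()).  When the top of the bitmap would
just hand the cpu back to the task that was already running, look for the
next populated priority level and pick from there instead:

/* Illustration only -- not kernel code. */
#include <stdio.h>

#define NUM_PRIO 140
#define BITS_PER_LONG (8 * sizeof(unsigned long))

struct task {
    const char *name;
    int prio;
};

struct runqueue {
    unsigned long bitmap[(NUM_PRIO + BITS_PER_LONG - 1) / BITS_PER_LONG];
    struct task *head[NUM_PRIO];    /* head of each priority list */
};

/* find_next_bit() stand-in: first set bit at or above 'offset' */
static int next_bit(const unsigned long *map, int size, int offset)
{
    for (int i = offset; i < size; i++)
        if (map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
            return i;
    return size;
}

static struct task *pick_next(struct runqueue *rq, struct task *prev)
{
    int idx = next_bit(rq->bitmap, NUM_PRIO, 0);
    struct task *next = idx < NUM_PRIO ? rq->head[idx] : NULL;

    /* The down-shift: if the best queue would only re-select the task
     * that was already running, look one populated level lower instead. */
    if (next == prev) {
        int lower = next_bit(rq->bitmap, NUM_PRIO, idx + 1);
        if (lower < NUM_PRIO)
            next = rq->head[lower];
    }
    return next;
}

int main(void)
{
    struct runqueue rq = { { 0 } };
    struct task blender = { "blender", 15 }, shell = { "shell", 25 };

    rq.head[blender.prio] = &blender;
    rq.head[shell.prio]   = &shell;
    rq.bitmap[0] |= (1UL << blender.prio) | (1UL << shell.prio);

    /* blender was running and is still highest priority... */
    printf("next: %s\n", pick_next(&rq, &blender)->name);  /* -> shell */
    return 0;
}

The extra next_bit() scan only happens when the top queue would re-pick prev,
so the common path doesn't change.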

>  less I throttled that, the less effective the anti-starvation was. However,
>this is clearly a problem without using up full timeslices. I originally
>thought they weren't trying to schedule often because of the drop in ctx
>during starvation, but I forgot that rescheduling the same task doesn't count
>as a ctx.

Hmm.  In what way did it hurt interactivity?  I know that if you pass the 
cpu off to a non-gui task that's going to use its full 100 ms slice, you'll 
definitely feel it.  (made a workaround, will spare delicate tummies;)  If 
you mean that, say, X releases the cpu with only a couple of ms left on 
its slice and is alone in its queue, and that preempting it at the end of 
its slice after having had the cpu for such a short time after wakeup 
hurts, then you can qualify the preempt decision with a cpu possession time check.
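
To make that last point concrete, a minimal standalone sketch of a cpu
possession time check (illustration only; the fields, the function names and
the 5 ms threshold are made up, not kernel code): the end-of-slice preempt
only goes through if the task has actually had the cpu for some minimum
stretch since it last woke up.

/* Illustration only -- names, fields and the threshold are invented. */
#include <stdio.h>

#define MIN_POSSESSION_NS   (5ULL * 1000 * 1000)    /* 5 ms, arbitrary */

struct task {
    unsigned long long ran_since_wakeup_ns; /* cpu time since last wakeup */
    unsigned int time_slice_ms;             /* remaining timeslice */
};

/* Called when the tick sees the timeslice run out. */
static int should_preempt_on_slice_end(const struct task *curr)
{
    if (curr->time_slice_ms > 0)
        return 0;               /* slice not actually used up yet */

    /* The possession-time check: a freshly woken task that got the cpu
     * only a couple of ms ago keeps it a little longer. */
    return curr->ran_since_wakeup_ns >= MIN_POSSESSION_NS;
}

int main(void)
{
    /* X woke up, ran 2 ms, and its slice happens to expire right now. */
    struct task x = { .ran_since_wakeup_ns = 2ULL * 1000 * 1000,
                      .time_slice_ms = 0 };

    printf("preempt X: %s\n",
           should_preempt_on_slice_end(&x) ? "yes" : "no");    /* -> no */
    return 0;
}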

>  Also, I recall that wine games got much better in O10 when everything was
>charged at least one jiffy (pre nanosecond timing), suggesting that tasks
>repeatedly waking up for minute amounts of time were being penalised, thus
>removing the possibility of the starving task staying high priority for
>long.

(unsure what you mean here)

> > The most generally effective form of the "down-shift" anti-starvation
> > tactic that I've tried is to periodically check the head of all queues
> > below the current position (can be done very quickly), and actively select
> > the oldest task that hasn't run since some defined deadline.  Queues are
> > serviced based upon priority most of the time, and based upon age some of
> > the time.
>
>Hmm also sounds fudgy.

Yeah.  I crossbred it with a ~deadline scheduler, and created a mutt.
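
The idea, very roughly (standalone sketch with invented names, priorities
and deadline; not the actual code): pick by priority most of the time, but
every so often scan the heads of the lower queues and hand the cpu to the
oldest task that has been waiting past a deadline.

/* Illustration only -- names, priorities and the deadline are invented. */
#include <stdio.h>

#define NUM_PRIO    140
#define DEADLINE_MS 300     /* arbitrary "hasn't run in this long" cutoff */

struct task {
    const char *name;
    unsigned long last_ran_ms;  /* when it last had the cpu */
};

static struct task *queue_head[NUM_PRIO];   /* head of each priority queue */

/* Normal path: highest-priority populated queue wins. */
static struct task *pick_by_priority(void)
{
    for (int prio = 0; prio < NUM_PRIO; prio++)
        if (queue_head[prio])
            return queue_head[prio];
    return NULL;
}

/* Anti-starvation path, run only some of the time (say every Nth pick):
 * scan the queue heads below the would-be winner and take the oldest one
 * that has been waiting past the deadline. */
static struct task *pick_with_deadline(unsigned long now_ms)
{
    struct task *best = pick_by_priority();
    struct task *oldest = NULL;

    for (int prio = 0; prio < NUM_PRIO; prio++) {
        struct task *t = queue_head[prio];

        if (!t || t == best)
            continue;
        if (now_ms - t->last_ran_ms >= DEADLINE_MS &&
            (!oldest || t->last_ran_ms < oldest->last_ran_ms))
            oldest = t;
    }
    return oldest ? oldest : best;
}

int main(void)
{
    struct task hog  = { "hog",  950 }; /* keeps winning on priority */
    struct task poor = { "poor", 100 }; /* starving at low priority  */

    queue_head[10] = &hog;
    queue_head[30] = &poor;

    printf("picked: %s\n", pick_with_deadline(1000)->name); /* -> poor */
    return 0;
}

The scan is bounded by the number of priority levels, so it stays cheap even
if it runs fairly often.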

         -Mike (2'6"") 



Thread overview: 27+ messages
2003-08-19 15:01 [PATCH] O17int Con Kolivas
2003-08-19 16:39 ` Måns Rullgård
2003-08-20  1:23   ` Con Kolivas
2003-08-20  9:19     ` Måns Rullgård
2003-08-19 18:58 ` Måns Rullgård
2003-08-20  1:19   ` Con Kolivas
2003-08-20  8:53     ` Måns Rullgård
2003-08-20 16:27 ` Wiktor Wodecki
2003-08-20 16:42   ` Nick Piggin
2003-08-20 21:23   ` Con Kolivas
2003-08-21  5:26     ` Con Kolivas
2003-08-21  7:53       ` Mike Galbraith
2003-08-21 11:46         ` Con Kolivas
2003-08-21 15:14           ` Mike Galbraith [this message]
2003-08-21 22:18             ` Wes Janzen
2003-08-22  0:09               ` Con Kolivas
2003-08-22 21:17                 ` Wes Janzen
2003-08-22  0:42               ` Felipe Alfaro Solana
2003-08-22  5:34               ` Mike Galbraith
2003-08-22 20:48                 ` Wes Janzen
2003-08-21 23:59             ` Con Kolivas
2003-08-22  5:11               ` Mike Galbraith
2003-08-21  5:30 ` Apurva Mehta
2003-08-20  4:55 Voluspa
2003-08-20  8:00 ` Mike Galbraith
2003-08-20 11:21   ` Con Kolivas
2003-08-20 22:51 Voluspa
