linux-kernel.vger.kernel.org archive mirror
From: Con Kolivas <kernel@kolivas.org>
To: Al Boldi <a1426z@gawab.com>
Cc: ck list <ck@vds.kolivas.org>,
	linux-kernel@vger.kernel.org, Mike Galbraith <efault@gmx.de>
Subject: Re: [patch][rfc] quell interactive feeding frenzy
Date: Wed, 12 Apr 2006 19:36:15 +1000	[thread overview]
Message-ID: <200604121936.16584.kernel@kolivas.org> (raw)
In-Reply-To: <200604121117.01393.a1426z@gawab.com>

On Wednesday 12 April 2006 18:17, Al Boldi wrote:
> Con Kolivas wrote:
> > Single heavily cpu bound computationally intensive tasks (think rendering
> > etc).
>
> Why do you need a switch for that?

Because checking need_resched and reassessing priority at less regular 
intervals means less overhead, and there is always something else running on 
a PC. At low loads the longer timeslices and delayed preemption contribute 
considerably to cache warmth and throughput. Comparing staircase's 
sched_compute mode on kernbench at "optimal loads" (make -j4 x num_cpus) 
showed the best throughput of all the schedulers tested.
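
The "optimal load" figure above is just arithmetic; a minimal sketch of it 
(the function name is mine, not anything from kernbench itself):

```python
import os

def optimal_load_jobs(cpus=None):
    """Kernbench-style "optimal load": 4 make jobs per online CPU."""
    if cpus is None:
        cpus = os.cpu_count() or 1
    return cpus * 4

# e.g. on a 2-CPU box the run discussed above would be "make -j8"
print(f"make -j{optimal_load_jobs()}")
```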

> > Sorry I don't understand what you mean. Why do you say it's not fair (got
> > a testcase?). What do you mean by "definitely not smooth". What is
> > smoothness and on what workloads is it not smooth? Also by ia you mean
> > what?
>
> ia=interactivity i.e: responsiveness under high load.
> smooth=not jumpy i.e: run '# gears & morph3d & reflect &' w/o stutter

Installed and tested them here just now; they run smoothly concurrently. Are 
you testing on staircase15?

> fair=non hogging i.e: spreading cpu-load across tasks evenly (top d.1)

Only unblocked processes/threads where one depends on the other fail to get 
an equal share, which is as broken a testcase as relying on sched_yield. I 
have not seen a testcase demonstrating unfairness on current staircase; top 
shows me fair cpu usage.
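
For the record, a minimal fairness testcase of the kind asked for above 
might look like this: fully independent CPU hogs (none blocking on or 
feeding another) should end up with roughly equal work done. This is my own 
sketch, not anything from the thread:

```python
import multiprocessing as mp
import time

def hog(duration, counter):
    """Pure CPU burner: spin incrementing a counter for `duration` seconds."""
    deadline = time.time() + duration
    n = 0
    while time.time() < deadline:
        n += 1
    counter.value = n

if __name__ == "__main__":
    nprocs = 4  # independent hogs; none blocks on or feeds another
    counters = [mp.Value("q", 0) for _ in range(nprocs)]
    procs = [mp.Process(target=hog, args=(1.0, c)) for c in counters]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    work = [c.value for c in counters]
    # Under a fair scheduler, independent hogs show only a small
    # relative spread in work completed.
    spread = (max(work) - min(work)) / max(work)
    print(f"work: {work}  relative spread: {spread:.2f}")
```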

> > Again I don't understand. Just how heavy a load is heavy? Your testcases
> > are already in what I would call stratospheric range. I don't personally
> > think a cpu scheduler should be optimised for load infinity. And how are
> > you defining efficient? You say it doesn't "look" efficient? What "looks"
> > inefficient about it?
>
> The idea here is to expose inefficiencies by driving the system into
> saturation, and although staircase is more efficient than the default 2.6
> scheduler, it is obviously less efficient than spa.

Where do you stop calling something saturation and start calling it absurd? By 
your reckoning staircase is stable to loads of 300 on one cpu. spa being 
stable to higher loads is hardly comparable given the interactivity disparity 
between it and staircase. A compromise is one that does both very well; not 
one perfectly and the other poorly.

> > You want tunables? The only tunable in staircase is rr_interval which (in
> > -ck) has an on/off for big/small (sched_compute) since most other numbers
> > in between (in my experience) are pretty meaningless. I could export
> > rr_interval directly instead... I've not seen a good argument for doing
> > that. Got one?
>
> Smoothness control, maybe?

Have to think about that one. I'm not seeing a smoothness issue.

> > However there are no other tunables at all (just look at
> > the code). All tasks of any nice level have available the whole priority
> > range from 100-139 which appears as PRIO 0-39 on top. Limiting that
> > (again) changes the semantics.
>
> Yes, limiting this could change the semantics for the sake of fairness,
> it's up to you.

There is no problem with fairness that I am aware of.
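
The priority-range mapping quoted above (kernel priorities 100-139 appearing 
as PRIO 0-39 in top) is simple arithmetic; a sketch, assuming MAX_RT_PRIO = 
100 as in the kernels under discussion:

```python
MAX_RT_PRIO = 100  # first non-realtime kernel priority

def top_prio(kernel_prio):
    """Map a kernel dynamic priority (100-139) to top's PRIO column (0-39)."""
    if not MAX_RT_PRIO <= kernel_prio <= MAX_RT_PRIO + 39:
        raise ValueError("not a SCHED_NORMAL priority")
    return kernel_prio - MAX_RT_PRIO

# e.g. kernel priority 120 (the nice 0 default) shows as PRIO 20 in top
```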

Thanks!

-- 
-ck


Thread overview: 54+ messages
     [not found] <200604112100.28725.kernel@kolivas.org>
2006-04-11 17:03 ` Fwd: Re: [patch][rfc] quell interactive feeding frenzy Al Boldi
2006-04-11 22:56   ` Con Kolivas
2006-04-12  5:41     ` Al Boldi
2006-04-12  6:22       ` Con Kolivas
2006-04-12  8:17         ` Al Boldi
2006-04-12  9:36           ` Con Kolivas [this message]
2006-04-12 10:39             ` Al Boldi
2006-04-12 11:27               ` Con Kolivas
2006-04-12 15:25                 ` Al Boldi
2006-04-13 11:51                   ` Con Kolivas
2006-04-14  3:16                     ` Al Boldi
2006-04-15  7:05                       ` Con Kolivas
2006-04-15 18:23                         ` [ck] " Michael Gerdau
2006-04-15 20:45                         ` Al Boldi
2006-04-15 23:22                           ` Con Kolivas
2006-04-16 18:44                             ` [ck] " Andreas Mohr
2006-04-17  0:08                               ` Con Kolivas
2006-04-19  8:37                                 ` Andreas Mohr
2006-04-19  8:59                                   ` jos poortvliet
2006-04-15 22:32                         ` jos poortvliet
2006-04-15 23:06                           ` Con Kolivas
2006-04-16  6:02                   ` Con Kolivas
2006-04-16  8:31                     ` Al Boldi
2006-04-16  8:58                       ` Con Kolivas
2006-04-16 10:37                       ` was " Con Kolivas
2006-04-16 19:03                         ` Al Boldi
2006-04-16 23:26                           ` Con Kolivas
2006-04-09 16:44 [patch][rfc] " Al Boldi
2006-04-09 18:33 ` Mike Galbraith
2006-04-10 14:43   ` Al Boldi
2006-04-11 10:57     ` Con Kolivas
  -- strict thread matches above, loose matches on Subject: below --
2006-04-07  9:38 Mike Galbraith
2006-04-07  9:47 ` Andrew Morton
2006-04-07  9:52   ` Ingo Molnar
2006-04-07 10:57     ` Mike Galbraith
2006-04-07 11:00       ` Con Kolivas
2006-04-07 11:09         ` Mike Galbraith
2006-04-07 10:40   ` Mike Galbraith
2006-04-07 12:56 ` Con Kolivas
2006-04-07 13:37   ` Mike Galbraith
2006-04-07 13:56     ` Con Kolivas
2006-04-07 14:14       ` Mike Galbraith
2006-04-07 15:16         ` Mike Galbraith
2006-04-09 11:14         ` bert hubert
2006-04-09 11:39           ` Mike Galbraith
2006-04-09 12:14             ` bert hubert
2006-04-09 18:07               ` Mike Galbraith
2006-04-10  9:12                 ` bert hubert
2006-04-10 10:00                   ` Mike Galbraith
2006-04-10 14:56                     ` Mike Galbraith
2006-04-13  7:41                       ` Mike Galbraith
2006-04-13 10:16                         ` Con Kolivas
2006-04-13 11:05                           ` Mike Galbraith
2006-04-09 18:24               ` Mike Galbraith
