linux-kernel.vger.kernel.org archive mirror
From: Con Kolivas <kernel@kolivas.org>
To: Al Boldi <a1426z@gawab.com>
Cc: ck list <ck@vds.kolivas.org>,
	linux-kernel@vger.kernel.org, Mike Galbraith <efault@gmx.de>
Subject: Re: [patch][rfc] quell interactive feeding frenzy
Date: Sun, 16 Apr 2006 18:58:23 +1000	[thread overview]
Message-ID: <200604161858.24171.kernel@kolivas.org> (raw)
In-Reply-To: <200604161131.02585.a1426z@gawab.com>

On Sunday 16 April 2006 18:31, Al Boldi wrote:
> Con Kolivas wrote:
> > On Thursday 13 April 2006 01:25, Al Boldi wrote:
> > > Con Kolivas wrote:
> > > > mean 68.7 seconds
> > > >
> > > > range 63-73 seconds.
> > >
> > > Could this 10s skew be improved to around 1s to aid smoothness?
> >
> > It turns out to be dependent on accounting of system time, which only
> > staircase does at the moment btw. Currently it's done on a jiffy basis.
> > Increasing the accuracy of this would incur incredible cost which I
> > don't consider worth it.
>
> Is this also related to that?

No.
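The jiffy-based accounting mentioned above can be illustrated with a toy model (plain Python, not the staircase code; the numbers are made up for illustration): time is only ever charged in whole jiffies, so a task whose runs between ticks are shorter than a jiffy can be badly mis-charged, and the error only averages out once enough data accumulates.

```python
def jiffy_account(usages_ms, jiffy_ms=1):
    """Toy model of tick-granularity CPU accounting: at each
    accounting point only whole jiffies are charged, and any
    sub-jiffy remainder is simply lost."""
    charged_ms = 0
    for used in usages_ms:
        charged_ms += int(used // jiffy_ms) * jiffy_ms
    return charged_ms

# A task that runs 0.9 ms between 1000HZ ticks, 100 times over,
# really uses 90 ms of CPU yet is charged nothing here; a task
# running 1.9 ms per tick is charged only 100 ms for 190 ms used.
print(jiffy_account([0.9] * 100))   # 0
print(jiffy_account([1.9] * 100))   # 100
```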

> > > Much smoother, but I still get this choke w/ 2 eatm 9999 loops running:
> > >
> > > 9 MB 783 KB eaten in 130 msec (74 MB/s)
> > > 9 MB 783 KB eaten in 2416 msec (3 MB/s)		<<<<<<<<<<<<<
> > > 9 MB 783 KB eaten in 197 msec (48 MB/s)
> > >
> > > You may have to adjust the kb to get the same effect.
> >
> > I've seen it. It's an artefact of timekeeping that it takes an
> > accumulation of data to get all the information. Not much I can do about
> > it except to have timeslices so small that they thrash the crap out of
> > cpu caches and completely destroy throughput.
>
> So why is this not visible in other schedulers?

When I said there's not much I can do about it I mean with respect to the 
design.

> Are you sure this is not a priority boost problem?

Indeed it is related to the way cpu is proportioned out in staircase as 
both priority and slice. Problem? The magnitude of said problem is up to the 
observer to decide. It's a phenomenon that appears only with two concurrent, 
infinitely repeating, rapidly forking workloads where one forks more often 
than every 100ms and the other less often; ie your test case. I'm sure 
there's a real world workload somewhere somehow that exhibits this, but it's 
important to remember that overall it's fair, with the occasional blip.

> > The current value, 6ms at 1000HZ, is chosen because it's the largest
> > value that can schedule a task in less than normal human perceptible
> > range when two competing heavily cpu bound tasks are the same priority.
> > At 250HZ it works out to 7.5ms and 10ms at 100HZ. Ironically in my
> > experimenting I found the cpu cache improvements become much less
> > significant above 7ms so I'm very happy with this compromise.
>
> Would you think this is dependent on cache-size and cpu-speed?

It is. Cache warmth time varies with architecture and design. Of course you're 
going to tell me to add a tunable and/or autotune this. Then that undoes 
limiting it to the human perception range. It really does cost us to export 
these things which are otherwise compile time constants... sigh.
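The HZ arithmetic quoted above can be sketched as follows. This is a hypothetical rounding model, not the actual staircase formula (which evidently keeps sub-jiffy resolution, since it yields 7.5ms at 250HZ rather than a whole number of 4ms jiffies); it only shows how tick frequency bounds slice granularity.

```python
import math

def ms_per_jiffy(hz):
    # one timer tick lasts 1000/HZ milliseconds
    return 1000 / hz

def slice_jiffies(target_ms, hz):
    # a slice accounted per tick can't be shorter than one jiffy;
    # round the target up to a whole number of jiffies
    return max(1, math.ceil(target_ms / ms_per_jiffy(hz)))

for hz in (1000, 250, 100):
    j = slice_jiffies(6, hz)
    print(hz, j, j * ms_per_jiffy(hz))
# at 1000HZ the 6 ms target stays 6 ms; at 100HZ a single 10 ms
# jiffy already exceeds the target, matching the 10 ms figure quoted
```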

> Also, what's this iso_cpu thing?

SCHED_ISO cpu usage which you're not using.

-- 
-ck
