linux-kernel.vger.kernel.org archive mirror
From: Con Kolivas <kernel@kolivas.org>
To: Nick Piggin <piggin@cyberone.com.au>
Cc: linux-kernel@vger.kernel.org, Andrew Morton <akpm@digeo.com>
Subject: Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
Date: Tue, 4 Mar 2003 16:29:45 +1100
Message-ID: <200303041629.46019.kernel@kolivas.org>
In-Reply-To: <3E64390F.7090309@cyberone.com.au>

On Tue, 4 Mar 2003 04:26 pm, Nick Piggin wrote:
> Con Kolivas wrote:
> >On Tue, 4 Mar 2003 03:18 pm, Nick Piggin wrote:
> >>small randomish reads vs large writes _is_ where AS really can
> >>perform better than a non-AS scheduler. Unfortunately gcc
> >>doesn't have the _best_ IO pattern for AS ;)
> >
> >Yes, I recall this discussion about a gcc-based benchmark. However it is
> >interesting that AS still performed by far the best.
>
> Yes, AS obviously does help gcc against io_load. My
> "unfortunately" comment was just a pun, of course we
> don't want to just test where AS does well.
>
> >>>CFQ and DL scheduler were faster compiling the kernel under read_load,
> >>>list_load and dbench_load.
> >>>
> >>>Mem_load result of AS being slower was just plain weird with the result
> >>>rising from 100 to 150 during testing.
> >>
> >>I would like to see if AS helps much with a swap/memory
> >>thrashing load.
> >
> >That's what mem_load is: it repeatedly tries to access 110% of available
> > RAM. To quote from the original post:
> >mem_load:
> >Kernel         [runs]   Time    CPU%    Loads   LCPU%   Ratio
> >2.5.63              3   104     75.0    57.7    1.9     1.32
> >2.5.63-mm2cfq       3   101     76.2    52.3    2.0     1.28
> >2.5.63-mm2          3   132     59.1    90.3    2.3     1.65
> >2.5.63-mm2dl        3   100     79.0    52.0    2.0     1.27
> >
> >Note that mm2 with AS initially performed equivalently to the other
> > schedulers but took longer on later runs (99, 148, 150). This is usually
> > a sign of a memory leak, which contest is unusually sensitive at picking
> > up, but there wasn't anything suspicious in meminfo after these runs, and
> > none of the other loads changed over time. io_load usually shows drastic
> > prolongation when memory is leaking.
>
> Ah ok. And this change didn't affect other schedulers on mm2? Is
> it reproducible with AS? I'll have to keep this in mind and take
> another look at it after a few other bugs are fixed.

Not on the other schedulers, no. I'll throw some more benchmarks at it to see 
if it recurs. I didn't think much of it at the time.

Con


Thread overview: 8+ messages
2003-03-04  2:54 [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest Con Kolivas
2003-03-04  4:18 ` Nick Piggin
2003-03-04  5:15   ` Con Kolivas
2003-03-04  5:26     ` Nick Piggin
2003-03-04  5:29       ` Con Kolivas [this message]
2003-03-04  8:10 ` Andrew Morton
2003-03-04  8:20   ` Con Kolivas
2003-03-05  6:02   ` Con Kolivas
