From: Andrew Morton <akpm@digeo.com>
To: rwhron@earthlink.net
Cc: linux-kernel@vger.kernel.org
Subject: Re: [BENCHMARK] 2.5.68 and 2.5.68-mm2
Date: Fri, 25 Apr 2003 16:25:48 -0700
Message-ID: <20030425162548.3c32c207.akpm@digeo.com>
In-Reply-To: <20030425230939.GA2281@rushmore>

rwhron@earthlink.net wrote:
>
> There are a few benchmarks that have changed dramatically
> between 2.5.68 and 2.5.68-mm2.  
> 
> Machine is Quad P3 700 MHz Xeon with 1MB cache.
> 3.75 GB RAM.
> RAID0 LUN
> QLogic 2200 Fiber channel

SMP + SCSI + ext[23] + tiobench -> sad.

> One recent change: -mm2 is 17-19% faster at tbench.
> The logfiles don't indicate any errors.  Wonder what helped?

CPU scheduler changes perhaps.  There are some in there, but they are
small.

> The autoconf-2.53 make/make check is a fork test.   2.5.68
> is about 13% faster here.

I wonder why.  Which fs was it?

> On the AIM7 database test, -mm2 was about 18% faster and
> uses about 15% more CPU time.  (Real and CPU are in seconds).  
> The new Qlogic driver helps AIM7.

iirc, AIM7 is dominated by lots of O_SYNC writes.  I'd have expected the
anticipatory scheduler to do worse.  Odd.  Which fs was it?
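
For reference, the sort of I/O meant by "lots of O_SYNC writes" is
roughly this (a minimal sketch; the file name, write size and iteration
count are invented):

	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		int i, fd = open("dbfile", O_WRONLY | O_CREAT | O_SYNC, 0644);

		memset(buf, 0, sizeof(buf));
		for (i = 0; i < 10000; i++)
			write(fd, buf, sizeof(buf));	/* each write is on disk before it returns */
		close(fd);
		return 0;
	}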

> On the AIM7 shared test, -mm2 is 15-19% faster and 
> uses about 5% more CPU time.

Might be the ext3 BKL removal?

> 
> Processor, Processes - times in microseconds - smaller is better
> ----------------------------------------------------------------
>                    fork    execve  /bin/sh
> kernel           process  process  process
> -------------    -------  -------  -------
> 2.5.68               243      979     4401
> 2.5.68-mm2           502     1715     5200

How odd.  I'll check that.

> tiobench-0.3.3

tiobench will create a bunch of processes, each growing a large file, all
in the same directory.  Doing this on SMP (especially with SCSI) just goes
straight for the jugular of the ext3 and ext2 block allocators.  This is
why you saw such crap numbers with the fs comparison.

The tiobench datafiles end up being astonishingly fragmented.
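
One way to see that for yourself is to walk a file with the FIBMAP ioctl
and count the discontiguous runs.  A rough sketch only (error handling is
mostly omitted, holes aren't handled, and FIBMAP needs root):

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/stat.h>
	#include <linux/fs.h>		/* FIBMAP, FIGETBSZ */

	int main(int argc, char **argv)
	{
		struct stat st;
		int fd = open(argv[1], O_RDONLY);
		int bsz, nblocks, i, phys, prev = -1, frags = 0;

		fstat(fd, &st);
		ioctl(fd, FIGETBSZ, &bsz);
		nblocks = (st.st_size + bsz - 1) / bsz;
		for (i = 0; i < nblocks; i++) {
			phys = i;			/* logical block in... */
			ioctl(fd, FIBMAP, &phys);	/* ...physical block out */
			if (phys != prev + 1)
				frags++;
			prev = phys;
		}
		printf("%s: %d fragments in %d blocks\n", argv[1], frags, nblocks);
		return 0;
	}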

You won't see the problem on uniprocessor, because each task can allocate
contiguous runs of blocks in a timeslice.
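
The pattern is easy to reproduce without tiobench: a few processes, each
appending to its own file in one directory, all at the same time.
Something like this sketch (file names, process count and sizes are
invented):

	#include <stdio.h>
	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/wait.h>

	int main(void)
	{
		char buf[65536], name[32];
		int n, i, fd;

		memset(buf, 0, sizeof(buf));
		for (n = 0; n < 4; n++) {
			if (fork() == 0) {
				sprintf(name, "tiofile.%d", n);
				fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
				for (i = 0; i < 4096; i++)	/* ~256MB per file */
					write(fd, buf, sizeof(buf));
				_exit(0);
			}
		}
		while (wait(NULL) > 0)
			;
		return 0;
	}

Run it on an ext2 or ext3 partition and feed the resulting files to the
FIBMAP sketch above to see how badly the blocks interleave.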

It's better (but still crap) on ext2 because ext2 manages to allocate
fragments of 8 blocks, not 1 block.  If you bump
EXT2_DEFAULT_PREALLOC_BLOCKS from 8 to 32 you'll get better ext2 numbers.
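
That's a one-line change; in 2.5-era trees the define lives in
include/linux/ext2_fs.h (check your own tree):

	-#define EXT2_DEFAULT_PREALLOC_BLOCKS	8
	+#define EXT2_DEFAULT_PREALLOC_BLOCKS	32

With that, ext2 preallocates runs of 32 blocks per file instead of 8, so
concurrent writers tread on each other's allocations less often.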

The benchmark is hitting a pathological case.  Yeah, it's a problem, but
it's not as bad as tiobench indicates.

The place where it is more likely to hurt is when an application is slowly
growing more than one file.  Something like:

	for (i = 0; i < lots; i++) {	/* interleave small appends to two files */
		write(fd1, buf1, 4096);
		write(fd2, buf2, 4096);
	}

The ext[23] block allocators need significant redesign to fix this up for
real.



