From: Dave Chinner <david@fromorbit.com>
To: Wolfgang Denk <wd@denx.de>
Cc: Justin Piszcz <jpiszcz@lucidpixels.com>,
	linux-raid@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: Optimize RAID0 for max IOPS?
Date: Tue, 25 Jan 2011 10:03:14 +1100
Message-ID: <20110124230314.GA11040@dastard>
In-Reply-To: <20110124215713.82D75B187@gemini.denx.de>

On Mon, Jan 24, 2011 at 10:57:13PM +0100, Wolfgang Denk wrote:
> Dear Justin,
> 
> In message <alpine.DEB.2.00.1101241024230.14640@p34.internal.lan> you wrote:
> > 
> > Some info on XFS benchmark with delaylog here:
> > http://comments.gmane.org/gmane.comp.file-systems.xfs.general/34379
> 
> For the record: I tested both the "delaylog" and "logbsize=262144"
> mount options on two systems running Fedora 14 x86_64 (kernel version
> 2.6.35.10-74.fc14.x86_64).
> 
> 
> Test No.	Mount options
> 1		rw,noatime
> 2		rw,noatime,delaylog
> 3		rw,noatime,delaylog,logbsize=262144
> 
> 
> System A: Gigabyte EP35C-DS3R Mainboard, Core 2 Quad CPU Q9550 @ 2.83GHz, 4 GB RAM
> --------- software RAID 5 using 4 x old Maxtor 7Y250M0 S-ATA I disks
> 	  (chunk size 16 kB, using S-ATA ports on main board), XFS
> 
> Test 1:
> 
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> A1               8G   844  96 153107  19 56427  11  2006  98 127174  15 369.4   6
> Latency             13686us    1480ms    1128ms   14986us     136ms   74911us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> A1                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16   104   0 +++++ +++   115   0    89   0 +++++ +++   111   0

Only 16 files? You need to test something that takes more than 5
milliseconds to run. Given that XFS can run at >20,000 creates/s for
a single threaded sequential create like this, perhaps you should
start at 100,000 files (maybe a million) so you get an idea of
sustained performance.
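
Something like the following would be a better starting point (a
sketch, untested here; if I remember the bonnie++ options correctly,
-n counts files in units of 1024, so -n 128 gives you ~131k files):

    # -d: target directory, -u: user to run as when root,
    # -s 0: skip the sequential IO phases so only the
    # create/stat/delete metadata tests run
    bonnie++ -d /mnt/scratch -u nobody -s 0 -n 128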

.....

> I do not see any significant improvement in any of the parameters -
> especially when compared to the serious performance degradation (down
> to 44% for block write, 42% for block read) on system A.

delaylog does not affect the block IO path in any way, so something
else is going on there. You need to sort that out before drawing any
conclusions.
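
The first thing I'd do is watch the disks while the tests run and see
whether the IO pattern itself changes between the mount configs. Even
something as simple as this (iostat ships in the sysstat package):

    # extended per-device stats every 5 seconds: request sizes,
    # queue depth and utilisation will show where the IO is going
    iostat -x 5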

Similarly, you need to test something relevant to your workload, not
use a canned benchmark in the expectation that the results are in any
way meaningful to your real workload. Also, if you do use a stupid
canned benchmark, make sure you configure it to test something
relevant to what you are trying to compare...
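
If the goal really is maximum IOPS on the array (per the subject
line), describe that IO pattern directly with something like fio. A
rough sketch only - the job parameters here (4k random mixed IO, 4
jobs, queue depth 32) are placeholders you'd replace with numbers
that match your real application:

    # O_DIRECT async random read/write mix against the filesystem,
    # bypassing the page cache so you measure the array, not RAM
    fio --name=iops-test --directory=/mnt/test \
        --rw=randrw --bs=4k --size=2g \
        --numjobs=4 --iodepth=32 \
        --ioengine=libaio --direct=1 \
        --group_reporting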

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
