linux-kernel.vger.kernel.org archive mirror
* Journal Filesystem Comparison on Netbench
@ 2001-08-27 15:02 Andrew Theurer
  2001-08-27 18:24 ` Journal FS Comparison on IOzone (was Netbench) Randy.Dunlap
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Andrew Theurer @ 2001-08-27 15:02 UTC (permalink / raw)
  To: linux-kernel

Hello all,

I recently started doing some fs performance comparisons with Netbench
and the journal filesystems available in 2.4: Reiserfs, JFS, XFS, and
Ext3.  I thought some of you may be interested in the results.  Below
is the README from http://lse.sourceforge.net.  There is a kernprof
for each test, and I am working on the lockmeter stuff right now.  Let
me know if you have any comments.

Andrew Theurer
IBM LTC 


README:

http://lse.sourceforge.net/benchmarks/netbench/results/august_2001/filesystems/raid1e/README

The following is a filesystem comparison on the NetBench workload.
Filesystems tested include EXT2, EXT3, Reiserfs, XFS, and JFS.  
Server hardware is an 8-processor Intel Xeon, 700 MHz, 1MB L2
cache, Profusion chipset, 4GB interleaved memory, and 4 Intel
gigabit ethernet cards.  This test was conducted using a
RAID disk system, consisting of an IBM ServeRAID 4H SCSI adapter,
equipped with 32 MB cache, using one SCSI channel, attached to 10
disks, each having a capacity of 9 GB and a speed of 10,000 RPM.
The RAID was configured for level 10, a 5-disk stripe with mirror.

The server was tested using Linux 2.4.7, Samba 2.2.0, and NetBench
7.0.1.  Since we only have enough clients to drive a 4-way SMP test
(44), the kernel used 4 processors instead of eight.  The
"Enterprise Disk Suite" test was used for NetBench.  Each filesystem
was run through the same test, starting with 4 clients and increasing
by 4 up to 44 clients.

Some optimizations were used for Linux, including zerocopy,
IRQ affinity, and interrupt delay for the gigabit cards,
and process affinity for the smbd processes.
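
As an illustration only (the IRQ number and CPU mask below are made up,
not the values used in these tests), IRQ affinity on a 2.4 kernel can
be set from user space like this:

    # bind the interrupt for one NIC (here assumed to be IRQ 24) to CPU 0;
    # the value is a hex bitmask of allowed CPUs
    echo 1 > /proc/irq/24/smp_affinity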

Default configurations for all filesystems were used, except that ext3
used mode "data=writeback".  No special performance options were chosen
for these initial tests.  If you know of performance options that would
benefit this test, please send me email at habanero@us.ibm.com.
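
For reference, a minimal sketch of the kind of mount the ext3 runs
imply (the device and mount point are placeholders, not the actual
test configuration):

    # ext3 with metadata-only journaling; data=writeback does not
    # journal file data, and gives no data/metadata ordering guarantee
    mount -t ext3 -o data=writeback /dev/sdb1 /export/share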


Peak Performance Results:

EXT2      773 Mbps @ 44 clients
EXT3      660 Mbps @ 44 clients
Reiserfs  532 Mbps @ 28 clients
XFS       661 Mbps @ 44 clients
JFS       683 Mbps @ 40 clients


Data Files:

This directory contains:

        kp.html         Kernprof top 25 list for all filesystems,
                        recorded during a 44 client test.
        lock.html       -pending completion-  Lockmeter results,
                        recorded during a 44 client test.
                        -update: Reiserfs lockmeter is completed.
                        Look at ./reiserfs/4p/lockmeter.txt for the
                        complete lockstat file.

        README          This file
        ./<fstype>      Test data for filesystem, <fstype> =
                        [ext2|ext3|xfs|reiserfs|jfs]
                        First subdirectory is the SMP config
                        (4P for these tests)
                        Next level directories are:

                        sar:       sysstat log for test
                        netbench:  netbench results in Excel format
                        proc:      some proc info before test
                        samba:     samba info


Notes:

In this test, JFS had the best peak throughput of the journal
filesystems, and ext2 had the best peak throughput overall.  Reiserfs
had the lowest peak throughput, and also spent the largest percentage
of time in stext_lock (as shown in kp.html).

Netbench is usually an "in memory" test, such that all files stay in
the buffer cache.  In fact, during the test, kupdate is stopped.  No
file data is ever written to disk, but with the introduction of
journal filesystems, journal data is written to disk.  This gives an
opportunity to compare how much data is written to disk, and how
often, for the 4 journal filesystems tested.  The sysstat information
shows blocks/sec for each device; in these tests, the device for the
Samba share is dev8-1.  JFS had a peak of ~10,000 blocks/sec, while
XFS was ~4100, EXT3 was ~1100, and Reiserfs was ~800.  It was
interesting to see Reiserfs write at ~800 blocks/sec, then nothing for
30 seconds, then again at ~800 blocks/sec.  No other journal
filesystem showed that pattern of journal activity.
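
For anyone reproducing the per-device numbers, a sketch of the kind of
sysstat invocation involved (interval and count are placeholders, and
exact flags may differ across sysstat versions):

    # report block-device activity every 5 seconds for 720 samples;
    # dev8-1 in the output is the device with major number 8, minor 1
    sar -d 5 720 > sar-disk.log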


Next Steps:

Finish lockmeter reports
Same tests on non-cached, single SCSI disk
Investigate performance options for each filesystem


* Journal FS Comparison on IOzone (was Netbench)
  2001-08-27 15:02 Journal Filesystem Comparison on Netbench Andrew Theurer
@ 2001-08-27 18:24 ` Randy.Dunlap
  2001-08-27 18:59   ` Brian
                     ` (2 more replies)
  2001-08-27 20:04 ` Journal Filesystem Comparison on Netbench Hans Reiser
  2001-08-28 10:05 ` Roberto Nibali
  2 siblings, 3 replies; 13+ messages in thread
From: Randy.Dunlap @ 2001-08-27 18:24 UTC (permalink / raw)
  To: Andrew Theurer; +Cc: linux-kernel, linux-fsdevel

Hi,

I am doing some similar FS comparisons, but using IOzone
(www.iozone.org) instead of Netbench.

Some preliminary (mostly raw) data are available at
http://www.osdlab.org/reports/journal_fs/ (updated today).

I am using Linux 2.4.7 on a 4-way VA Linux system.
It has 4 GB of RAM, but I have limited it to 256 MB in
accordance with IOzone run rules.
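
(If it helps anyone repeating this: one common way to cap RAM on a 2.4
kernel is the mem= boot parameter.  The LILO fragment below is only a
sketch of that approach, not necessarily how it was done here.)

    # /etc/lilo.conf fragment: limit the kernel to 256 MB of RAM
    append="mem=256M"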

However, I suspect that this causes IOzone to measure disk
subsystem or PCI bus performance more than it does FS performance.
Any comments on this?

Default configurations for all filesystems were used.

Future:
. measure operations/second
. kernel profiling
. measure CPU utilization for each FS
. make graphs more readable
. do some FS comparison graphs


Regards,
~Randy


Andrew Theurer wrote:
> 
> Hello all,
> 
> I recently starting doing some fs performance comparisons with Netbench
> and the journal filesystems available in 2.4:  Reiserfs, JFS, XFS, and
> Ext3.  I thought some of you may be interested in the results.  Below
> is the README from the http://lse.sourceforge.net.  There is a kernprof
> for each test, and I am working on the lockmeter stuff right now.  Let
> me know if you have any comments.
> 
> Andrew Theurer
> IBM LTC


* Re: Journal FS Comparison on IOzone (was Netbench)
  2001-08-27 18:24 ` Journal FS Comparison on IOzone (was Netbench) Randy.Dunlap
@ 2001-08-27 18:59   ` Brian
  2001-08-27 19:28   ` Andrew Theurer
  2001-08-30 15:08   ` YAFB: Yet Another Filesystem Bench Yves Rougy
  2 siblings, 0 replies; 13+ messages in thread
From: Brian @ 2001-08-27 18:59 UTC (permalink / raw)
  To: Randy.Dunlap; +Cc: linux-kernel

On Monday 27 August 2001 02:24 pm, Randy.Dunlap wrote:
> I am using a Linux 2.4.7 on a 4-way VA Linux system.
> It has 4 GB of RAM, but I have limited it to 256 MB in
> accordance with IOzone run rules.

I might have gone with a dual-proc, simply because they seem to be the 
server config of choice around here, but that may not hold true for your 
own needs.

> However, I suspect that this causes IOzone to measure disk
> subsystem or PCI bus performance more than it does FS performance.
> Any comments on this?

It gives you a mix of in-memory and on-disk operations.  The on-disk work 
is worth noting -- it tells you how well the FS handles/causes 
fragmentation.  FAT, WAFL, and Tux2, for instance, would probably do very 
poorly on random reads, since they tend to have a lot of fragmentation.  
WAFL and Tux2, on the other hand, should slaughter everyone on random 
writes.

	-- Brian


* Re: Journal FS Comparison on IOzone (was Netbench)
  2001-08-27 18:24 ` Journal FS Comparison on IOzone (was Netbench) Randy.Dunlap
  2001-08-27 18:59   ` Brian
@ 2001-08-27 19:28   ` Andrew Theurer
  2001-08-29 16:39     ` Randy.Dunlap
  2001-08-30 15:08   ` YAFB: Yet Another Filesystem Bench Yves Rougy
  2 siblings, 1 reply; 13+ messages in thread
From: Andrew Theurer @ 2001-08-27 19:28 UTC (permalink / raw)
  To: Randy.Dunlap; +Cc: linux-kernel, linux-fsdevel

On Monday 27 August 2001 01:24 pm, Randy.Dunlap wrote:
> Hi,
> 
> I am doing some similar FS comparisons, but using IOzone
> (www.iozone.org)
> instead of Netbench.
> 
> Some preliminary (mostly raw) data are available at:
> http://www.osdlab.org/reports/journal_fs/
> (updated today).
> 
> I am using a Linux 2.4.7 on a 4-way VA Linux system.
> It has 4 GB of RAM, but I have limited it to 256 MB in
> accordance with IOzone run rules.
> 
> However, I suspect that this causes IOzone to measure disk
> subsystem or PCI bus performance more than it does FS performance.
> Any comments on this?

Randy, 

You are definitely exceeding what the kernel will cache and writing to disk on
some tests.  I guess it depends on what is more important to you.  I think
both are valid things to test, and you may want to try not limiting memory to
get just in-memory FS performance for large files.  However, writing to disk
is important, especially for things like bounce buffers.  Did you have highmem
support in your kernel?  If so, did you have a bounce-buffer elimination
patch as well?

Does the storage system/controller have a disk cache?  What size?

Also, does IOzone default to num_procs = num_cpus?  I didn't see any
options in your cmdline for num_procs.
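
If it does not, a multi-process throughput run can be requested
explicitly.  The sizes below are placeholders, just a sketch of the
options involved:

    # IOzone throughput mode: 4 processes, 512 MB files, 64 KB records
    iozone -t 4 -s 512m -r 64k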

-Andrew





* Re: Journal Filesystem Comparison on Netbench
  2001-08-27 15:02 Journal Filesystem Comparison on Netbench Andrew Theurer
  2001-08-27 18:24 ` Journal FS Comparison on IOzone (was Netbench) Randy.Dunlap
@ 2001-08-27 20:04 ` Hans Reiser
  2001-08-27 20:29   ` Andrew Theurer
  2001-08-27 21:19   ` Randy.Dunlap
  2001-08-28 10:05 ` Roberto Nibali
  2 siblings, 2 replies; 13+ messages in thread
From: Hans Reiser @ 2001-08-27 20:04 UTC (permalink / raw)
  To: Andrew Theurer; +Cc: linux-kernel, reiserfs-dev

Please mount with -notails and repeat your results.  ReiserFS can either save
you on disk space, or save you on performance, but not both at the same time. 
That said, it does not surprise me that our locking is coarser than other
filesystems, and we will be fixing that in version 4.  Unfortunately we don't
have the hardware to replicate your results.
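
For reference, the tail-packing switch is normally passed as a reiserfs
mount option; the device and mount point below are placeholders:

    # remount without tail packing (trades some disk space for speed)
    mount -t reiserfs -o notail /dev/sdb1 /export/share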

Hans

Andrew Theurer wrote:
> 
> Hello all,
> 
> I recently starting doing some fs performance comparisons with Netbench
> and the journal filesystems available in 2.4:  Reiserfs, JFS, XFS, and
> Ext3.  I thought some of you may be interested in the results.  Below
> is the README from the http://lse.sourceforge.net.  There is a kernprof
> for each test, and I am working on the lockmeter stuff right now.  Let
> me
> know if you have any comments.
> 
> Andrew Theurer
> IBM LTC
> 
> README:
> 
> http://lse.sourceforge.net/benchmarks/netbench/results/august_2001/filesystems/raid1e/README
> 
> The following is a filesystem comparison on the NetBench workload.
> Filesystems tested include EXT2, EXT3, Reiserfs, XFS, and JFS.
> Server hardware is an 8 processor Intel Xeon, 700 MHz, 1MB L2
> cache, Profusion chipset, 4GB interleaved memory, and 4 Intel
> gigabit ethernet cards.  This test was conducted using a
> RAID disk system, consisting of an IBM ServeRAID 4H SCSI adapter,
> equipped with 32 MB cache, using one SCSI channel, attached to 10
> disks, each having a capacity of 9 GB and a speed 10,000 RPM.
> The RAID was configured for level 10, a 5 disk stripe with mirror.
> 
> The server was tested using linux 2.4.7, Samba 2.2.0, and NetBench
> 7.0.1.
> Since we only have enough clients to drive a 4-way SMP test (44), the
> kernel used 4 processors instead of eight.  The
> "Enterprise Disk Suite" test was used for NetBench.  Each filesystem
> was tested with the same test, starting with 4 clients and increasing
> clients by 4 up to 44 clients.
> 
> Some optimizations were used for linux, including zerocopy,
> IRQ affinity, and interrupt delay for the gigabit cards,
> and process affinity for the smbd processes.
> 
> Default configurations for all filesystems were used, except ext3
> used mode "data=writeback".  No special options were chosen
> for performance for these initial tests.  If you know of
> performance options that would benefit this test, please send
> me email, habanero@us.ibm.com
> 
> Peak Performance Results:
> 
> EXT2      773 Mbps @ 44 clients
> EXT3      660 Mbps @ 44 clients
> Reiserfs  532 Mbps @ 28 clients
> XFS       661 Mbps @ 44 clients
> JFS       683 Mbps @ 40 clients
> 
> Data Files:
> 
> This directory contains:
> 
>         kp.html         Kernprof top 25 list for all filesystems,
>                         recorded during a 44 client test.
>         lock.html       -pending completion-  Lockmeter results,
>                         recorded during a 44 client test.
>                         -update: Reiserfs lockmeter is completed.
>                         look at ./reiserfs/4p/lockmeter.txt for complete
> lockstat file.
> 
>         README          This file
>         ./<fstype>      Test data for filesystem, <fstype> =
> [ext2|ext3|xfs|reiserfs|jfs]
>                         First subdirectory is the SMP config (4P for
> these tests)
>                         Next level directories are:
> 
>                         sar:  sysstat log for test
>                         netbench: netbench results in Excel format
>                         proc:  some proc info before test
>                         samba:  samba info
> 
> Notes:
> 
> In this test, JFS had the best peak throughput for journal filesystems,
> and ext2 had the best peak throughput for all filesystems.  Reiserfs
> had the lowest peak throughput, and also had the most % time in
> stext_lock
> (as shown in kp.html).
> 
> Netbench is usually an "in memory" test, such that all files stay in
> buffer cache.  Actually, during the test, kupdate is stopped.  No file
> data
> is ever written to disk, but with the introduction on journal
> filesystems,
> journal data is written to disk.  This allows the opportunity to compare
> how much data and how often data is written to disk for the 4 journal
> filesystems tested.  the sysstat information shows blocks/sec for
> devices.
> In these tests, the device for the samba share is on dev8-1.
> JFS experienced a peak blocks/sec of ~10,000, while XFS was ~4100, EXT3
> was ~1100, and Reiserfs was ~800.  It was interesting to see Reiserfs
> write at 800 blocks/sec, then nothing for 30 seconds, then again at
> ~800 blocks/sec.  No other journal filesystem experienced that pattern
> of journal activity.
> 
> Next Steps:
> 
> Finish lockmeter reports
> Same tests on non-cached, single SCSI disk
> Investigate performance options for each filesystem
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/


* Re: Journal Filesystem Comparison on Netbench
  2001-08-27 20:04 ` Journal Filesystem Comparison on Netbench Hans Reiser
@ 2001-08-27 20:29   ` Andrew Theurer
  2001-08-27 21:19   ` Randy.Dunlap
  1 sibling, 0 replies; 13+ messages in thread
From: Andrew Theurer @ 2001-08-27 20:29 UTC (permalink / raw)
  To: Hans Reiser; +Cc: linux-kernel, reiserfs-dev

I will test -notail as soon as possible and let you know the results. 
Thanks,

Andrew Theurer

Hans Reiser wrote:
> 
> Please mount with -notails and repeat your results.  ReiserFS can either save
> you on disk space, or save you on performance, but not both at the same time.
> That said, it does not surprise me that our locking is coarser than other
> filesystems, and we will be fixing that in version 4.  Unfortunately we don't
> have the hardware to replicate your results.
> 
> Hans
> 
> Andrew Theurer wrote:
> >
> > Hello all,
> >
> > I recently starting doing some fs performance comparisons with Netbench
> > and the journal filesystems available in 2.4:  Reiserfs, JFS, XFS, and
> > Ext3.  I thought some of you may be interested in the results.  Below
> > is the README from the http://lse.sourceforge.net.  There is a kernprof
> > for each test, and I am working on the lockmeter stuff right now.  Let
> > me
> > know if you have any comments.
> >
> > Andrew Theurer
> > IBM LTC
> >
[snip]


* Re: Journal Filesystem Comparison on Netbench
  2001-08-27 20:04 ` Journal Filesystem Comparison on Netbench Hans Reiser
  2001-08-27 20:29   ` Andrew Theurer
@ 2001-08-27 21:19   ` Randy.Dunlap
  2001-08-27 21:41     ` [reiserfs-dev] " Hans Reiser
  1 sibling, 1 reply; 13+ messages in thread
From: Randy.Dunlap @ 2001-08-27 21:19 UTC (permalink / raw)
  To: Hans Reiser; +Cc: Andrew Theurer, linux-kernel, reiserfs-dev

Hans-

Have you considered using OSDL machines for your testing?
It probably wouldn't replicate Andrew's systems exactly,
but we do have [2,4,8]-way systems that could be made
available for your use.

~Randy

Hans Reiser wrote:
> 
> Please mount with -notails and repeat your results.  ReiserFS can either save
> you on disk space, or save you on performance, but not both at the same time.
> That said, it does not surprise me that our locking is coarser than other
> filesystems, and we will be fixing that in version 4.  Unfortunately we don't
> have the hardware to replicate your results.
> 
> Hans
> 
> Andrew Theurer wrote:
> >
> > Hello all,
> >
> > I recently starting doing some fs performance comparisons with Netbench
> > and the journal filesystems available in 2.4:  Reiserfs, JFS, XFS, and
> > Ext3.  I thought some of you may be interested in the results.  Below
> > is the README from the http://lse.sourceforge.net.  There is a kernprof
> > for each test, and I am working on the lockmeter stuff right now.  Let
> > me
> > know if you have any comments.
> >
> > Andrew Theurer
> > IBM LTC


* Re: [reiserfs-dev] Re: Journal Filesystem Comparison on Netbench
  2001-08-27 21:19   ` Randy.Dunlap
@ 2001-08-27 21:41     ` Hans Reiser
  0 siblings, 0 replies; 13+ messages in thread
From: Hans Reiser @ 2001-08-27 21:41 UTC (permalink / raw)
  To: Randy.Dunlap
  Cc: Andrew Theurer, linux-kernel, reiserfs-dev, Nikita Danilov,
	Alexander Zarochentcev

"Randy.Dunlap" wrote:
> 
> Hans-
> 
> Have you consider using OSDL machines for your testing?
> It probably wouldn't replicate Andrew's systems exactly,
> but we do have [2,4,8]-way systems that could be made
> available for your use.
> 
> ~Randy
> 
> Hans Reiser wrote:
> >
> > Please mount with -notails and repeat your results.  ReiserFS can either save
> > you on disk space, or save you on performance, but not both at the same time.
> > That said, it does not surprise me that our locking is coarser than other
> > filesystems, and we will be fixing that in version 4.  Unfortunately we don't
> > have the hardware to replicate your results.
> >
> > Hans
> >
> > Andrew Theurer wrote:
> > >
> > > Hello all,
> > >
> > > I recently starting doing some fs performance comparisons with Netbench
> > > and the journal filesystems available in 2.4:  Reiserfs, JFS, XFS, and
> > > Ext3.  I thought some of you may be interested in the results.  Below
> > > is the README from the http://lse.sourceforge.net.  There is a kernprof
> > > for each test, and I am working on the lockmeter stuff right now.  Let
> > > me
> > > know if you have any comments.
> > >
> > > Andrew Theurer
> > > IBM LTC


Ok, let's take you up on that....

We are having discussions right now about whether ReiserFS is too coarsely
grained (I argue yes, based on code inspection rather than measurement;
others measure no contention on a two-CPU machine and say no; none of us
have the hardware to really know....)

Your hardware might help us quite a bit.

Elena, Zam, and Nikita, please email Randy off-list and make arrangements.

Hans


* Re: Journal Filesystem Comparison on Netbench
  2001-08-27 15:02 Journal Filesystem Comparison on Netbench Andrew Theurer
  2001-08-27 18:24 ` Journal FS Comparison on IOzone (was Netbench) Randy.Dunlap
  2001-08-27 20:04 ` Journal Filesystem Comparison on Netbench Hans Reiser
@ 2001-08-28 10:05 ` Roberto Nibali
  2001-08-28 15:28   ` Andrew Theurer
  2 siblings, 1 reply; 13+ messages in thread
From: Roberto Nibali @ 2001-08-28 10:05 UTC (permalink / raw)
  To: Andrew Theurer; +Cc: linux-kernel

Hello,

Thank you for those interesting tests.

> Some optimizations were used for linux, including zerocopy,
> IRQ affinity, and interrupt delay for the gigabit cards,
> and process affinity for the smbd processes.

Why is ext3 the only tested journaling filesystem that showed
dropped packets [1] during the test and how do you explain it?

[1]: http://lse.sourceforge.net/benchmarks/netbench/results/\
     august_2001/filesystems/raid1e/ext3/4p/droppped_packets.txt

Regards,
Roberto Nibali, ratz

-- 
mailto: `echo NrOatSz@tPacA.cMh | sed 's/[NOSPAM]//g'`


* Re: Journal Filesystem Comparison on Netbench
  2001-08-28 10:05 ` Roberto Nibali
@ 2001-08-28 15:28   ` Andrew Theurer
  2001-08-28 18:38     ` Andrew Morton
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Theurer @ 2001-08-28 15:28 UTC (permalink / raw)
  To: Roberto Nibali; +Cc: linux-kernel

Roberto Nibali wrote:
> 
> Hello,
> 
> Thank you for those interesting tests.
> 
> > Some optimizations were used for linux, including zerocopy,
> > IRQ affinity, and interrupt delay for the gigabit cards,
> > and process affinity for the smbd processes.
> 
> Why is ext3 the only tested journaling filesystem that showed
> dropped packets [1] during the test and how do you explain it?
> 
> [1]: http://lse.sourceforge.net/benchmarks/netbench/results/\
>      august_2001/filesystems/raid1e/ext3/4p/droppped_packets.txt

Dropped packets are usually a side effect of the interrupt delay option
in the e1000 driver.  I chose a 256 usec delay (default is 64) for all
these tests, and there is usually a very small percentage of dropped
packets, which typically shows up as 0.00%, since I only show hundredths
of a percent in that output.  The other tests do have dropped packets,
and I should change that script to show more significant digits.  I'm
not sure why ext3 shows more than the others.  Does ext3 have any spin
locks with interrupts disabled?
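
For context, the delay is configured as a driver module parameter.  The
line below is only a sketch and assumes the Intel e1000 driver's
RxIntDelay option, with one value per installed card:

    # /etc/modules.conf fragment: delay receive interrupts on all 4 cards
    options e1000 RxIntDelay=256,256,256,256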

Andrew Theurer


* Re: Journal Filesystem Comparison on Netbench
  2001-08-28 15:28   ` Andrew Theurer
@ 2001-08-28 18:38     ` Andrew Morton
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2001-08-28 18:38 UTC (permalink / raw)
  To: Andrew Theurer; +Cc: Roberto Nibali, linux-kernel

Andrew Theurer wrote:
> 
> Does ext3 have any spin locks with interrupts disabled?
> 

No.  But raid1 does.

-


* Re: Journal FS Comparison on IOzone (was Netbench)
  2001-08-27 19:28   ` Andrew Theurer
@ 2001-08-29 16:39     ` Randy.Dunlap
  0 siblings, 0 replies; 13+ messages in thread
From: Randy.Dunlap @ 2001-08-29 16:39 UTC (permalink / raw)
  To: habanero; +Cc: linux-kernel, linux-fsdevel


Andrew Theurer wrote:
> 
> On Monday 27 August 2001 01:24 pm, Randy.Dunlap wrote:
> > Hi,
> >
> > I am doing some similar FS comparisons, but using IOzone
> > (www.iozone.org) instead of Netbench.
> >
> > Some preliminary (mostly raw) data are available at:
> > http://www.osdlab.org/reports/journal_fs/
> > (updated today).
> >
> > I am using a Linux 2.4.7 on a 4-way VA Linux system.
> > It has 4 GB of RAM, but I have limited it to 256 MB in
> > accordance with IOzone run rules.
> >
> > However, I suspect that this causes IOzone to measure disk
> > subsystem or PCI bus performance more than it does FS performance.
> > Any comments on this?
> 
> Randy,
> 
> You are definitly exceeding what the kernel will cache and writing to disk on
> some tests.  I guess it depends on what is more important to you.  I think
> both are valid things to test, and you may want to try not limiting memory to
> get just FS performace in memory for large files.  However, writing to disk
> is important, especially for things like bounce-buffer.  Did you have himem
> support in your kernel?  If so, did you have a bounce-buffer elimination
> patch as well?

Hi-

Sorry about the delay in responding.

I'm interested in filesystem performance.  I'm not trying to
document IDE vs. SCSI vs. FC performance/price tradeoffs, benefits,
etc.

> Does the storage system/controller have a disk cache?  What size?

Good questions, but I'm having trouble finding answers for them.
(hence the delay in responding)

The FC host controller is a QLogic 2200.  It is attached to an
IBM FAStT controller/drive array -- one controller with 10
attached drives.  I've been looking at the IBM FAStT OS console
interface, but I can't see much cache info there.
There is one item:  cache/processor sizes: 88/40 MB

> Also, does IOzone default to num procs=num cpus?  I didn't see any options in
> your cmdline for num_procs.

No, IOzone doesn't default to num_processes = num_cpus.
That's a command-line option that I didn't use, although I expect
to do some testing with that option also.

Thanks for your comments.

~Randy


* YAFB: Yet Another Filesystem Bench
  2001-08-27 18:24 ` Journal FS Comparison on IOzone (was Netbench) Randy.Dunlap
  2001-08-27 18:59   ` Brian
  2001-08-27 19:28   ` Andrew Theurer
@ 2001-08-30 15:08   ` Yves Rougy
  2 siblings, 0 replies; 13+ messages in thread
From: Yves Rougy @ 2001-08-30 15:08 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel

Hi,

I am also doing such comparisons, with IOzone and Bonnie++.
The current results are available at
http://www.pingouin.org/linux/fsbench/

More results are to come, especially to show the impact of the notail
option for Reiserfs with IOzone and Bonnie++.

Of course, comments are welcome...

	Regards,
		Yves

Randy.Dunlap(rddunlap@osdlab.org)@Mon, Aug 27, 2001 at 11:24:48AM -0700:
> Hi,
> 
> I am doing some similar FS comparisons, but using IOzone
> (www.iozone.org)
> instead of Netbench.
> 
> Some preliminary (mostly raw) data are available at:
> http://www.osdlab.org/reports/journal_fs/
> (updated today).
[...]
> Andrew Theurer wrote:
> > 
> > Hello all,
> > 
> > I recently starting doing some fs performance comparisons with Netbench
> > and the journal filesystems available in 2.4:  Reiserfs, JFS, XFS, and
> > Ext3.  I thought some of you may be interested in the results.  Below
> > is the README from the http://lse.sourceforge.net.  There is a kernprof
> > for each test, and I am working on the lockmeter stuff right now.  Let
> > me know if you have any comments.
> > 
> > Andrew Theurer
> > IBM LTC
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 

-- 
Yves ROUGY - Yves.Rougy@fr.alcove.com
Coordinateur du Laboratoire - Lab Manager
Ingénieur Logiciels Libres - Open Source Software Engineer

Alcôve "L'informatique est libre" http://www.alcove.com

