* 2.6.38.4: xfs speed problem?
From: Justin Piszcz @ 2011-05-07 16:09 UTC
To: xfs; +Cc: linux-kernel
Hello,
Using 2.6.38.4 on two hosts:
Host 1:
$ /usr/bin/time find geocities.data 1> /dev/null
80.92user 417.93system 2:19:07elapsed 5%CPU (0avgtext+0avgdata 105520maxresident)k
0inputs+0outputs (0major+73373minor)pagefaults 0swaps
# xfs_db -c frag -f /dev/sda1
actual 40203982, ideal 40088075, fragmentation factor 0.29%
meta-data=/dev/sda1              isize=256    agcount=44, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=11718704640, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
--
Host 2:
$ /usr/bin/time find geocities.data 1>/dev/null
54.60user 337.20system 48:42.71elapsed 13%CPU (0avgtext+0avgdata 105632maxresident)k
0inputs+0outputs (1major+72981minor)pagefaults 0swaps
# xfs_db -c frag -f /dev/sdb1
actual 37998306, ideal 37939331, fragmentation factor 0.16%
meta-data=/dev/sdb1              isize=256    agcount=10, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2441379328, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
--
Host 1: RAID-6 (7200 RPM Drives, 18+1 hot spare)
Host 2: RAID-6 (7200 RPM Drives, 12)
Each system uses a 3ware 9750-24i4e controller, same settings.
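For reference, the controller configuration can be dumped and diffed with
3ware's CLI; a minimal sketch, assuming the card enumerates as controller 0:
$ tw_cli /c0 show        # unit, port and cache settings for the 9750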
Any thoughts why one is > 2x faster than the other?
Justin.
* Re: 2.6.38.4: xfs speed problem?
From: Dave Chinner @ 2011-05-08 0:33 UTC
To: Justin Piszcz; +Cc: xfs, linux-kernel
On Sat, May 07, 2011 at 12:09:46PM -0400, Justin Piszcz wrote:
> Hello,
>
> Using 2.6.38.4 on two hosts:
>
> Host 1:
> $ /usr/bin/time find geocities.data 1> /dev/null
> 80.92user 417.93system 2:19:07elapsed 5%CPU (0avgtext+0avgdata 105520maxresident)k
> 0inputs+0outputs (0major+73373minor)pagefaults 0swaps
>
> # xfs_db -c frag -f /dev/sda1
> actual 40203982, ideal 40088075, fragmentation factor 0.29%
>
> meta-data=/dev/sda1              isize=256    agcount=44, agsize=268435455 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=11718704640, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> --
>
> Host 2:
> $ /usr/bin/time find geocities.data 1>/dev/null
> 54.60user 337.20system 48:42.71elapsed 13%CPU (0avgtext+0avgdata 105632maxresident)k
> 0inputs+0outputs (1major+72981minor)pagefaults 0swaps
>
> # xfs_db -c frag -f /dev/sdb1
> actual 37998306, ideal 37939331, fragmentation factor 0.16%
>
> meta-data=/dev/sdb1              isize=256    agcount=10, agsize=268435455 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=2441379328, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
> --
>
> Host 1: RAID-6 (7200 RPM Drives, 18+1 hot spare)
Those will be 3TB drives,
> Host 2: RAID-6 (7200 RPM Drives, 12)
and those are 1TB drives.
Different hardware is guaranteed to give you different performance,
especially from a seek capability perspective.
> Each system uses a 3ware 9750-24i4e controller, same settings.
>
> Any thoughts why one is > 2x faster than the other?
Different filesystem sizes mean different directory, inode and data
layouts, especially if you are using inode64.
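Whether inode64 is actually in effect is visible in the mount options;
a minimal sketch, assuming the filesystems are mounted (the mount point
below is made up):
$ grep -w xfs /proc/mounts       # inode64 shows up in the option list if set
$ xfs_info /mnt/geocities        # prints the geometry report shown above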
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: 2.6.38.4: xfs speed problem?
From: Stan Hoeppner @ 2011-05-08 17:18 UTC
To: Dave Chinner; +Cc: Justin Piszcz, linux-kernel, xfs
On 5/7/2011 7:33 PM, Dave Chinner wrote:
> On Sat, May 07, 2011 at 12:09:46PM -0400, Justin Piszcz wrote:
>> Hello,
>>
>> Using 2.6.38.4 on two hosts:
>>
>> Host 1:
>> $ /usr/bin/time find geocities.data 1> /dev/null
>> 80.92user 417.93system 2:19:07elapsed 5%CPU (0avgtext+0avgdata 105520maxresident)k
>> 0inputs+0outputs (0major+73373minor)pagefaults 0swaps
>>
>> # xfs_db -c frag -f /dev/sda1
>> actual 40203982, ideal 40088075, fragmentation factor 0.29%
>>
>> meta-data=/dev/sda1              isize=256    agcount=44, agsize=268435455 blks
>>          =                       sectsz=512   attr=2
>> data     =                       bsize=4096   blocks=11718704640, imaxpct=5
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=521728, version=2
>>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>
>> --
>>
>> Host 2:
>> $ /usr/bin/time find geocities.data 1>/dev/null
>> 54.60user 337.20system 48:42.71elapsed 13%CPU (0avgtext+0avgdata 105632maxresident)k
>> 0inputs+0outputs (1major+72981minor)pagefaults 0swaps
>>
>> # xfs_db -c frag -f /dev/sdb1
>> actual 37998306, ideal 37939331, fragmentation factor 0.16%
>>
>> meta-data=/dev/sdb1              isize=256    agcount=10, agsize=268435455 blks
>>          =                       sectsz=512   attr=2
>> data     =                       bsize=4096   blocks=2441379328, imaxpct=5
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=521728, version=2
>>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
How much would it help, if any, with this specific 'test', or with
overall XFS performance, if Justin were to...
>> Host 1: RAID-6 (7200 RPM Drives, 18+1 hot spare)
remake the fs on the above device with 'sw=16' or remount with
appropriate sunit and swidth values?
>> Host 2: RAID-6 (7200 RPM Drives, 12)
remake the fs on the above device with 'sw=10' or remount with
appropriate sunit and swidth values?
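A minimal sketch of both variants, assuming a 64KiB hardware chunk size
and a made-up mount point (substitute the controller's real values):
# at mkfs time: su = chunk size, sw = data disks (16 for the 18-drive RAID-6)
mkfs.xfs -f -d su=64k,sw=16 /dev/sda1
# at mount time: sunit/swidth are in 512-byte units,
# so 64KiB -> sunit=128 and swidth = 128 * 16 = 2048
mount -o sunit=128,swidth=2048 /dev/sda1 /data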
--
Stan
* Re: 2.6.38.4: xfs speed problem?
From: Michael Monnerie @ 2011-05-09 7:53 UTC
To: xfs
[removed some recipients]
On Sunday, 8 May 2011, Stan Hoeppner wrote:
> remake the fs on the above device with 'sw=16' or remount with
> appropriate sunit and swidth values?
A remount wouldn't change the existing metadata layout. Would it be
sufficient to remount with sw=16, create a new top-level directory,
recreate all the existing directories under it, hard-link each file into
place, and then remove the old directory structure?
Or would the files also have to be copied to benefit from the new
stripe width?
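A minimal sketch of that hard-link variant, assuming GNU cp and made-up paths:
$ cp -al /data/old /data/new   # directories are created fresh, files are hard-linked
$ rm -rf /data/old             # only after verifying the new tree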
--
with kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531
// ****** Radio interview on the topic of spam ******
// http://www.it-podcast.at/archiv.html#podcast-100716
//
// House for sale: http://zmi.at/langegg/