* XFS tune to adaptec ASR71605
@ 2014-05-06 10:25 Steve Brooks
From: Steve Brooks @ 2014-05-06 10:25 UTC (permalink / raw)
To: xfs
Hi All,
New to the list so first hello to you all from St Andrews, Scotland.
We have three new raid hosts each with an Adaptec ASR71605
controller. Given that upstream (Red Hat) is going to be using XFS as its
default file system, we are going to use XFS on the three raid hosts. After
much reading around, this is what I came up with.
All hosts have 16x4TB WD RE WD4000FYYZ drives and will run "RAID 6"
The underlying RAID details are
RAID level : 6 Reed-Solomon
Status of logical device : Optimal
Size : 53401590 MB
Stripe-unit size : 512 KB
Read-cache setting : Enabled
Read-cache status : On
Write-cache setting : Disabled
Write-cache status : Off
Partitioned : No
I built the filesystem with
mkfs.xfs -f -d su=512k,sw=14 /dev/sda
and mounted with fstab options
xfs defaults,inode64,nobarrier
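As a cross-check of those numbers (my arithmetic, not part of the original mail): with 16 drives in RAID-6, two drives' worth of capacity is parity, so sw=14, and su times sw should equal the array's full data stripe:

```shell
# Sanity-check the mkfs.xfs geometry against the controller settings above.
# 16 drives - 2 parity = 14 data drives.
su_kb=512                                # controller stripe-unit size
data_drives=14                           # sw in "mkfs.xfs -d su=512k,sw=14"
stripe_width_kb=$((su_kb * data_drives))
echo "full data stripe: ${stripe_width_kb} KB"
```

That works out to 7168 KB, i.e. a 7 MB full stripe.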
My question is: are the "mkfs.xfs" and the mount options I used sensible?
The RAID is to be used to store data from "numerical simulations" that
were run on a high performance cluster and is not mission critical in the
sense that it can be regenerated if lost. Of course that would take the
user/cluster some time.
Thanks for any advice.
Steve
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: XFS tune to adaptec ASR71605
From: Emmanuel Florac @ 2014-05-06 11:00 UTC (permalink / raw)
To: xfs, Steve Brooks
On Tue, 6 May 2014 11:25:24 +0100 (BST),
Steve Brooks <steveb@mcs.st-and.ac.uk> wrote:
>
> My question is are the "mkfs.xfs" and the mount options I used
> sensible? The RAID is to be used to store data from "numerical
> simulations" that were run on a high performance cluster and is not
> mission critical in the sense that it can be regenerated if lost. Of
> course that would take the user/cluster some time.
>
Looks fine to me. In my experience, using non-default mkfs.xfs settings
makes very little to no difference, anyway. In case you're interested,
here are some benchmarks I made on a similar setup with
bonnie++ (RAID controller 71604, 16x 3 TB HGST drives, RAID-6 + spare, i.e.
13 data drives).
1.96,1.96,storiq,1,1386247609,32160M,,,,886148,94,511728,76,,,1364991,99,699.9,36,50,,,,,32874,93,
+++++,+++,37064,95,32901,96,+++++,+++,33100,93,,22110us,348ms,,62214us,141ms,11973us,111us,141us,131us,10us,134us
1.96,1.96,storiq,1,1386247609,32160M,,,,989408,95,568600,82,,,1623220,95,869.3,17,50,,,,,34624,96,
+++++,+++,39390,94,34764,94,+++++,+++,37472,96,,22037us,213ms,,253ms,232ms,324us,109us,140us,319us,7us,108us
1.96,1.96,storiq,1,1386247609,32160M,,,,1023291,97,580413,81,,,1634417,98,725.4,35,50,,,,,34913,96,
+++++,+++,41099,99,34578,97,+++++,+++,33708,94,,186us,216ms,,54367us,63127us,1016us,104us,5508us,1029us,7us,138us
1.96,1.96,storiq,1,1386247609,32160M,,,,942138,97,578247,81,,,1643345,96,909.4,19,50,,,,,33042,96,
+++++,+++,36391,95,33893,95,+++++,+++,34003,93,,1813us,222ms,,83887us,69003us,1032us,108us,144us,133us,7us,133us
1.96,1.96,storiq,1,1386247609,32160M,,,,939598,97,580680,82,,,1611637,97,728.1,34,50,,,,,34819,96,
+++++,+++,39870,96,34614,97,+++++,+++,37632,96,,2053us,210ms,,71363us,84645us,990us,108us,139us,1030us,6us,1042us
1.96,1.96,storiq,1,1386247609,32160M,,,,972033,98,576180,81,,,1656062,98,722.0,35,50,,,,,34923,97,
+++++,+++,39788,96,33449,95,+++++,+++,32783,95,,3608us,279ms,,82386us,46777us,1026us,105us,137us,1059us,17us,142us
1.96,1.96,storiq,1,1386247609,32160M,,,,937764,98,578995,83,,,1496290,96,731.3,36,50,,,,,34852,97,
+++++,+++,39990,96,34625,97,+++++,+++,35387,96,,185us,206ms,,52160us,41925us,344us,106us,136us,993us,5us,128us
1.96,1.96,storiq,1,1386247609,32160M,,,,1037074,97,580587,81,,,1681379,98,719.0,36,50,,,,,34673,96,
+++++,+++,39961,96,34799,97,+++++,+++,37775,96,,175us,252ms,,29845us,78347us,1035us,113us,137us,1011us,7us,3389us
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
* Re: XFS tune to adaptec ASR71605
From: Steve Brooks @ 2014-05-06 13:14 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: xfs
>> My question is are the "mkfs.xfs" and the mount options I used
>> sensible? The RAID is to be used to store data from "numerical
>> simulations" that were run on a high performance cluster and is not
>> mission critical in the sense that it can be regenerated if lost. Of
>> course that would take the user/cluster some time.
>>
>
> Looks fine to me. In my experience, using non-default mkfs.xfs settings
> makes very little to no difference, anyway. In case you're interested,
> here are some benchmarks I made on a similar setup with
> bonnie++ (RAID controller 71604, 16x 3 TB HGST drives, RAID-6 + spare,
> i.e. 13 data drives).
>
> 1.96,1.96,storiq,1,1386247609,32160M,,,,886148,94,511728,76,,,1364991,99,699.9,36,50,,,,,32874,93,
> +++++,+++,37064,95,32901,96,+++++,+++,33100,93,,22110us,348ms,,62214us,141ms,11973us,111us,141us,131us,10us,134us
> 1.96,1.96,storiq,1,1386247609,32160M,,,,989408,95,568600,82,,,1623220,95,869.3,17,50,,,,,34624,96,
> +++++,+++,39390,94,34764,94,+++++,+++,37472,96,,22037us,213ms,,253ms,232ms,324us,109us,140us,319us,7us,108us
> 1.96,1.96,storiq,1,1386247609,32160M,,,,1023291,97,580413,81,,,1634417,98,725.4,35,50,,,,,34913,96,
> +++++,+++,41099,99,34578,97,+++++,+++,33708,94,,186us,216ms,,54367us,63127us,1016us,104us,5508us,1029us,7us,138us
> 1.96,1.96,storiq,1,1386247609,32160M,,,,942138,97,578247,81,,,1643345,96,909.4,19,50,,,,,33042,96,
> +++++,+++,36391,95,33893,95,+++++,+++,34003,93,,1813us,222ms,,83887us,69003us,1032us,108us,144us,133us,7us,133us
> 1.96,1.96,storiq,1,1386247609,32160M,,,,939598,97,580680,82,,,1611637,97,728.1,34,50,,,,,34819,96,
> +++++,+++,39870,96,34614,97,+++++,+++,37632,96,,2053us,210ms,,71363us,84645us,990us,108us,139us,1030us,6us,1042us
> 1.96,1.96,storiq,1,1386247609,32160M,,,,972033,98,576180,81,,,1656062,98,722.0,35,50,,,,,34923,97,
> +++++,+++,39788,96,33449,95,+++++,+++,32783,95,,3608us,279ms,,82386us,46777us,1026us,105us,137us,1059us,17us,142us
> 1.96,1.96,storiq,1,1386247609,32160M,,,,937764,98,578995,83,,,1496290,96,731.3,36,50,,,,,34852,97,
> +++++,+++,39990,96,34625,97,+++++,+++,35387,96,,185us,206ms,,52160us,41925us,344us,106us,136us,993us,5us,128us
> 1.96,1.96,storiq,1,1386247609,32160M,,,,1037074,97,580587,81,,,1681379,98,719.0,36,50,,,,,34673,96,
> +++++,+++,39961,96,34799,97,+++++,+++,37775,96,,175us,252ms,,29845us,78347us,1035us,113us,137us,1011us,7us,3389us
Thanks for the reply Emmanuel. I installed and ran bonnie++, although I
will need to research the results.
> bonnie++ -d ./ -s 8192 -r 4096 -u root
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
sraid2v 8G 1845 94 76018 4 89110 4 4275 154 3908715 99 4952 93
Latency 8537us 164us 170us 2770us 45us 5867us
Version 1.96 ------Sequential Create------ --------Random Create--------
sraid2v -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 13100 21 +++++ +++ 24231 37 13289 22 +++++ +++ 28532 44
Latency 23062us 60us 117ms 25013us 29us 61us
Steve
* Re: XFS tune to adaptec ASR71605
From: Emmanuel Florac @ 2014-05-06 13:51 UTC (permalink / raw)
To: Steve Brooks; +Cc: xfs
On Tue, 6 May 2014 14:14:35 +0100 (BST),
Steve Brooks <steveb@mcs.st-and.ac.uk> wrote:
> Thanks for the reply Emmanuel. I installed and ran bonnie++, although
> I will need to research the results.
Yup, they're... weird :) Write speed is abysmal, but random seeks are very
high. Please try my settings so that we can compare the numbers more
directly:
bonnie++ -f -d ./ -n 50
You can translate the CSV output from my test above with bon_csv2html.
Did you turn off write cache? Did you tune the IO scheduler, read
ahead, and queue length? Here are the settings I'm generally using
with this type of controller (for better sequential IO):
echo none > /sys/block/sda/queue/scheduler
echo 1024 > /sys/block/sda/queue/nr_requests
echo 65536 > /sys/block/sda/queue/read_ahead_kb
You can check the cache status in the output from
arcconf getconfig 1
If you didn't install arcconf, you definitely should :) If you turned
off the write cache, be aware that this is not the expected mode of
operation for modern RAID controllers... Go buy a ZMM if necessary.
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
* Re: XFS tune to adaptec ASR71605
From: Steve Brooks @ 2014-05-06 14:40 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: xfs
> Yup, they're... weird :) Write speed is abysmal, but random seeks very
> high. Please try my settings so that we can compare the numbers more
> directly:
>
> bonnie++ -f -d ./ -n 50
>
> You can translate the CSV output from my test above with bon_csv2html.
>
> Did you turn off write cache? Did you tune the IO scheduler, read
> ahead, and queue length? Here are the settings I'm generally using
> with this type of controller (for better sequential IO):
>
> echo none > /sys/block/sda/queue/scheduler
> echo 1024 > /sys/block/sda/queue/nr_requests
> echo 65536 > /sys/block/sda/queue/read_ahead_kb
>
> You can check the cache status in the output from
>
> arcconf getconfig 1
>
> If you didn't install arcconf, you definitely should :) If you turned
> off the write cache, be aware that this is not the expected mode of
> operation for modern RAID controllers... Go buy a ZMM if necessary.
Hi,
Yes, the write speed is very poor.. I am running "bonnie++" again with your
params but am not sure how long it will take.
> bonnie++ -f -d ./ -n 50
Will this not be different for our two machines, as it seems to generate
other params depending on the size of the RAM? All ours have 64G RAM.
I disabled write cache on the controller as there is no ZMM flash backup
module and that seems to be the advised configuration. I could enable it
and try another test to see if that is contributing to the poor write
performance. I do have "arcconf" on all our adaptec raid machines.
I have not tuned the IO scheduler, read ahead, or queue length. I guess
I can try this.
> echo none > /sys/block/sda/queue/scheduler
> echo 1024 > /sys/block/sda/queue/nr_requests
> echo 65536 > /sys/block/sda/queue/read_ahead_kb
Will this need to be done after every reboot? I guess it could go in
"/etc/rc.local" ?
Thanks, Steve
* Re: XFS tune to adaptec ASR71605
From: Emmanuel Florac @ 2014-05-06 14:59 UTC (permalink / raw)
To: Steve Brooks; +Cc: xfs
On Tue, 6 May 2014 15:40:37 +0100 (BST),
Steve Brooks <steveb@mcs.st-and.ac.uk> wrote:
> Yes, the write speed is very poor.. I am running "bonnie++" again with
> your params but am not sure how long it will take.
>
> > bonnie++ -f -d ./ -n 50
>
> Will this not be different for our two machines, as it seems to generate
> other params depending on the size of the RAM? All ours have 64G RAM.
>
You MUST test with a dataset bigger than RAM, else you're mostly testing
your RAM speed :) If you've got 64 GB, by default bonnie will test with
128 GB of data. The small size probably explains the very fast seek
speed... You're seeking in the RAM cache :)
>
> I disabled write cache on the controller as there is no ZMM flash
> backup module and it seems to be advised that way. I could enable it
> and try another test to see if that is a contribution to the poor
> write performance. I do have "arcconf" on all our adaptec raid
> machines.
Modern RAID controllers need write cache or they perform abysmally. Do
yourself a favour and buy a ZMM. Without write cache it'll be so slow as to
be nearly unusable, really. Did you see the numbers? Your RAID is more
than 12x slower than mine... actually slower than a single disk! You'll
struggle to ever fill it up at these speeds.
> I have not tuned the IO scheduler, read ahead, and queue length? I
> guess I can try this.
>
> > echo none > /sys/block/sda/queue/scheduler
> > echo 1024 > /sys/block/sda/queue/nr_requests
> > echo 65536 > /sys/block/sda/queue/read_ahead_kb
>
> Will this need to be done after every reboot? I guess it could go in
> "/etc/rc.local" ?
>
Yep. You can tweak the settings and try various configurations.
However these work fine for me in most cases (particularly the noop
scheduler). Of course replace sda with the RAID array device or you may
end up tuning your boot drive instead :)
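For what it's worth, a minimal /etc/rc.local fragment would look like this ("sdb" here is a placeholder of mine, not from this thread; substitute your array's actual device node):

```shell
# Illustrative /etc/rc.local fragment -- "sdb" is a placeholder for the
# RAID array device, NOT necessarily your boot drive. On older kernels
# the legacy single-queue scheduler is named "noop" rather than "none".
echo none  > /sys/block/sdb/queue/scheduler
echo 1024  > /sys/block/sdb/queue/nr_requests
echo 65536 > /sys/block/sdb/queue/read_ahead_kb
```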
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
* Re: XFS tune to adaptec ASR71605
From: Steve Brooks @ 2014-05-06 15:22 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: xfs
On Tue, 6 May 2014, Emmanuel Florac wrote:
Hi,
> You MUST test with a dataset bigger than RAM, else you're mostly testing
> your RAM speed :) If you've got 64 GB, by default bonnie will test with
> 128 GB of data. The small size probably explains the very fast seek
> speed... You're seeking in the RAM cache :)
Yes, that makes sense; reading the man page, it should automatically pick up
the amount of RAM and adjust appropriately. Still running at the moment.
I did pipe your results into "bon_csv2html" and used firefox to inspect
the results, a neat tool :-)..
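For a quick look without the HTML step, the headline figures can also be pulled straight out of a bonnie++ 1.96 CSV line with awk (field positions inferred by me from the output in this thread; double-check against your bonnie++ version):

```shell
# Extract block write, rewrite, block read (K/s) and seeks/s from a
# bonnie++ 1.96 CSV line; fields 10/12/16/18 as seen in the runs above.
line='1.96,1.96,sraid2v,1,1399384651,126G,,,,112961,7,56056,4,,,1843032,80,491.8,33'
echo "$line" | awk -F, \
    '{printf "write %s K/s, rewrite %s K/s, read %s K/s, seeks %s/s\n", $10, $12, $16, $18}'
```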
> Modern RAID controllers need write cache or they perform abysmally. Do
> yourself a favour and buy a ZMM. Without write cache it'll be so slow as
> to be nearly unusable, really. Did you see the numbers? Your RAID is more
> than 12x slower than mine... actually slower than a single disk! You'll
> struggle to ever fill it up at these speeds.
Ok, so maybe the abysmal write speeds are a symptom of the disabled cache;
I hope so. Once the current "bonnie++ -f -d ./ -n 50" finishes I will
enable the write cache on the controller and repeat the benchmark,
fingers crossed.
> Yep. You can tweak the settings and try various configurations.
> However these work fine for me in most cases (particularly the noop
> scheduler). Of course replace sda with the RAID array device or you may
> end up tuning your boot drive instead :)
Yes, I noticed that too :-) .. the arrays here also appear as "/dev/sda",
so it would have been the right device anyway..
Just checked, and the bonnie++ benchmark has finished; the results are
below.. So without cache yours is eight times faster at writes :-/ ..
My reads seem ok though :-) .. Ok, will try with write cache on..
-sh-4.1$ bonnie++ -f -d ./ -n 50
Writing intelligently...done
Rewriting...
Message from syslogd@sraid2v at May 6 15:50:30 ...
kernel:do_IRQ: 0.135 No irq handler for vector (irq -1)
done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
sraid2v 126G 112961 7 56056 4 1843032 80 491.8 33
Latency 460ms 566ms 50148us 42171us
Version 1.96 ------Sequential Create------ --------Random Create--------
sraid2v -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
50 14833 25 +++++ +++ 30047 47 27391 49 +++++ +++ 44988 76
Latency 11256us 70us 519ms 21504us 56us 72us
1.96,1.96,sraid2v,1,1399384651,126G,,,,112961,7,56056,4,,,1843032,80,491.8,33,50,,,,,14833,25,+++++,+++,30047,47,27391,49,+++++,+++,44988,76,,460ms,566ms,,50148us,42171us,11256us,70us,519ms,21504us,56us,72us
Many Thanks!
Steve
* Re: XFS tune to adaptec ASR71605 [SOLVED]
From: Steve Brooks @ 2014-05-06 16:03 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: xfs
> On Tue, 6 May 2014 15:40:37 +0100 (BST),
> Steve Brooks <steveb@mcs.st-and.ac.uk> wrote:
>
>> Yes, the write speed is very poor.. I am running "bonnie++" again with
>> your params but am not sure how long it will take.
>>
>>> bonnie++ -f -d ./ -n 50
>>
>> Will this not be different for our two machines, as it seems to generate
>> other params depending on the size of the RAM? All ours have 64G RAM.
>>
>
> You MUST test with a dataset bigger than RAM, else you're mostly testing
> your RAM speed :) If you've got 64 GB, by default bonnie will test with
> 128 GB of data. The small size probably explains the very fast seek
> speed... You're seeking in the RAM cache :)
>
>>
>> I disabled write cache on the controller as there is no ZMM flash
>> backup module and it seems to be advised that way. I could enable it
>> and try another test to see if that is a contribution to the poor
>> write performance. I do have "arcconf" on all our adaptec raid
>> machines.
>
> Modern RAID controllers need write cache or they perform abysmally. Do
> yourself a favour and buy a ZMM. Without write cache it'll be so slow as
> to be nearly unusable, really. Did you see the numbers? Your RAID is more
> than 12x slower than mine... actually slower than a single disk! You'll
> struggle to ever fill it up at these speeds.
>
>> I have not tuned the IO scheduler, read ahead, and queue length? I
>> guess I can try this.
>>
>>> echo none > /sys/block/sda/queue/scheduler
>>> echo 1024 > /sys/block/sda/queue/nr_requests
>>> echo 65536 > /sys/block/sda/queue/read_ahead_kb
>>
>> Will this need to be done after every reboot? I guess it could go in
>> "/etc/rc.local" ?
>>
>
> Yep. You can tweak the settings and try various configurations.
> However these work fine for me in most cases (particularly the noop
> scheduler). Of course replace sda with the RAID array device or you may
> end up tuning your boot drive instead :)
Thanks loads! As you suspected, it seems to have all been down to having
the write cache disabled. The new results from "bonnie++" show that
sequential writes with the write cache enabled have massively improved, to
1642324 K/s ...
[root@sraid2v tmp]# bonnie++ -f -d ./ -n 50 -u root
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
sraid2v 126G 1642324 96 704219 51 2005711 81 529.8 36
Latency 29978us 131ms 93836us 47600us
Version 1.96 ------Sequential Create------ --------Random Create--------
sraid2v -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
50 45494 75 +++++ +++ 58252 84 51635 80 +++++ +++ 53103 81
Latency 15964us 191us 98us 80us 14us 60us
1.96,1.96,sraid2v,1,1399398521,126G,,,,1642324,96,704219,51,,,2005711,81,529.8,36,50,,,,,45494,75,+++++,+++,58252,84,51635,80,+++++,+++,53103,81,,29978us,131ms,,93836us,47600us,15964us,191us,98us,80us,14us,60us
Thanks again Emmanuel for your help!
Steve
* Re: XFS tune to adaptec ASR71605
From: Dave Chinner @ 2014-05-06 20:09 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: Steve Brooks, xfs
On Tue, May 06, 2014 at 03:51:49PM +0200, Emmanuel Florac wrote:
> On Tue, 6 May 2014 14:14:35 +0100 (BST),
> Steve Brooks <steveb@mcs.st-and.ac.uk> wrote:
>
> > Thanks for the reply Emmanuel. I installed and ran bonnie++, although
> > I will need to research the results.
>
> Yup, they're... weird :) Write speed is abysmal, but random seeks very
> high. Please try my settings so that we can compare the numbers more
> directly:
Friends don't let friends use bonnie++ for benchmarking storage.
The numbers you get will be irrelevant to your application, and it's
so synthetic it doesn't reflect any real-world workload at all.
The only useful benchmark for determining if changes are going to
improve application performance is to measure your application's
performance.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS tune to adaptec ASR71605
From: Stan Hoeppner @ 2014-05-06 23:20 UTC (permalink / raw)
To: Dave Chinner, Emmanuel Florac; +Cc: Steve Brooks, xfs
On 5/6/2014 3:09 PM, Dave Chinner wrote:
> On Tue, May 06, 2014 at 03:51:49PM +0200, Emmanuel Florac wrote:
>> On Tue, 6 May 2014 14:14:35 +0100 (BST),
>> Steve Brooks <steveb@mcs.st-and.ac.uk> wrote:
>>
>>> Thanks for the reply Emmanuel. I installed and ran bonnie++, although
>>> I will need to research the results.
>>
>> Yup, they're... weird :) Write speed is abysmal, but random seeks very
>> high. Please try my settings so that we can compare the numbers more
>> directly:
>
> Friends don't let friends use bonnie++ for benchmarking storage.
> The numbers you get will be irrelevant to your application, and it's
> so synthetic it doesn't reflect any real-world workload at all.
>
> The only useful benchmark for determining if changes are going to
> improve application performance is to measure your application's
> performance.
Exactly. The OP's post begins with:
"After much reading around this is what I came up with... All hosts
have 16x4TB WD RE WD4000FYYZ drives and will run RAID 6...
Stripe-unit size : 512 KB"
That's a 7 MB stripe width. Such a setup is suitable for large
streaming workloads which generate no RMW, and that's about it. For
anything involving random writes the performance will be very low, even
with write cache enabled, because each writeback operation will involve
reading and writing 1.5 MB minimum. Depending on the ARC firmware, if
it does scrubbing, it may read/write 7 MB for each RMW operation.
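To put rough numbers on that (my back-of-envelope sketch, assuming a small random write touches a single 512 KB chunk plus the Reed-Solomon P and Q chunks):

```shell
# RAID-6 read-modify-write cost for a sub-stripe random write
# with this array's geometry (512 KB stripe unit, 14 data drives).
su_kb=512
full_stripe_kb=$((su_kb * 14))   # 7168 KB -- the 7 MB stripe width
rmw_kb=$((su_kb * 3))            # read (then write back) data chunk + P + Q
echo "full stripe ${full_stripe_kb} KB; minimum RMW ${rmw_kb} KB each way"
```

That 1536 KB each way is the "reading and writing 1.5 MB minimum" figure.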
"Everything begins and ends with the workload".
Describe the workload on each machine, if not all the same, and we'll be
in a far better position to advise what RAID level and stripe unit size
you should use, and how best to configure XFS.
All the best,
Stan