* extremely slow write performance
@ 2011-01-13 21:22 Cory Coager
2011-01-13 22:35 ` Emmanuel Florac
0 siblings, 1 reply; 15+ messages in thread
From: Cory Coager @ 2011-01-13 21:22 UTC (permalink / raw)
To: xfs
Hardware is 2x 2.6GHz CPUs, 6GB RAM, and 2 SAS arrays consisting of 24 drives
in hardware RAID 6, 5.87TB total. The two arrays were added to a volume
group and multiple logical volumes were created. I am getting over
180MB/s read speeds and 40MB/s write speeds on all the LVs except one.
One of the LVs is getting ~1MB/s write speeds, although read speeds
are fine. There are NO snapshots.
meta-data=/dev/mapper/vg0-shared isize=1024   agcount=32, agsize=17616096 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=563715072, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
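Note the sunit=0/swidth=0 above: the filesystem was made with no stripe alignment even though it sits on hardware RAID 6. A hypothetical aligned mkfs would look like the following; the 64 KiB per-disk stripe size and the 22 data disks (24 drives minus 2 parity in RAID 6) are assumptions that must be checked against the controller configuration:

```shell
# Sketch only: su is the controller's per-disk stripe size, sw the
# number of data-bearing disks. Both values here are assumed.
mkfs.xfs -f -d su=64k,sw=22 /dev/vg0/shared
```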
# dd if=/dev/zero of=zero bs=1k count=1048576
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 2089.95 seconds, 514 kB/s
# dd if=zero of=/dev/null bs=1k
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 4.70878 seconds, 228 MB/s
Any idea what is wrong with this and how to fix it?
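For anyone reproducing these numbers, a dd variant that forces data to disk separates device speed from page-cache effects (the file path is illustrative):

```shell
# Write: conv=fdatasync makes dd flush to disk before reporting, so the
# rate reflects the device rather than the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=8 conv=fdatasync

# Read: on a real test, drop caches first (needs root):
#   echo 3 > /proc/sys/vm/drop_caches
dd if=/tmp/ddtest of=/dev/null bs=1M
rm -f /tmp/ddtest
```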
------------------------------------------------------------------------
The information contained in this communication is intended
only for the use of the recipient(s) named above. It may
contain information that is privileged or confidential, and
may be protected by State and/or Federal Regulations. If
the reader of this message is not the intended recipient,
you are hereby notified that any dissemination,
distribution, or copying of this communication, or any of
its contents, is strictly prohibited. If you have received
this communication in error, please return it to the sender
immediately and delete the original message and any copy
of it from your computer system. If you have any questions
concerning this message, please contact the sender.
------------------------------------------------------------------------
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: extremely slow write performance
2011-01-13 21:22 extremely slow write performance Cory Coager
@ 2011-01-13 22:35 ` Emmanuel Florac
2011-01-14 0:17 ` Cory Coager
0 siblings, 1 reply; 15+ messages in thread
From: Emmanuel Florac @ 2011-01-13 22:35 UTC (permalink / raw)
To: Cory Coager; +Cc: xfs
On Thu, 13 Jan 2011 16:22:36 -0500, you wrote:
> I am getting over
> 180MB/s read speeds and 40MB/s write speeds on all the LV's except
> one.
This is pretty bad. My last 14-drive SAS RAID-6 array achieved 500
MB/s sustained writes and 1 GB/s sustained reads. What is the SAS
controller? The RAID controller? Please provide more details about the
enclosure/drives/cabling too.
What is the kernel/distro? Did you stripe your LVs across multiple
physical volumes, except the last one? What is the output of pvdisplay,
vgdisplay, and lvdisplay?
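For reference, striping an LV across two PVs at creation time looks roughly like this; the VG/LV names, size, and 64 KiB stripe size are placeholders, not values from this thread:

```shell
# Sketch: -i 2 stripes across two physical volumes, -I 64 sets a
# 64 KiB stripe size. Names and sizes are placeholders.
lvcreate -i 2 -I 64 -L 100G -n striped_lv vg0
```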
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
* RE: extremely slow write performance
2011-01-13 22:35 ` Emmanuel Florac
@ 2011-01-14 0:17 ` Cory Coager
2011-01-14 19:51 ` Stan Hoeppner
0 siblings, 1 reply; 15+ messages in thread
From: Cory Coager @ 2011-01-14 0:17 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: xfs
> What is the SAS controller? The RAID controller?
HP Smart Array P600
> Please provide more details about the enclosure/drives/cabling too.
Both enclosures are HP MSA70, 25 drives each, with SAS 10k rpm drives; not sure about the cabling (the server is at another location)
> What is the kernel/distro?
SLES 10 SP2, 2.6.16.60-0.21-bigsmp, x86
> Did you stripe your LVs across multiple physical volumes, except the last one?
None of the LVs are striped
> What's the output for pvdisplay, vgdisplay, lvdisplay?
# pvdisplay
--- Physical volume ---
PV Name /dev/cciss/c1d0p1
VG Name vg0
PV Size 2.94 TB / not usable 1.51 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 769902
Free PE 150192
Allocated PE 619710
PV UUID k52E36-n3Xw-10rw-hSXo-3col-TEvC-75iHZ2
--- Physical volume ---
PV Name /dev/cciss/c1d1p1
VG Name vg0
PV Size 2.94 TB / not usable 1.51 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 769902
Free PE 562351
Allocated PE 207551
PV UUID IAZAbk-eMNZ-puMJ-DqW1-zsC7-C0ux-66vYTN
# vgdisplay
--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 217
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 6
Max PV 0
Cur PV 2
Act PV 2
VG Size 5.87 TB
PE Size 4.00 MB
Total PE 1539804
Alloc PE / Size 827261 / 3.16 TB
Free PE / Size 712543 / 2.72 TB
VG UUID p1SANa-30vR-BymQ-cOls-XeVU-92nj-YP4wSU
# lvdisplay
--- Logical volume ---
LV Name /dev/vg0/apps
VG Name vg0
LV UUID LLMOor-D0mP-k2kM-qvM3-Vjse-6fE0-7DdRjJ
LV Write Access read/write
LV Status available
# open 2
LV Size 348.00 MB
Current LE 87
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Logical volume ---
LV Name /dev/vg0/netlogon
VG Name vg0
LV UUID FxR324-yAH9-F6y3-Dz30-stqY-bYwc-4t6AZ7
LV Write Access read/write
LV Status available
# open 2
LV Size 52.00 MB
Current LE 13
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
--- Logical volume ---
LV Name /dev/vg0/print
VG Name vg0
LV UUID EAISuT-a8zd-dJQa-0c6t-Zkus-I3ws-0IU9q3
LV Write Access read/write
LV Status available
# open 2
LV Size 712.00 MB
Current LE 178
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:2
--- Logical volume ---
LV Name /dev/vg0/sqlbackup
VG Name vg0
LV UUID 22NfsZ-iqdm-jWUz-9MOW-fAh1-qfB7-nWkSSG
LV Write Access read/write
LV Status available
# open 1
LV Size 270.00 GB
Current LE 69120
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:3
--- Logical volume ---
LV Name /dev/vg0/homes
VG Name vg0
LV UUID HMGhLH-0iA3-fp7n-A5Za-svJF-Tzxb-D291WR
LV Write Access read/write
LV Status available
# open 2
LV Size 810.00 GB
Current LE 207360
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:5
--- Logical volume ---
LV Name /dev/vg0/shared
VG Name vg0
LV UUID Czj3gb-Adl9-FqRW-wPqH-JnW2-rDGT-toR8zL
LV Write Access read/write
LV Status available
# open 2
LV Size 2.10 TB
Current LE 550503
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:8
Keep in mind that I only have poor write performance on /dev/vg0/shared; the others are fine.
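One way to see whether /dev/vg0/shared landed on different physical extents than the other LVs is to dump the segment-to-PV mapping; the report columns below are LVM2's, and exact output varies by version:

```shell
# For every LV in vg0, show which PV(s) and extents back each segment,
# plus the segment type (linear vs striped):
lvs -o lv_name,segtype,seg_start_pe,seg_size,devices vg0

# And how full each PV is:
pvs -o pv_name,pv_size,pv_used,pv_free
```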
* Re: extremely slow write performance
2011-01-14 0:17 ` Cory Coager
@ 2011-01-14 19:51 ` Stan Hoeppner
2011-01-14 20:48 ` Cory Coager
0 siblings, 1 reply; 15+ messages in thread
From: Stan Hoeppner @ 2011-01-14 19:51 UTC (permalink / raw)
To: xfs
Cory Coager put forth on 1/13/2011 6:17 PM:
> --- Physical volume ---
> PV Name /dev/cciss/c1d1p1
> VG Name vg0
> PV Size 2.94 TB / not usable 1.51 MB
> Allocatable yes
> PE Size (KByte) 4096
> Total PE 769902
> Free PE 562351
> Allocated PE 207551
> PV UUID IAZAbk-eMNZ-puMJ-DqW1-zsC7-C0ux-66vYTN
Make sure the write cache on the P600 (what size is it BTW?) is enabled and that
the BBU is in working order. Also make sure the P600 is disabling the write
caches on the drives themselves. Then...
Mount with 'nobarrier' so XFS isn't interfering with the hardware cache
performance of the P600. With barriers enabled (the default) XFS will
periodically flush the cache on the RAID card causing write performance problems.
--
Stan
* Re: extremely slow write performance
2011-01-14 19:51 ` Stan Hoeppner
@ 2011-01-14 20:48 ` Cory Coager
2011-01-14 22:02 ` Stan Hoeppner
0 siblings, 1 reply; 15+ messages in thread
From: Cory Coager @ 2011-01-14 20:48 UTC (permalink / raw)
To: xfs; +Cc: Stan Hoeppner
On 01/14/2011 02:51 PM, Stan Hoeppner wrote:
> Make sure the write cache on the P600 (what size is it BTW?) is enabled and that
> the BBU is in working order. Also make sure the P600 is disabling the write
> caches on the drives themselves. Then...
>
Write cache is enabled on the controller, the size is 512MB, and the BBU is
in good condition (checked with the HP utility). How do I check the
write cache on the drives?
> Mount with 'nobarrier' so XFS isn't interfering with the hardware cache
> performance of the P600. With barriers enabled (the default) XFS will
> periodically flush the cache on the RAID card causing write performance problems.
>
Already using nobarrier.
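For the drive-cache question: on Smart Array controllers, HP's hpacucli tool reports both the controller cache settings and the physical drive write cache. Something along these lines should show it, though the exact syntax varies by hpacucli version:

```shell
# Dump the full controller/array/drive configuration and pick out the
# cache-related lines:
hpacucli ctrl all show config detail | grep -i -A1 cache
```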
* Re: extremely slow write performance
2011-01-14 20:48 ` Cory Coager
@ 2011-01-14 22:02 ` Stan Hoeppner
2011-01-18 14:16 ` Cory Coager
0 siblings, 1 reply; 15+ messages in thread
From: Stan Hoeppner @ 2011-01-14 22:02 UTC (permalink / raw)
To: xfs
Cory Coager put forth on 1/14/2011 2:48 PM:
> On 01/14/2011 02:51 PM, Stan Hoeppner wrote:
>> Make sure the write cache on the P600 (what size is it BTW?) is enabled and that
>> the BBU is in working order. Also make sure the P600 is disabling the write
>> caches on the drives themselves. Then...
>>
> Write cache is enabled on the controller, the size is 512MB, and the BBU is in
> good condition (checked with the HP utility). How do I check the write cache on
> the drives?
The controller should do this automatically. You'll have to check the docs to
verify. This is to safeguard data. The BBWC protects unwritten data in the
controller cache only, not the drives' caches. It won't negatively affect
performance if the drives' caches are enabled. On the contrary, it would
probably increase performance a bit. It's simply less safe having them enabled
in the event of a crash.
After rereading your original post I don't think there's any issue here anyway.
You stated you have 24 drives in 2 arrays (although you didn't state if all the
disks are on one P600 or two).
>> Mount with 'nobarrier' so XFS isn't interfering with the hardware cache
>> performance of the P600. With barriers enabled (the default) XFS will
>> periodically flush the cache on the RAID card causing write performance problems.
>>
> Already using nobarrier.
This was the important part I was looking for. It's apparently not a cache
issue then, unless the utility is lying or querying the wrong controller or
something.
Nothing relevant in dmesg or any other logs? No errors of any kind? Does
iostat reveal anything even slightly odd?
I also just noticed you're testing writes with a 1k block size. That seems
awfully small. Does the write throughput increase at all when you test with a
4k/8k/16k block size?
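A sweep like the following answers that in one run (the temporary file path is illustrative):

```shell
# Write the same 8 MiB at several block sizes and compare rates.
for bs_kib in 4 8 16 64 1024; do
  count=$((8192 / bs_kib))            # keep total size constant at 8 MiB
  printf '%5sk: ' "$bs_kib"
  dd if=/dev/zero of=/tmp/bstest bs=${bs_kib}k count=$count \
     conv=fdatasync 2>&1 | tail -n 1  # last line of dd output is the rate
done
rm -f /tmp/bstest
```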
BTW, this is an old machine. PCI-X is dead. Did this slow write trouble just
start recently? What has changed since it previously worked fine?
You're making it very difficult to assist you by not providing basic
troubleshooting information. I.e.
What has changed since the system functioned properly?
When did it change?
Did it ever work properly?
Etc.
God I hate pulling teeth... :)
--
Stan
* Re: extremely slow write performance
2011-01-14 22:02 ` Stan Hoeppner
@ 2011-01-18 14:16 ` Cory Coager
2011-01-19 9:59 ` Stan Hoeppner
2011-01-25 6:21 ` Michael Monnerie
0 siblings, 2 replies; 15+ messages in thread
From: Cory Coager @ 2011-01-18 14:16 UTC (permalink / raw)
To: xfs; +Cc: Stan Hoeppner
On 01/14/2011 05:02 PM, Stan Hoeppner wrote:
> The controller should do this automatically. You'll have to check the docs to
> verify. This is to safeguard data. The BBWC protects unwritten data in the
> controller cache only, not the drives' caches. It won't negatively affect
> performance if the drives' caches are enabled. On the contrary, it would
> probably increase performance a bit. It's simply less safe having them enabled
> in the event of a crash.
>
> After rereading your original post I don't think there's any issue here anyway.
> You stated you have 24 drives in 2 arrays (although you didn't state if all the
> disks are on one P600 or two).
>
Just one P600.
> This was the important part I was looking for. It's apparently not a cache
> issue then, unless the utility is lying or querying the wrong controller or
> something.
>
> Nothing relevant in dmesg or any other logs? No errors of any kind? Does
> iostat reveal anything even slightly odd?
Nothing interesting in dmesg. iostat looks pretty dead on average.
During the dd write it's doing about 7 tps according to iostat.
> I also just noticed you're testing writes with a 1k block size. That seems
> awfully small. Does the write throughput increase at all when you test with a
> 4k/8k/16k block size?
Yes, the throughput does increase with larger block sizes. I was
able to get ~13MB/s with a 16k block size, which is still terrible.
> BTW, this is an old machine. PCI-X is dead. Did this slow write trouble just
> start recently? What has changed since it previously worked fine?
The array is new and was only recently put into service.
> You're making it very difficult to assist you by not providing basic
> troubleshooting information. I.e.
>
> What has changed since the system functioned properly?
> When did it change?
> Did it ever work properly?
> Etc.
>
> God I hate pulling teeth... :)
No, it has never worked properly. Also, I want to stress that I am only
having performance issues with one logical volume, the others seem fine.
* Re: extremely slow write performance
2011-01-18 14:16 ` Cory Coager
@ 2011-01-19 9:59 ` Stan Hoeppner
2011-01-25 14:22 ` Cory Coager
2011-01-25 6:21 ` Michael Monnerie
1 sibling, 1 reply; 15+ messages in thread
From: Stan Hoeppner @ 2011-01-19 9:59 UTC (permalink / raw)
To: xfs
Cory Coager put forth on 1/18/2011 8:16 AM:
> No, it has never worked properly. Also, I want to stress that I am only having
> performance issues with one logical volume, the others seem fine.
Then, logically, there is something different about this logical volume than the
others. All of them reside atop the same volume group, atop the same two
physical RAID6 arrays, correct? Since I'm not quite tired of playing dentist (yet):
1. Were all of the LVs created with the same parameters? If so, can you
demonstrate verification of this to us?
2. Are all of them formatted with XFS? Were all formatted with the same XFS
parameters? If so, can you demonstrate verification of this to us?
3. Are you encrypting, at some level, the one LV that is showing low performance?
Cory: "The two arrays were added to a volume group and multiple logical volumes
were created."
4. Was this volume group preexisting? Are there other storage devices in this
volume group, or _only_ the RAID6 arrays?
5. Have you attempted deleting and recreating the LV with the performance issue?
6. How many total logical volumes are in this volume group?
7. What Linux distribution are you using? What kernel version?
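Most of these can be answered with a single paste; the mount point and device names below are assumed from earlier in the thread:

```shell
uname -a                      # kernel version (question 7)
cat /etc/SuSE-release         # distro release, if SLES (question 7)
xfs_info /shared              # mkfs parameters actually in effect (question 2)
lvdisplay -m /dev/vg0/shared  # per-segment PV layout of the slow LV (1, 4)
lvs vg0                       # all LVs in the volume group (question 6)
```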
We are not magicians here Cory. We need as much data from you as possible or we
can't help you. I thought I made this clear earlier. You need to gather as
much relevant data from that box as you can and present it here if you're
serious about solving this issue.
I get the feeling you just don't really care. In which case, why did you even
ask for help in the first place? Troubleshooting this issue requires your
_full_ participation and effort.
In these situations, it is most often the OP who solves his/her own issue, after
providing enough information here that we can point the OP in the right
direction. The key here is "providing enough information".
--
Stan
* Re: extremely slow write performance
2011-01-18 14:16 ` Cory Coager
2011-01-19 9:59 ` Stan Hoeppner
@ 2011-01-25 6:21 ` Michael Monnerie
2011-01-25 9:48 ` Mathieu AVILA
2011-01-25 14:27 ` Cory Coager
1 sibling, 2 replies; 15+ messages in thread
From: Michael Monnerie @ 2011-01-25 6:21 UTC (permalink / raw)
To: xfs; +Cc: Cory Coager, Stan Hoeppner
On Dienstag, 18. Januar 2011 Cory Coager wrote:
> Also, I want to stress that I am only having performance issues with
> one logical volume, the others seem fine.
You're getting 40MB/s write speed and say that's fine? I get more
performance from a single SATA desktop drive. Your setup seems badly
broken somewhere.
It already starts with your dd speed. I started it on a virtualized VM
on an old, overloaded server and get:
# dd if=/dev/zero of=test.dd bs=1k count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 10.9992 s, 97.6 MB/s
This is a hardware RAID 6 with just 8x 10krpm WD SATA drives. Try to
look into general I/O problems. Maybe you can boot from a current Linux
live CD with a kernel >2.6.30 to see if that helps performance.
Maybe one or more disks in your array are dead and the controller is
crying for replacement?
--
mit freundlichen Grüssen,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [gesprochen: Prot-e-schee]
Tel: +43 660 / 415 6531
// ****** Radiointerview zum Thema Spam ******
// http://www.it-podcast.at/archiv.html#podcast-100716
//
// Haus zu verkaufen: http://zmi.at/langegg/
* Re: extremely slow write performance
2011-01-25 6:21 ` Michael Monnerie
@ 2011-01-25 9:48 ` Mathieu AVILA
2011-01-25 14:25 ` Cory Coager
2011-01-25 14:27 ` Cory Coager
1 sibling, 1 reply; 15+ messages in thread
From: Mathieu AVILA @ 2011-01-25 9:48 UTC (permalink / raw)
To: xfs
On 25/01/2011 07:21, Michael Monnerie wrote:
> On Dienstag, 18. Januar 2011 Cory Coager wrote:
>> Also, I want to stress that I am only having performance issues with
>> one logical volume, the others seem fine.
> You're getting 40MB/s write speed and say that's fine? I get more
> performance from a single SATA desktop drive. Your setup seems badly
> broken somewhere.
>
> It already starts with your dd speed. I started it on a virtualized VM
> on an old, overloaded server and get:
>
> # dd if=/dev/zero of=test.dd bs=1k count=1M
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes (1.1 GB) copied, 10.9992 s, 97.6 MB/s
>
I would say this is inflated by cache hits, isn't it? (Although I agree
it depends on the server's memory dedicated to the FS cache.)
A real cache hit on a real server would give me at least 500 MB/s
(actually I get something around 1 GB/s).
When it becomes disk-bound, on a standard SATA disk I get from
75 MB/s to 140 MB/s depending on the position of the AG I hit.
--
*Mathieu Avila*
IT & Integration Engineer
mathieu.avila@opencubetech.com
OpenCube Technologies http://www.opencubetech.com
Parc Technologique du Canal, 9 avenue de l'Europe
31520 Ramonville St Agne - FRANCE
Tel. : +33 (0) 561 285 606 - Fax : +33 (0) 561 285 635
* Re: extremely slow write performance
2011-01-19 9:59 ` Stan Hoeppner
@ 2011-01-25 14:22 ` Cory Coager
0 siblings, 0 replies; 15+ messages in thread
From: Cory Coager @ 2011-01-25 14:22 UTC (permalink / raw)
To: Stan Hoeppner; +Cc: xfs
On 01/19/2011 04:59 AM, Stan Hoeppner wrote:
> Then, logically, there is something different about this logical volume than the
> others. All of them reside atop the same volume group, atop the same two
> physical RAID6 arrays, correct? Since I'm not quite tired of playing dentist (yet):
>
> 1. Were all of the LVs created with the same parameters? If so, can you
> demonstrate verification of this to us?
Yes...
pvcreate --metadatacopies 2 /dev/cciss/c1d0p1
pvcreate --metadatacopies 2 /dev/cciss/c1d1p1
vgcreate vg0 /dev/cciss/c1d0p1 /dev/cciss/c1d1p1
lvcreate -n shared -L 2.1T /dev/vg0
lvcreate -n homes -L 810G /dev/vg0
> 2. Are all of them formatted with XFS? Were all formatted with the same XFS
> parameters? If so, can you demonstrate verification of this to us?
mkfs.xfs -L homes -i attr=2,size=1024 -l version=2,size=128m,lazy-count=1 /dev/vg0/homes
mkfs.xfs -L shared -i attr=2,size=1024 -l version=2,size=128m,lazy-count=1 /dev/vg0/shared
> 3. Are you encrypting, at some level, the one LV that is showing low performance?
>
> Cory: "The two arrays were added to a volume group and multiple logical volumes
> were created."
No encryption
> 4. Was this volume group preexisting? Are there other storage devices in this
> volume group, or _only_ the RAID6 arrays?
Everything is new, hardware, LV's, file systems...
> 5. Have you attempted deleting and recreating the LV with the performance issue?
No, and I don't have the room to recreate this LV
> 6. How many total logical volumes are in this volume group?
6
> 7. What Linux distribution are you using? What kernel version?
SLES 10 SP2, 2.6.16.60-0.21-bigsmp i686
>
> We are not magicians here Cory. We need as much data from you as possible or we
> can't help you. I thought I made this clear earlier. You need to gather as
> much relevant data from that box as you can and present it here if you're
> serious about solving this issue.
>
> I get the feeling you just don't really care. In which case, why did you even
> ask for help in the first place? Troubleshooting this issue requires your
> _full_ participation and effort.
>
> In these situations, it is most often the OP who solves his/her own issue, after
> providing enough information here that we can point the OP in the right
> direction. The key here is "providing enough information".
Sorry, I'm not trying to withhold information. Whatever you need to
know, just ask and I'll be happy to provide it.
* Re: extremely slow write performance
2011-01-25 9:48 ` Mathieu AVILA
@ 2011-01-25 14:25 ` Cory Coager
2011-01-28 6:22 ` Michael Monnerie
0 siblings, 1 reply; 15+ messages in thread
From: Cory Coager @ 2011-01-25 14:25 UTC (permalink / raw)
To: Mathieu AVILA; +Cc: xfs
On 01/25/2011 04:48 AM, Mathieu AVILA wrote:
>> You're getting 40MB/s write speed and say that's fine? I get more
>> performance from a single SATA desktop drive. Your setup seems badly
>> broken somewhere.
>>
>> It already starts with your dd speed. I started it on a virtualized VM
>> on an old, overloaded server and get:
>>
>> # dd if=/dev/zero of=test.dd bs=1k count=1M
>> 1048576+0 records in
>> 1048576+0 records out
>> 1073741824 bytes (1.1 GB) copied, 10.9992 s, 97.6 MB/s
>>
>
> I would say this is inflated by cache hits, isn't it? (Although I
> agree it depends on the server's memory dedicated to the FS cache.)
>
> A real cache hit on a real server would give me at least 500 MB/s
> (actually I get something around 1 GB/s).
> When it becomes disk-bound, on a standard SATA disk I get from
> 75 MB/s to 140 MB/s depending on the position of the AG I hit.
I agree, it should be a lot faster even on the LVs that are functioning
fine. The disk arrays and disks are new, but the rest of the hardware is
pretty old; that might be part of the problem. It is a ProLiant DL385 G1.
* Re: extremely slow write performance
2011-01-25 6:21 ` Michael Monnerie
2011-01-25 9:48 ` Mathieu AVILA
@ 2011-01-25 14:27 ` Cory Coager
1 sibling, 0 replies; 15+ messages in thread
From: Cory Coager @ 2011-01-25 14:27 UTC (permalink / raw)
To: Michael Monnerie; +Cc: Stan Hoeppner, xfs
On 01/25/2011 01:21 AM, Michael Monnerie wrote:
> You're getting 40MB/s write speed and say that's fine? I get more
> performance from a single SATA desktop drive. Your setup seems badly
> broken somewhere.
>
> It already starts with your dd speed. I started it on a virtualized VM
> on an old, overloaded server and get:
>
> # dd if=/dev/zero of=test.dd bs=1k count=1M
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes (1.1 GB) copied, 10.9992 s, 97.6 MB/s
>
> This is a hardware RAID 6 with just 8x 10krpm WD SATA drives. Try to
> look into general I/O problems. Maybe you can boot from a current Linux
> live CD with a kernel >2.6.30 to see if that helps performance.
>
> Maybe one or more disks in your array are dead and the controller is
> crying for replacement?
I just checked the array configuration utility; no drives are dead.
Unfortunately this server is in production and I am not physically near
the machine, so I won't be able to test with a live CD.
* Re: extremely slow write performance
2011-01-25 14:25 ` Cory Coager
@ 2011-01-28 6:22 ` Michael Monnerie
2011-01-28 13:08 ` Cory Coager
0 siblings, 1 reply; 15+ messages in thread
From: Michael Monnerie @ 2011-01-28 6:22 UTC (permalink / raw)
To: xfs; +Cc: Cory Coager
[-- Attachment #1.1: Type: Text/Plain, Size: 765 bytes --]
On Dienstag, 25. Januar 2011 Cory Coager wrote:
> It is a ProLiant DL385 G1.
OK, so the speed you get could just be normal. You've got a box that
still says "Compaq" on it, right? Just after Christmas I virtualized a
DL380 G1 onto a DL385 G6, and it's much faster now, even when
virtualized.
There's not much point talking about performance on such old hardware.
Be happy when it works without problems ;-)
--
mit freundlichen Grüssen,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [gesprochen: Prot-e-schee]
Tel: +43 660 / 415 6531
// ****** Radiointerview zum Thema Spam ******
// http://www.it-podcast.at/archiv.html#podcast-100716
//
// Haus zu verkaufen: http://zmi.at/langegg/
* Re: extremely slow write performance
2011-01-28 6:22 ` Michael Monnerie
@ 2011-01-28 13:08 ` Cory Coager
0 siblings, 0 replies; 15+ messages in thread
From: Cory Coager @ 2011-01-28 13:08 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On 01/28/2011 01:22 AM, Michael Monnerie wrote:
> OK, so the speed you get could just be normal. You've got a box that
> still says "Compaq" on it, right? Just after Christmas I virtualized a
> DL380 G1 onto a DL385 G6, and it's much faster now, even when
> virtualized.
>
> There's not much point talking about performance on such old hardware.
> Be happy when it works without problems ;-)
Yeah, we should be getting new hardware soon. Hopefully this issue will
go away then.