* Intel X25-M MLC SSD benchmarks
@ 2008-12-10 21:15 ` Raz Ben-Yehuda
  2008-12-10 21:39   ` Matthew Wilcox
  2008-12-12 18:10   ` Greg Freemyer
  0 siblings, 2 replies; 14+ messages in thread
From: Raz Ben-Yehuda @ 2008-12-10 21:15 UTC (permalink / raw)
  To: linux-ide, linux-scsi

Hello
I am wondering if anyone has tried Intel's new disks. I benchmarked them and
I am a bit confused.
According to the spec a single disk should provide 70 MB/s write and 250
MB/s read. Reads are fine: I am reaching that number. Writes, however, are
bad: I am getting 20 MB/s.
I am using dd for the test, and the deadline scheduler.
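(For concreteness, a minimal sketch of that kind of raw-device test, assuming
the disk shows up as /dev/sdX and its contents can be destroyed; the exact
command used is given later in the thread:)

# sequential read test, bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=1M count=1000 iflag=direct
# sequential write test, bypassing the page cache (destroys data on /dev/sdX)
dd if=/dev/zero of=/dev/sdX bs=1M count=1000 oflag=direct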
 
Thank you
Raz



* Re: Intel X25-M MLC SSD benchmarks
  2008-12-10 21:15 ` Intel X25-M MLC SSD benchmarks Raz Ben-Yehuda
@ 2008-12-10 21:39   ` Matthew Wilcox
  2008-12-10 22:12     ` Raz Ben-Yehuda
  2008-12-12 18:10   ` Greg Freemyer
  1 sibling, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2008-12-10 21:39 UTC (permalink / raw)
  To: Raz Ben-Yehuda; +Cc: linux-ide, linux-scsi

On Wed, Dec 10, 2008 at 11:15:12PM +0200, Raz Ben-Yehuda wrote:
> Hello
> I am wondering if anyone has tried Intel's new disks. I benchmarked them
> and I am a bit confused.
> According to the spec a single disk should provide 70 MB/s write and 250
> MB/s read. Reads are fine: I am reaching that number. Writes, however,
> are bad: I am getting 20 MB/s.
> I am using dd for the test, and the deadline scheduler.

What command exactly are you using, and have you tried using the no-op
elevator instead of deadline?  Also, what controller is it hooked up to?
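(For reference, a minimal sketch of checking and switching the elevator per
device through sysfs, assuming the disk shows up as sda:)

cat /sys/block/sda/queue/scheduler           # the scheduler in use is shown in brackets
echo noop > /sys/block/sda/queue/scheduler   # switch that device to the no-op elevator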

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."


* RE: Intel X25-M MLC SSD benchmarks
  2008-12-10 21:39   ` Matthew Wilcox
@ 2008-12-10 22:12     ` Raz Ben-Yehuda
  2008-12-11  3:44       ` Matthew Wilcox
  2008-12-12  3:27       ` Eric D. Mudama
  0 siblings, 2 replies; 14+ messages in thread
From: Raz Ben-Yehuda @ 2008-12-10 22:12 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-ide, linux-scsi

 
-----Original Message-----
From: linux-ide-owner@vger.kernel.org
[mailto:linux-ide-owner@vger.kernel.org] On Behalf Of Matthew Wilcox
Sent: Wednesday, December 10, 2008 11:40 PM
To: Raz Ben-Yehuda
Cc: linux-ide@vger.kernel.org; linux-scsi@vger.kernel.org
Subject: Re: Intel X25-M MLC SSD benchmarks

On Wed, Dec 10, 2008 at 11:15:12PM +0200, Raz Ben-Yehuda wrote:
> Hello
> I am wondering if anyone has tried Intel's new disks. I benchmarked them
> and I am a bit confused.
> According to the spec a single disk should provide 70 MB/s write and 250
> MB/s read. Reads are fine: I am reaching that number. Writes, however,
> are bad: I am getting 20 MB/s.
> I am using dd for the test, and the deadline scheduler.

> What command exactly are you using, and have you tried using the no-op
> elevator instead of deadline?  Also, what controller is it hooked up to?

First, thank you for your reply.
I did not want to dive into the details because they do not change the
result: noop, deadline, and different deadline parameters all behave the
same.
As for the controller, I used 4 different controllers: Adaptec, AHCI and
Intel integrated chips on the Supermicro 1025W-UR motherboard, and a
4th controller, a SuperMicro UIO Adaptec aac card.
All gave the same results for most dd write commands, e.g.
dd if=/dev/zero of=/dev/sda bs=1M count=1000 oflag=direct , and many
other variants such as erase-block-sized writes ( bs=128K ), several
erase blocks at a time, and so on. The kernel is 2.6.18-8.el5.
I used them all on a Supermicro 1025W-UR. The disks have a SAS interface,
80GB.
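(A minimal sketch of that kind of block-size sweep, assuming /dev/sda is the
SSD under test and may be overwritten; each run writes about 1GB:)

dd if=/dev/zero of=/dev/sda bs=128K count=8192 oflag=direct   # erase-block-sized writes (128K, as above)
dd if=/dev/zero of=/dev/sda bs=1M count=1024 oflag=direct     # 1MB writes
dd if=/dev/zero of=/dev/sda bs=4M count=256 oflag=direct      # several erase blocks per write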
Also, I would like to note that I have 8 disks in an array: each one alone
reads at 250 MB/s, but together they degrade to 200 MB/s each. As for
writes, I always get 20 MB/s at best, from a single disk or 20 MB/s x 8
in the array.
A disk in /proc/scsi/scsi identifies like this: 

Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: INTEL SSDSA2MH08 Rev: 045C
  Type:   Direct-Access                    ANSI SCSI revision: 05

And this is how my poor iostat looks:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.13   24.78    0.00   75.09


Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
hda               0.00         0.00         0.00          0          0
sda              33.00         0.00     11136.00          0      11136
sdb             126.00         0.00     42624.00          0      42624
sdc              87.00         0.00     29568.00          0      29568
sdd             122.00         0.00     41728.00          0      41728
sde             121.00         0.00     41344.00          0      41344
sdf             121.00         0.00     41448.00          0      41448
sdg             109.00         0.00     36736.00          0      36736
sdh              48.00         0.00     49152.00          0      49152

and this is lspci:

00:00.0 Host bridge: Intel Corporation 5400 Chipset Memory Controller
Hub (rev 20)
00:01.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 1
(rev 20)
00:03.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 3
(rev 20)
00:05.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 5
(rev 20)
00:07.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 7
(rev 20)
00:09.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 9
(rev 20)
00:0f.0 System peripheral: Intel Corporation 5400 Chipset QuickData
Technology Device (rev 20)
00:10.0 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev
20)
00:10.1 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev
20)
00:10.2 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev
20)
00:10.3 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev
20)
00:10.4 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev
20)
00:11.0 Host bridge: Intel Corporation 5400 Chipset CE/SF Registers (rev
20)
00:15.0 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev
20)
00:15.1 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev
20)
00:16.0 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev
20)
00:16.1 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev
20)
00:1d.0 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
UHCI USB Controller #1 (rev 09)
00:1d.1 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
UHCI USB Controller #2 (rev 09)
00:1d.2 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
UHCI USB Controller #3 (rev 09)
00:1d.3 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
UHCI USB Controller #4 (rev 09)
00:1d.7 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset
EHCI USB2 Controller (rev 09)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d9)
00:1f.0 ISA bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC
Interface Controller (rev 09)
00:1f.1 IDE interface: Intel Corporation 631xESB/632xESB IDE Controller
(rev 09)
00:1f.2 RAID bus controller: Intel Corporation 631xESB/632xESB SATA RAID
Controller (rev 09)
00:1f.3 SMBus: Intel Corporation 631xESB/632xESB/3100 Chipset SMBus
Controller (rev 09)
01:00.0 Ethernet controller: Intel Corporation 82598EB 10 Gigabit AT CX4
Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82598EB 10 Gigabit AT CX4
Network Connection (rev 01)
02:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express
Upstream Port (rev 01)
02:00.3 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express to
PCI-X Bridge (rev 01)
03:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express
Downstream Port E1 (rev 01)
06:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)
07:00.0 Ethernet controller: Intel Corporation 82598EB 10 Gigabit AT CX4
Network Connection (rev 01)
07:00.1 Ethernet controller: Intel Corporation 82598EB 10 Gigabit AT CX4
Network Connection (rev 01)
08:00.0 Ethernet controller: Intel Corporation 82575EB Gigabit Network
Connection (rev 02)
08:00.1 Ethernet controller: Intel Corporation 82575EB Gigabit Network
Connection (rev 02)
09:01.0 VGA compatible controller: ATI Technologies Inc ES1000 (rev 02)





* Re: Intel X25-M MLC SSD benchmarks
  2008-12-10 22:12     ` Raz Ben-Yehuda
@ 2008-12-11  3:44       ` Matthew Wilcox
  2008-12-12  3:27       ` Eric D. Mudama
  1 sibling, 0 replies; 14+ messages in thread
From: Matthew Wilcox @ 2008-12-11  3:44 UTC (permalink / raw)
  To: Raz Ben-Yehuda; +Cc: linux-ide, linux-scsi

On Thu, Dec 11, 2008 at 12:12:37AM +0200, Raz Ben-Yehuda wrote:
> I did not want to dive into the details because they do not change the
> result: noop, deadline, and different deadline parameters all behave the
> same.
> As for the controller, I used 4 different controllers: Adaptec, AHCI and
> Intel integrated chips on the Supermicro 1025W-UR motherboard, and a
> 4th controller, a SuperMicro UIO Adaptec aac card.
> All gave the same results for most dd write commands, e.g.
> dd if=/dev/zero of=/dev/sda bs=1M count=1000 oflag=direct , and many
> other variants such as erase-block-sized writes ( bs=128K ), several
> erase blocks at a time, and so on. The kernel is 2.6.18-8.el5.

OK, I suspect you aren't giving the drive enough work to do for it to
perform at its best.  Try doing something like this:

# ten parallel 1GB direct writes, each to its own 1GB region of the disk
for i in $(seq 0 9); do
	dd if=/dev/zero of=/dev/sda bs=1M count=1000 oflag=direct \
		seek=$(($i * 1000)) &
done
wait

> I used all on a supermicro 1025W-UR. Disks have a SAS interface, 80GB. 
> Also, I would like to note, I have 8 disks in array, while each one
> perform READS 250 MB/s, together I degrade to 200 MB/s each. As for

That doesn't surprise me; you're probably hitting a limitation either of 
the array or the cable itself.  A SAS cable can run up to 6Gbps, which
will be around 600MB/s.  So three drives should be able to saturate your
SAS cable.  If you're using an x4 link, that goes up to 2400MB/s which
should be ample for 8 drives ... maybe you're using a 3Gbps cable which
would limit each drive to 150MB/s.
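(A rough back-of-envelope for those figures, assuming 8b/10b encoding on the
link, i.e. roughly 10 bits on the wire per byte of data:)

echo $(( 3 * 1000 / 10 ))           # ~300 MB/s usable per 3Gbps lane (~600 MB/s at 6Gbps)
echo $(( 3 * 1000 / 10 * 4 / 8 ))   # 3Gbps x4 link shared by 8 drives -> ~150 MB/s per drive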

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."


* Re: Intel X25-M MLC SSD benchmarks
  2008-12-10 22:12     ` Raz Ben-Yehuda
  2008-12-11  3:44       ` Matthew Wilcox
@ 2008-12-12  3:27       ` Eric D. Mudama
  2008-12-12  4:55         ` Eric D. Mudama
  2008-12-12 12:21         ` Raz Ben-Yehuda
  1 sibling, 2 replies; 14+ messages in thread
From: Eric D. Mudama @ 2008-12-12  3:27 UTC (permalink / raw)
  To: Raz Ben-Yehuda; +Cc: Matthew Wilcox, linux-ide, linux-scsi

On 12/10/08, Raz Ben-Yehuda <razb@bitband.com> wrote:
> All gave the same results for most dd write commands, e.g.
> dd if=/dev/zero of=/dev/sda bs=1M count=1000 oflag=direct , and many
> other variants such as erase-block-sized writes ( bs=128K ), several
> erase blocks at a time, and so on. The kernel is 2.6.18-8.el5.
> I used them all on a Supermicro 1025W-UR. The disks have a SAS interface,
> 80GB.
> Also, I would like to note that I have 8 disks in an array: each one alone
> reads at 250 MB/s, but together they degrade to 200 MB/s each. As for
> writes, I always get 20 MB/s at best, from a single disk or 20 MB/s x 8
> in the array.

Out of curiosity, is write cache enabled or disabled on these devices?
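(For reference, a minimal sketch of checking and toggling the drive write
cache from Linux; hdparm works for ATA devices, sdparm for drives behind a
SAS/SCSI HBA. The device names are assumptions:)

hdparm -W /dev/sda            # show whether write-caching is enabled
hdparm -W1 /dev/sda           # enable the drive write cache
hdparm -W0 /dev/sda           # disable it
sdparm --get=WCE /dev/sda     # read the WCE bit through the SCSI layer instead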


* Re: Intel X25-M MLC SSD benchmarks
  2008-12-12  3:27       ` Eric D. Mudama
@ 2008-12-12  4:55         ` Eric D. Mudama
  2008-12-12  7:15           ` Bart Van Assche
  2008-12-12 12:15           ` Matthew Wilcox
  2008-12-12 12:21         ` Raz Ben-Yehuda
  1 sibling, 2 replies; 14+ messages in thread
From: Eric D. Mudama @ 2008-12-12  4:55 UTC (permalink / raw)
  To: Raz Ben-Yehuda; +Cc: Matthew Wilcox, linux-ide, linux-scsi

On 12/11/08, Eric D. Mudama <edmudama@gmail.com> wrote:
> Out of curiosity, is write cache enabled or disabled on these devices?

Oops, I just noticed the oflag=direct, that can have a huge
performance difference depending on the block size.

What numbers are you getting without oflag=direct?


* Re: Intel X25-M MLC SSD benchmarks
  2008-12-12  4:55         ` Eric D. Mudama
@ 2008-12-12  7:15           ` Bart Van Assche
  2008-12-12 14:25             ` Eric D. Mudama
  2008-12-12 12:15           ` Matthew Wilcox
  1 sibling, 1 reply; 14+ messages in thread
From: Bart Van Assche @ 2008-12-12  7:15 UTC (permalink / raw)
  To: Eric D. Mudama; +Cc: Raz Ben-Yehuda, Matthew Wilcox, linux-ide, linux-scsi

On Fri, Dec 12, 2008 at 5:55 AM, Eric D. Mudama <edmudama@gmail.com> wrote:
> On 12/11/08, Eric D. Mudama <edmudama@gmail.com> wrote:
>> Out of curiosity, is write cache enabled or disabled on these devices?
>
> Oops, I just noticed the oflag=direct, that can have a huge
> performance difference depending on the block size.
>
> What numbers are you getting without oflag=direct?

IMHO repeating the measurements without oflag=direct does not make
sense: measurements with oflag=direct tell something about the
performance of the SSD and the software layers on top of it. Without
oflag=direct writes are buffered by Linux and data is written
asynchronously to the SSD.
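(One way to keep a buffered run meaningful is to include the final flush in
the timing, e.g. with conv=fdatasync; a sketch, assuming /dev/sdX may be
overwritten:)

# direct I/O: bypasses the page cache entirely
dd if=/dev/zero of=/dev/sdX bs=1M count=1000 oflag=direct
# buffered, but dd's reported rate includes the final flush to the device
dd if=/dev/zero of=/dev/sdX bs=1M count=1000 conv=fdatasync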

Bart.


* Re: Intel X25-M MLC SSD benchmarks
  2008-12-12  4:55         ` Eric D. Mudama
  2008-12-12  7:15           ` Bart Van Assche
@ 2008-12-12 12:15           ` Matthew Wilcox
  1 sibling, 0 replies; 14+ messages in thread
From: Matthew Wilcox @ 2008-12-12 12:15 UTC (permalink / raw)
  To: Eric D. Mudama; +Cc: Raz Ben-Yehuda, linux-ide, linux-scsi

On Thu, Dec 11, 2008 at 09:55:56PM -0700, Eric D. Mudama wrote:
> On 12/11/08, Eric D. Mudama <edmudama@gmail.com> wrote:
> > Out of curiosity, is write cache enabled or disabled on these devices?
> 
> Oops, I just noticed the oflag=direct, that can have a huge
> performance difference depending on the block size.
> 
> What numbers are you getting without oflag=direct?

Without the oflag=direct option, you're measuring the performance of the
Linux page cache.  That's not very interesting.

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."


* RE: Intel X25-M MLC SSD benchmarks
  2008-12-12  3:27       ` Eric D. Mudama
  2008-12-12  4:55         ` Eric D. Mudama
@ 2008-12-12 12:21         ` Raz Ben-Yehuda
  1 sibling, 0 replies; 14+ messages in thread
From: Raz Ben-Yehuda @ 2008-12-12 12:21 UTC (permalink / raw)
  To: Eric D. Mudama; +Cc: linux-ide, linux-scsi



-----Original Message-----
From: Eric D. Mudama [mailto:edmudama@gmail.com] 
Sent: Friday, December 12, 2008 5:28 AM
To: Raz Ben-Yehuda
Cc: Matthew Wilcox; linux-ide@vger.kernel.org;
linux-scsi@vger.kernel.org
Subject: Re: Intel X25-M MLC SSD benchmarks

On 12/10/08, Raz Ben-Yehuda <razb@bitband.com> wrote:
> All gave the same results for most dd write commands, e.g.
> dd if=/dev/zero of=/dev/sda bs=1M count=1000 oflag=direct , and many
> other variants such as erase-block-sized writes ( bs=128K ), several
> erase blocks at a time, and so on. The kernel is 2.6.18-8.el5.
> I used them all on a Supermicro 1025W-UR. The disks have a SAS interface,
> 80GB.
> Also, I would like to note that I have 8 disks in an array: each one alone
> reads at 250 MB/s, but together they degrade to 200 MB/s each. As for
> writes, I always get 20 MB/s at best, from a single disk or 20 MB/s x 8
> in the array.

>> Out of curiosity, is write cache enabled or disabled on these devices?

It does not matter: in both cases I get the same low numbers. Whether the
write cache is enabled depends on the controller defaults (Adaptec, AHCI
and Intel each provide a different default).



* Re: Intel X25-M MLC SSD benchmarks
  2008-12-12  7:15           ` Bart Van Assche
@ 2008-12-12 14:25             ` Eric D. Mudama
  2008-12-12 15:20               ` Matthew Wilcox
  0 siblings, 1 reply; 14+ messages in thread
From: Eric D. Mudama @ 2008-12-12 14:25 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Raz Ben-Yehuda, Matthew Wilcox, linux-ide, linux-scsi

On Fri, Dec 12, 2008 at 12:15 AM, Bart Van Assche
<bart.vanassche@gmail.com> wrote:
>
> On Fri, Dec 12, 2008 at 5:55 AM, Eric D. Mudama <edmudama@gmail.com> wrote:
> > On 12/11/08, Eric D. Mudama <edmudama@gmail.com> wrote:
> >> Out of curiosity, is write cache enabled or disabled on these devices?
> >
> > Oops, I just noticed the oflag=direct, that can have a huge
> > performance difference depending on the block size.
> >
> > What numbers are you getting without oflag=direct?
>
> IMHO repeating the measurements without oflag=direct does not make
> sense: measurements with oflag=direct tell something about the
> performance of the SSD and the software layers on top of it. Without
> oflag=direct writes are buffered by Linux and data is written
> asynchronously to the SSD.

Oops, I guess I assumed that by writing a few gigabytes of data it
wouldn't matter that much.  I'll run the test again in a bit once I'm
off this solaris machine.


* Re: Intel X25-M MLC SSD benchmarks
  2008-12-12 14:25             ` Eric D. Mudama
@ 2008-12-12 15:20               ` Matthew Wilcox
  2008-12-12 15:55                 ` Eric D. Mudama
  0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2008-12-12 15:20 UTC (permalink / raw)
  To: Eric D. Mudama; +Cc: Bart Van Assche, Raz Ben-Yehuda, linux-ide, linux-scsi

On Fri, Dec 12, 2008 at 07:25:18AM -0700, Eric D. Mudama wrote:
> Oops, I guess I assumed that by writing a few gigabytes of data it
> wouldn't matter that much.  I'll run the test again in a bit once I'm
> off this solaris machine.

You need to rescale for 2008; people often have 4GB or more RAM in their
machines ... hell, I have one machine here from 2002 with 14GB in it
(I don't power it up very often because it's too noisy).  The test was
only writing 1GB of data, which would fit in the page cache of my laptop,
never mind the kind of machine that's likely to have an 8-way array
attached to it.
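(A minimal way to size the run against the machine's RAM, assuming bash and
that /dev/sdX can be overwritten:)

mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
count=$(( 2 * mem_kb / 1024 ))     # roughly 2x RAM, in 1MB blocks
dd if=/dev/zero of=/dev/sdX bs=1M count=$count conv=fdatasync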

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."


* Re: Intel X25-M MLC SSD benchmarks
  2008-12-12 15:20               ` Matthew Wilcox
@ 2008-12-12 15:55                 ` Eric D. Mudama
  2008-12-12 17:12                   ` Raz Ben-Yehuda
  0 siblings, 1 reply; 14+ messages in thread
From: Eric D. Mudama @ 2008-12-12 15:55 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Bart Van Assche, Raz Ben-Yehuda, linux-ide, linux-scsi

On 12/12/08, Matthew Wilcox <matthew@wil.cx> wrote:
> On Fri, Dec 12, 2008 at 07:25:18AM -0700, Eric D. Mudama wrote:
>> Oops, I guess I assumed that by writing a few gigabytes of data it
>> wouldn't matter that much.  I'll run the test again in a bit once I'm
>> off this solaris machine.
>
> You need to rescale for 2008; people often have 4GB or more RAM in their
> machines ... hell, I have one machine here from 2002 with 14GB in it
> (I don't power it up very often because it's too noisy).  The test was
> only writing 1GB of data, which would fit in the page cache of my laptop,
> never mind the kind of machine that's likely to have an 8-way array
> attached to it.

I just retested on a linux box at work, and got 79 MB/s on both the
X18-M and X25-M, and 197MB/s on an X25-E.  All were done with bs=1M
count=1000 oflag=direct.

--eric


* RE: Intel X25-M MLC SSD benchmarks
  2008-12-12 15:55                 ` Eric D. Mudama
@ 2008-12-12 17:12                   ` Raz Ben-Yehuda
  0 siblings, 0 replies; 14+ messages in thread
From: Raz Ben-Yehuda @ 2008-12-12 17:12 UTC (permalink / raw)
  To: Eric D. Mudama; +Cc: linux-ide, linux-scsi



-----Original Message-----
From: Eric D. Mudama [mailto:edmudama@gmail.com] 
Sent: Friday, December 12, 2008 5:55 PM
To: Matthew Wilcox
Cc: Bart Van Assche; Raz Ben-Yehuda; linux-ide@vger.kernel.org;
linux-scsi@vger.kernel.org
Subject: Re: Intel X25-M MLC SSD benchmarks

On 12/12/08, Matthew Wilcox <matthew@wil.cx> wrote:
> On Fri, Dec 12, 2008 at 07:25:18AM -0700, Eric D. Mudama wrote:
>> Oops, I guess I assumed that by writing a few gigabytes of data it
>> wouldn't matter that much.  I'll run the test again in a bit once I'm
>> off this solaris machine.
>
> You need to rescale for 2008; people often have 4GB or more RAM in their
> machines ... hell, I have one machine here from 2002 with 14GB in it
> (I don't power it up very often because it's too noisy).  The test was
> only writing 1GB of data, which would fit in the page cache of my laptop,
> never mind the kind of machine that's likely to have an 8-way array
> attached to it.

>> I just retested on a linux box at work, and got 79 MB/s on both the
>> X18-M and X25-M, and 197MB/s on an X25-E.  All were done with bs=1M
>> count=1000 oflag=direct.
>>
>> --eric

What is the serial number of your X25-M?
What controller?
How new is it?
Please dd the whole disk first and then retest, just to be sure you have
used all the erase blocks.
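(A minimal sketch of that pre-conditioning pass, assuming the device is
/dev/sdX and everything on it can be destroyed:)

# write the entire device once so every erase block has been used at least once
dd if=/dev/zero of=/dev/sdX bs=1M oflag=direct
# then repeat the write benchmark
dd if=/dev/zero of=/dev/sdX bs=1M count=1000 oflag=direct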


* Re: Intel X25-M MLC SSD benchmarks
  2008-12-10 21:15 ` Intel X25-M MLC SSD benchmarks Raz Ben-Yehuda
  2008-12-10 21:39   ` Matthew Wilcox
@ 2008-12-12 18:10   ` Greg Freemyer
  1 sibling, 0 replies; 14+ messages in thread
From: Greg Freemyer @ 2008-12-12 18:10 UTC (permalink / raw)
  To: Raz Ben-Yehuda; +Cc: linux-ide, linux-scsi

On Wed, Dec 10, 2008 at 4:15 PM, Raz Ben-Yehuda <razb@bitband.com> wrote:
> Hello
> I am wondering if anyone has tried Intel's new disks. I benchmarked them
> and I am a bit confused.
> According to the spec a single disk should provide 70 MB/s write and 250
> MB/s read. Reads are fine: I am reaching that number. Writes, however,
> are bad: I am getting 20 MB/s.
> I am using dd for the test, and the deadline scheduler.
>
> Thank you
> Raz

FYI:
Some recent kernels have performance bugs in /dev/zero.  (The bug may
only exist in SuSE kernels; I'm not sure.)

Be sure to baseline it before you trust it for any benchmarks.

My current machine/kernel is plenty fast, but you need to check for
yourself, and it only takes a minute.

# dd if=/dev/zero of=/dev/null bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 33.4257 s, 3.1 GB/s

Greg
-- 
Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com

