* Port Multipliers
@ 2009-09-10 12:56 Drew
  2009-09-10 16:11 ` Majed B.
  0 siblings, 1 reply; 35+ messages in thread
From: Drew @ 2009-09-10 12:56 UTC (permalink / raw)
  To: Linux RAID Mailing List

Hi,

I've been reading up on port multipliers and I was wondering if
anyone's had any experience with them that they'd like to share. From
what I've read of them I would expect the performance to be slower,
but how much slower are they in the real world?

I ask because I'm exploring options for my organization. We're
looking at some large JBOD enclosures that can handle up to 15 SATA
drives, and I'm trying to minimize the performance impact on the
drives while at the same time minimizing the amount of cabling
between the openfiler box and the enclosure. The server using the
enclosure will be running openfiler as a backend iSCSI target(?) for
a couple of VMware ESXi hosts, and while the hosts won't be running
anything heavy duty, mainly just filesharing & email, I'm hoping to
keep things running as fast as possible.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 12:56 Port Multipliers Drew
@ 2009-09-10 16:11 ` Majed B.
  2009-09-10 18:14   ` Drew
  2009-09-10 18:35   ` Drew
  0 siblings, 2 replies; 35+ messages in thread
From: Majed B. @ 2009-09-10 16:11 UTC (permalink / raw)
  To: Drew; +Cc: Linux RAID Mailing List

If you're looking at port multipliers, you need to find PCI-Express
modules if you want them to be fast. The PCI ones are gonna be very
slow when you have more than 2 disks per card.

An alternative would be buying server motherboards that have 10+
ports. I found a few on Newegg before. Here are some:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131287
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131239

They have 6 SATA & 8 SAS ports (SATA disks work on SAS ports, but SAS
disks don't work on SATA ports) and 3 PCI-E slots! (If you get a
4-port PCI-e card for each slot, you get 12 more ports in addition to
the built-in 14.)

You might want to check out Supermicro's offerings as well.

I hope this helps.

On Thu, Sep 10, 2009 at 3:56 PM, Drew<drew.kay@gmail.com> wrote:
> Hi,
>
> I've been reading up on port multipliers and I was wondering if
> anyone's had any experience with them they'd like to share. From what
> I've read of them I would expect the performance to be slower but how
> much slower are they in the real world?
>
> I ask because I'm exploring options for my organization and we're
> looking at some large JBOD enclosures that can handle up to 15 SATA
> drives and I'm trying to minimize performance impact on the drives
> while at the same time minimize the amount of cabling between the
> openfiler and the enclosure. The server using the enclosure will be
> running openfiler as a backend iSCSI target(?) for a couple of VMware
> ESXi hosts and while the hosts won't be running anything heavy duty,
> mainly just filesharing & email, I'm hoping to keep things running as
> fast as possible.
>
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 16:11 ` Majed B.
@ 2009-09-10 18:14   ` Drew
  2009-09-10 18:32     ` Majed B.
  2009-09-10 19:14     ` Richard Scobie
  2009-09-10 18:35   ` Drew
  1 sibling, 2 replies; 35+ messages in thread
From: Drew @ 2009-09-10 18:14 UTC (permalink / raw)
  To: Majed B.; +Cc: Linux RAID Mailing List

> If you're looking at port multipliers, you need to find PCI-Express
> modules if you want them to be fast. The PCI ones are gonna be very
> slow when you have more than 2 disks per card.

The existing server we're planning to re-purpose for this is an IBM
xSeries with two free PCI-X/100 slots and dual PCIe/x4 slots.

> You might want to checkout Supermicro's offerings as well.

I'd love to be able to upgrade the servers, but the bean counters
won't authorize new servers until the existing kit has been fully
amortized and/or we're overloading the existing units. Given that the
main ESXi host is an 8-core IBM x445 that's about 60% loaded and I
have three years' worth of amortization left, I can't add new
servers.

Storage is another matter. We're pushing about 80% of our current
400GB capacity, and with our OEM suppliers (we sell commercial
trucks) advising us to expect more of our documentation and training
to go electronic, I'm looking at shelves' worth of manuals and
training material being made electronic over the next few years.
Given that our data transmission costs (~$10/GB) are much higher than
local storage, we're looking at local storage.

We've considered upgrading our SCSI drives, but given the per-GB cost
difference between SATA and SCSI, SCSI just isn't worth it. We pay
about $1-$2/GB for SCSI vs. $0.25-$0.50/GB for server-grade SATA.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 18:14   ` Drew
@ 2009-09-10 18:32     ` Majed B.
  2009-09-10 18:48       ` Drew
  2009-09-10 19:14     ` Richard Scobie
  1 sibling, 1 reply; 35+ messages in thread
From: Majed B. @ 2009-09-10 18:32 UTC (permalink / raw)
  To: Drew; +Cc: Linux RAID Mailing List

While on Newegg, I saw Syba PCI-X cards that run on PCI-X at 100MHz,
64-bit, so you can get about 250MB/s per disk - according to
Wikipedia, PCI-X @ 133MHz gives about 1064MB/s.

You get 8 ports from this card. Look for others under "port
multiplier."

I've dealt with IBM xSeries boxes. The normal tower chassis can house
8 disks only. I don't know how you're gonna squeeze more disks into
it!

On Thu, Sep 10, 2009 at 9:14 PM, Drew<drew.kay@gmail.com> wrote:
>> If you're looking at port multipliers, you need to find PCI-Express
>> modules if you want them to be fast. The PCI ones are gonna be very
>> slow when you have more than 2 disks per card.
>
> The existing server we're planning to re-purpose for this is an IBM
> xSeries with two free PCI-X/100 slots and dual PCIe/x4 slots.
>
>> You might want to checkout Supermicro's offerings as well.
>
> I'd love to be able to upgrade the servers but the bean counters won't
> authorize new servers until the existing kit have been fully amortized
> and/or we're overloading the existing units. Given the main ESXi host
> is a 8 core IBM x445 that's about 60% loaded and I have 3 years worth
> of amortization left I can't add new servers.
>
> Storage is another matter, we're pushing to about 80% of our current
> 400GB capacity and with our OEM suppliers (we sell commercial trucks)
> advising us to expect more of our documentation and training going
> electronic, I'm looking at shelves worth of manuals and training
> material being made electronic over the next few years and given our
> data transmission costs are much higher (~$10/GB) then local storage,
> we're looking at local storage.
>
> We've considered upgrading our SCSI drives but the per GB cost
> difference of SATA vs SCSI drives just isn't worth it. We pay about
> $1-$2/GB for SCSI vs $0.25-$0.50/GB for server grade SATA.
>
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 16:11 ` Majed B.
  2009-09-10 18:14   ` Drew
@ 2009-09-10 18:35   ` Drew
  2009-09-10 18:44     ` Majed B.
  2009-09-10 18:45     ` Mikael Abrahamsson
  1 sibling, 2 replies; 35+ messages in thread
From: Drew @ 2009-09-10 18:35 UTC (permalink / raw)
  To: Majed B.; +Cc: Linux RAID Mailing List

> If you're looking at port multipliers, you need to find PCI-Express
> modules if you want them to be fast. The PCI ones are gonna be very
> slow when you have more than 2 disks per card.

I'm definitely going to use the PCIX/PCIe slots for the Host Adapter.

What I'm wondering is, if I use an HBA and port multiplier that
support FIS-based switching, say a Sil 3124 & 3726, how much of a
loss in data transfer rate can I expect from a RAID array built off
the PM as opposed to each disk plugged in separately?

An example configuration I'm looking at is a Sil3124 4-port HBA with
Sil3726 5-to-1 PMs attached to three of the ports. Each PM then has
four disks hung off it. If I create, for example, a RAID5 array on
each PM, what sort of speed degradation would I be looking at
compared to making a RAID5 array off just the 3124?

-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 18:35   ` Drew
@ 2009-09-10 18:44     ` Majed B.
  2009-09-15 17:56       ` Doug Ledford
  2009-09-10 18:45     ` Mikael Abrahamsson
  1 sibling, 1 reply; 35+ messages in thread
From: Majed B. @ 2009-09-10 18:44 UTC (permalink / raw)
  To: Drew; +Cc: Linux RAID Mailing List

The maximum throughput you'll get is the PCI bus's speed. Make sure
to note which version your server has.

The Silicon Image controller will be your bottleneck here, but I
don't have any numbers to say how much of a loss you'll see. You'd
have to search around for people who have already benchmarked their
systems, or buy/request a card and test it out yourself.

If you do get a card and test it, make sure that you report back to us
and update the wiki: http://linux-raid.osdl.org/index.php/Performance

On Thu, Sep 10, 2009 at 9:35 PM, Drew<drew.kay@gmail.com> wrote:
>> If you're looking at port multipliers, you need to find PCI-Express
>> modules if you want them to be fast. The PCI ones are gonna be very
>> slow when you have more than 2 disks per card.
>
> I'm definitely going to use the PCIX/PCIe slots for the Host Adapter.
>
> What I'm wondering is if I use a HBA and Port Multiplier that support
> FIS based switching, say a Sil 3124 & 3726, how much of a loss in data
> transfer rate can I expect from the RAID array built off the PM as
> opposed to each disk plugged in separately?
>
> An example configuration I'm looking at is a Sil3124 4 port HBA with
> three of the ports having Sil3726 5to1 PMs attached. Each PM then has
> four disks hung off the PM. If I create a RAID5 array for example on
> each PM, what sort of speed degradation would I be looking at compared
> to making a RAID5 off just the 3124?
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 18:35   ` Drew
  2009-09-10 18:44     ` Majed B.
@ 2009-09-10 18:45     ` Mikael Abrahamsson
  1 sibling, 0 replies; 35+ messages in thread
From: Mikael Abrahamsson @ 2009-09-10 18:45 UTC (permalink / raw)
  To: Linux RAID Mailing List

On Thu, 10 Sep 2009, Drew wrote:

> each PM, what sort of speed degradation would I be looking at compared 
> to making a RAID5 off just the 3124?

My biggest beef with some of the PMPs out there is the caveats that
come with them, such as "you have to have a drive in port 0,
otherwise all the drives on that PMP go haywire".

So whatever solution you go for, test it properly so that it works in
your environment. Try hotswapping all the drives, make sure you
understand how the OS reacts to various combinations of drives being
present or absent, try rebooting in degraded modes, etc.
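
For example, something along these lines (just a sketch - /dev/md0
and /dev/sdX are placeholders for whatever your array and the
PMP-attached drive actually are) exercises the degraded paths from
the md side:

  # mdadm /dev/md0 --fail /dev/sdX      (mark the drive faulty)
  # mdadm /dev/md0 --remove /dev/sdX    (pull it out of the array)
    ... physically swap the drive here and watch dmesg ...
  # mdadm /dev/md0 --add /dev/sdX       (re-add it and let it resync)
  # cat /proc/mdstat                    (check array state and resync)

That only covers the md side, of course; the interesting part is
usually what the controller/PMP does when a drive disappears
underneath it.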

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 18:32     ` Majed B.
@ 2009-09-10 18:48       ` Drew
  2009-09-10 18:53         ` Majed B.
  0 siblings, 1 reply; 35+ messages in thread
From: Drew @ 2009-09-10 18:48 UTC (permalink / raw)
  To: Majed B.; +Cc: Linux RAID Mailing List

> I've dealt with IBM xSeries boxes. The normal tower chassis can house
> 8 disks only. I don't know how you're gonna squeeze more disks into
> it!

Not planning to. :-)

One option I was looking at was something along the lines of
Addonics' "Storage Rack" with 5SA hotswap trays for the disks. The
box can handle up to 15 disks over one SATA multilane card through
the use of 3 PMs internally. With that example I'd be using a
PCI-X/133 card.


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 18:48       ` Drew
@ 2009-09-10 18:53         ` Majed B.
  0 siblings, 0 replies; 35+ messages in thread
From: Majed B. @ 2009-09-10 18:53 UTC (permalink / raw)
  To: Drew; +Cc: Linux RAID Mailing List

You may find this interesting ;)
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

On Thu, Sep 10, 2009 at 9:48 PM, Drew<drew.kay@gmail.com> wrote:
>> I've dealt with IBM xSeries boxes. The normal tower chassis can house
>> 8 disks only. I don't know how you're gonna squeeze more disks into
>> it!
>
> Not planning to. :-)
>
> One option I was looking at was something along the lines of Addonic's
> "Storage Rack" with 5SA hotswap trays for the disks. The box can
> handle up to 15 disks over one SATA multilane card though the use of 3
> PMs internally. With that example I'd be using a PCI-X/133 card.
>
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 18:14   ` Drew
  2009-09-10 18:32     ` Majed B.
@ 2009-09-10 19:14     ` Richard Scobie
  1 sibling, 0 replies; 35+ messages in thread
From: Richard Scobie @ 2009-09-10 19:14 UTC (permalink / raw)
  To: Drew; +Cc: Majed B., Linux RAID Mailing List

Drew wrote:
>>If you're looking at port multipliers, you need to find PCI-Express
>>modules if you want them to be fast. The PCI ones are gonna be very
>>slow when you have more than 2 disks per card.
> 
> 
> The existing server we're planning to re-purpose for this is an IBM
> xSeries with two free PCI-X/100 slots and dual PCIe/x4 slots.

Your best bet might be to use one of these HBAs:
 
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3801e/index.html

although you would need to check whether it functions OK in an x4 slot - 
obviously the maximum performance will not be reached, but if you are 
only using one output you should be OK.

This would then be connected to an enclosure like this:

http://www.aicipc.com/ProductDetail.aspx?ref=XJ1000%20series%20-%203U%2016-bay

which contains a port expander connecting all the drives.

I have been running 2 similar setups (although using the
LSISAS3442E-R version of the HBA - around $US230 - in an x8 slot) for
over a year without any trouble, with the drives configured as an md
RAID6.

The HBAs can have alternate firmware loaded which removes the onboard
hardware RAID0/1 functionality.

Regards,

Richard

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-10 18:44     ` Majed B.
@ 2009-09-15 17:56       ` Doug Ledford
  2009-09-15 18:12         ` Majed B.
                           ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Doug Ledford @ 2009-09-15 17:56 UTC (permalink / raw)
  To: Majed B.; +Cc: Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 3617 bytes --]

On Sep 10, 2009, at 2:44 PM, Majed B. wrote:
> The maximum throughput you'll get is the PCI bus's speed. Make sure to
> note which version your server has.
>
> The silicon image controller will be your bottleneck here, but I don't
> have any numbers to say how much of a loss you'll be at. You'd have to
> search around for those who already benchmarked their systems, or
> buy/request a card to test it out.

I've actually been doing some of those benchmarks here.  Given a  
Silicon Image 3124 card in a x1 PCI-e slot, my maximum throughput  
should be about 250MB/s (PCI-e limitation).  My drives behind the pm  
are all capable of about 80MB/s, and I have 4 drives.  What I've found  
is that when accessing one drive by itself, I get 80MB/s.  When  
accessing more than one drive, I get a total of about 120MB/s, but  
it's divided by however many drives I'm accessing.  So, two drives is  
roughly 60MB/s each, 3 drives about 40MB/s each, and 4 drives about  
30MB/s each.

This is then complicated by whether or not you have motherboard ports  
in the same raid array.  As the motherboard ports all get simultaneous  
drive speed more or less (up to 500MB/s aggregate in my test machine  
anyway), it's worth noting that the motherboard drives slow down to  
whatever speed you are getting on the drives behind the pm whenever  
they are combined.  So, even if 5 drives on the motherboard could do  
500MB/s total, 100MB/s each, if they are combined with 4 drives behind  
a pm at 30MB/s each, they switch down to 30MB/s each as well, and the  
combined total would then become 9 * 30MB/s for 270MB/s, considerably  
slower than just the 5 drives on the motherboard by themselves.   
However, if all your drives are behind pms, then I would expect to get  
a fairly linear speed increase as you increase the number of pms.  You  
can then control how fast the overall array is by controlling how many  
drives are behind each pm up to the point that you reach PCI bus or  
memory or CPU bottlenecks.
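
(If anyone wants to see this per-drive vs. concurrent behaviour on
their own hardware, here's a rough sketch - the drive names are
placeholders, and iflag=direct keeps the page cache out of the
numbers:

  $ dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct
  $ for d in sdb sdc sdd sde; do dd if=/dev/$d of=/dev/null bs=1M count=4096 iflag=direct & done; wait

Run the first against one drive behind the pm by itself, then the
loop to hit all four at once, and compare the totals.)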

> If you do get a card and test it, make sure that you report back to us
> and update the wiki: http://linux-raid.osdl.org/index.php/Performance
>
> On Thu, Sep 10, 2009 at 9:35 PM, Drew<drew.kay@gmail.com> wrote:
>>> If you're looking at port multipliers, you need to find PCI-Express
>>> modules if you want them to be fast. The PCI ones are gonna be very
>>> slow when you have more than 2 disks per card.
>>
>> I'm definitely going to use the PCIX/PCIe slots for the Host Adapter.
>>
>> What I'm wondering is if I use a HBA and Port Multiplier that support
>> FIS based switching, say a Sil 3124 & 3726, how much of a loss in  
>> data
>> transfer rate can I expect from the RAID array built off the PM as
>> opposed to each disk plugged in separately?
>>
>> An example configuration I'm looking at is a Sil3124 4 port HBA with
>> three of the ports having Sil3726 5to1 PMs attached. Each PM then has
>> four disks hung off the PM. If I create a RAID5 array for example on
>> each PM, what sort of speed degradation would I be looking at  
>> compared
>> to making a RAID5 off just the 3124?
>>
>> --
>> Drew
>>
>> "Nothing in life is to be feared. It is only to be understood."
>> --Marie Curie
>>
>
>
>
> -- 
>       Majed B.
> --
> To unsubscribe from this list: send the line "unsubscribe linux- 
> raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 17:56       ` Doug Ledford
@ 2009-09-15 18:12         ` Majed B.
  2009-09-15 19:55           ` Doug Ledford
  2009-09-15 20:28         ` Greg Freemyer
  2009-09-16 15:34         ` John Robinson
  2 siblings, 1 reply; 35+ messages in thread
From: Majed B. @ 2009-09-15 18:12 UTC (permalink / raw)
  To: Doug Ledford; +Cc: Drew, Linux RAID Mailing List

Well, just because the PCI-e 1x bus can do 250 MB/s, it doesn't mean
that the Port Multiplier (PM) can reach that speed - hence my
suggestion to test the card itself with 1 disk to see its max speed,
then add another, and so on.

Some PMs can communicate with each other. Check the specification
sheet to see if your PM can do that. If that's the case, keep your
disks of one array connected to PMs of the same chip, and use the
built-in ports of the motherboard for another array or just normal
disks.

On Tue, Sep 15, 2009 at 8:56 PM, Doug Ledford <dledford@redhat.com> wrote:
> On Sep 10, 2009, at 2:44 PM, Majed B. wrote:
>>
>> The maximum throughput you'll get is the PCI bus's speed. Make sure to
>> note which version your server has.
>>
>> The silicon image controller will be your bottleneck here, but I don't
>> have any numbers to say how much of a loss you'll be at. You'd have to
>> search around for those who already benchmarked their systems, or
>> buy/request a card to test it out.
>
> I've actually been doing some of those benchmarks here.  Given a Silicon
> Image 3124 card in a x1 PCI-e slot, my maximum throughput should be about
> 250MB/s (PCI-e limitation).  My drives behind the pm are all capable of
> about 80MB/s, and I have 4 drives.  What I've found is that when accessing
> one drive by itself, I get 80MB/s.  When accessing more than one drive, I
> get a total of about 120MB/s, but it's divided by however many drives I'm
> accessing.  So, two drives is roughly 60MB/s each, 3 drives about 40MB/s
> each, and 4 drives about 30MB/s each.
>
> This is then complicated by whether or not you have motherboard ports in the
> same raid array.  As the motherboard ports all get simultaneous drive speed
> more or less (up to 500MB/s aggregate in my test machine anyway), it's worth
> noting that the motherboard drives slow down to whatever speed you are
> getting on the drives behind the pm whenever they are combined.  So, even if
> 5 drives on the motherboard could do 500MB/s total, 100MB/s each, if they
> are combined with 4 drives behind a pm at 30MB/s each, they switch down to
> 30MB/s each as well, and the combined total would then become 9 * 30MB/s for
> 270MB/s, considerably slower than just the 5 drives on the motherboard by
> themselves.  However, if all your drives are behind pms, then I would expect
> to get a fairly linear speed increase as you increase the number of pms.
>  You can then control how fast the overall array is by controlling how many
> drives are behind each pm up to the point that you reach PCI bus or memory
> or CPU bottlenecks.
>
>> If you do get a card and test it, make sure that you report back to us
>> and update the wiki: http://linux-raid.osdl.org/index.php/Performance
>>
>> On Thu, Sep 10, 2009 at 9:35 PM, Drew<drew.kay@gmail.com> wrote:
>>>>
>>>> If you're looking at port multipliers, you need to find PCI-Express
>>>> modules if you want them to be fast. The PCI ones are gonna be very
>>>> slow when you have more than 2 disks per card.
>>>
>>> I'm definitely going to use the PCIX/PCIe slots for the Host Adapter.
>>>
>>> What I'm wondering is if I use a HBA and Port Multiplier that support
>>> FIS based switching, say a Sil 3124 & 3726, how much of a loss in data
>>> transfer rate can I expect from the RAID array built off the PM as
>>> opposed to each disk plugged in separately?
>>>
>>> An example configuration I'm looking at is a Sil3124 4 port HBA with
>>> three of the ports having Sil3726 5to1 PMs attached. Each PM then has
>>> four disks hung off the PM. If I create a RAID5 array for example on
>>> each PM, what sort of speed degradation would I be looking at compared
>>> to making a RAID5 off just the 3124?
>>>
>>> --
>>> Drew
>>>
>>> "Nothing in life is to be feared. It is only to be understood."
>>> --Marie Curie
>>>
>>
>>
>>
>> --
>>      Majed B.
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>
> --
>
> Doug Ledford <dledford@redhat.com>
>
> GPG KeyID: CFBFF194
> http://people.redhat.com/dledford
>
> InfiniBand Specific RPMS
> http://people.redhat.com/dledford/Infiniband
>
>
>
>
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 18:12         ` Majed B.
@ 2009-09-15 19:55           ` Doug Ledford
  2009-09-15 20:08             ` Majed B.
  0 siblings, 1 reply; 35+ messages in thread
From: Doug Ledford @ 2009-09-15 19:55 UTC (permalink / raw)
  To: Majed B.; +Cc: Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 1198 bytes --]

On Sep 15, 2009, at 2:12 PM, Majed B. wrote:
> Well, just because the PCI-e 1x bus can do 250 MB/s, it doesn't mean
> that the Port Multiplier (PM) can reach that speed, hence me telling
> you to test the card itself with 1 disk to see its max speed, then add
> another and so on.

You didn't tell me, you told Drew.  And I wasn't reporting test
results I got in response to your instructions; these tests had
already been done (and a good deal more, in fact).  I was just
relaying what I found with this card and this setup for Drew's
benefit.

> Some PMs can communicate with each other. Check the specification
> sheet to see if your PM can do that. If that's the case, keep your
> disks of one array connected to PMs of the same chip, and use the
> built-in ports of the motherboard for another array or just normal
> disks.

This is a test machine I built specifically for testing bare ports  
versus pm setups.  I destroy and create new raid arrays on it all the  
time, and none of them are for real use, just benchmarking.

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 19:55           ` Doug Ledford
@ 2009-09-15 20:08             ` Majed B.
  0 siblings, 0 replies; 35+ messages in thread
From: Majed B. @ 2009-09-15 20:08 UTC (permalink / raw)
  To: Doug Ledford; +Cc: Drew, Linux RAID Mailing List

My bad :)

On Tue, Sep 15, 2009 at 10:55 PM, Doug Ledford <dledford@redhat.com> wrote:
> On Sep 15, 2009, at 2:12 PM, Majed B. wrote:
>>
>> Well, just because the PCI-e 1x bus can do 250 MB/s, it doesn't mean
>> that the Port Multiplier (PM) can reach that speed, hence me telling
>> you to test the card itself with 1 disk to see its max speed, then add
>> another and so on.
>
> You didn't tell me, you told Drew.  And I wasn't reporting test results I
> got in response to your instructions, these tests have already been done
> (and well more in fact).  I was just relaying what I found on this card and
> this setup for Drew's benefit.
>
>> Some PMs can communicate with each other. Check the specification
>> sheet to see if your PM can do that. If that's the case, keep your
>> disks of one array connected to PMs of the same chip, and use the
>> built-in ports of the motherboard for another array or just normal
>> disks.
>
> This is a test machine I built specifically for testing bare ports versus pm
> setups.  I destroy and create new raid arrays on it all the time, and none
> of them are for real use, just benchmarking.
>
> --
>
> Doug Ledford <dledford@redhat.com>
>
> GPG KeyID: CFBFF194
> http://people.redhat.com/dledford
>
> InfiniBand Specific RPMS
> http://people.redhat.com/dledford/Infiniband
>
>
>
>
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 17:56       ` Doug Ledford
  2009-09-15 18:12         ` Majed B.
@ 2009-09-15 20:28         ` Greg Freemyer
  2009-09-15 20:34           ` Doug Ledford
  2009-09-16 15:34         ` John Robinson
  2 siblings, 1 reply; 35+ messages in thread
From: Greg Freemyer @ 2009-09-15 20:28 UTC (permalink / raw)
  To: Doug Ledford; +Cc: Majed B., Drew, Linux RAID Mailing List

>
> I've actually been doing some of those benchmarks here.  Given a Silicon
> Image 3124 card in a x1 PCI-e slot, my maximum throughput should be about
> 250MB/s (PCI-e limitation).  My drives behind the pm are all capable of
> about 80MB/s, and I have 4 drives.  What I've found is that when accessing
> one drive by itself, I get 80MB/s.  When accessing more than one drive, I
> get a total of about 120MB/s, but it's divided by however many drives I'm
> accessing.  So, two drives is roughly 60MB/s each, 3 drives about 40MB/s
> each, and 4 drives about 30MB/s each.
>
Doug,

I hate to ask the obvious, but you do have a 3Gbit/sec connection
between the controller and the PM, right?

I only ask because your 120MB/sec is about right for a 1.5Gbit/sec
connection.  I was under the impression you should max out closer to
250MB/sec with a good controller and PM and a 3Gbit/sec connection.
I have not done any testing myself.

Greg

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 20:28         ` Greg Freemyer
@ 2009-09-15 20:34           ` Doug Ledford
  2009-09-15 20:49             ` Richard Scobie
  0 siblings, 1 reply; 35+ messages in thread
From: Doug Ledford @ 2009-09-15 20:34 UTC (permalink / raw)
  To: Greg Freemyer; +Cc: Majed B., Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 1571 bytes --]

On Sep 15, 2009, at 4:28 PM, Greg Freemyer wrote:
>> I've actually been doing some of those benchmarks here.  Given a  
>> Silicon
>> Image 3124 card in a x1 PCI-e slot, my maximum throughput should be  
>> about
>> 250MB/s (PCI-e limitation).  My drives behind the pm are all  
>> capable of
>> about 80MB/s, and I have 4 drives.  What I've found is that when  
>> accessing
>> one drive by itself, I get 80MB/s.  When accessing more than one  
>> drive, I
>> get a total of about 120MB/s, but it's divided by however many  
>> drives I'm
>> accessing.  So, two drives is roughly 60MB/s each, 3 drives about  
>> 40MB/s
>> each, and 4 drives about 30MB/s each.
>>
> Doug,
>
> I hate to ask the obvious, but you do have a 3Gbit/sec connection
> between the controller and the PM, right?

According to the kernel dmesg output, yes, I have a 3GBit/s  
connection.  However, I had the very same niggling doubts as you, and  
I don't have a SATA bus analyzer to prove it to myself.
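
(For anyone wanting to check their own setup, the negotiated speed
shows up in the libata messages, e.g. something like

  $ dmesg | grep -i 'SATA link up'
  ata5.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

where drives behind a pm get ataX.NN style names - the exact
numbering and values will obviously differ from machine to machine.)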

> I only ask because your 120MB/sec is about right for a 1.5Gbit/sec
> connection.  I was under the impression you should max out closer to
> 250MB / sec with a good controller and PM and a 3.Gbit/sec connection.
> I have not done any testing myself.

I agree with this sentiment 100%.  I don't have a good answer for why  
it topped out where it did, and that's one of the things I'm still  
trying to get an answer to.


--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 20:34           ` Doug Ledford
@ 2009-09-15 20:49             ` Richard Scobie
  2009-09-15 21:29               ` Doug Ledford
  2009-09-15 21:52               ` David Rees
  0 siblings, 2 replies; 35+ messages in thread
From: Richard Scobie @ 2009-09-15 20:49 UTC (permalink / raw)
  To: Doug Ledford; +Cc: Greg Freemyer, Majed B., Drew, Linux RAID Mailing List

Doug Ledford wrote:

> I agree with this sentiment 100%.  I don't have a good answer for why  
> it topped out where it did, and that's one of the things I'm still  
> trying to get an answer to.

I can also confirm this sub-par performance on the Sil 3124 - max
throughput of around 120MB/s.

If your motherboard is able to set the "PCIe Max Payload Size" you
may be able to improve things.

See Note 3 here:

http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Silicon_Image_3124
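
(A quick way to see what the BIOS actually programmed is lspci - the
bus address below is just a placeholder for wherever your controller
sits:

  # lspci -vv -s 03:00.0 | grep -i maxpayload

The DevCap line shows what the card is capable of, and the DevCtl
line shows what was actually set - typically 128 bytes unless the
BIOS option has been changed.)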

Regards,

Richard

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 20:49             ` Richard Scobie
@ 2009-09-15 21:29               ` Doug Ledford
  2009-09-15 23:32                 ` Drew
  2009-09-15 21:52               ` David Rees
  1 sibling, 1 reply; 35+ messages in thread
From: Doug Ledford @ 2009-09-15 21:29 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Greg Freemyer, Majed B., Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 790 bytes --]

On Sep 15, 2009, at 4:49 PM, Richard Scobie wrote:
> Doug Ledford wrote:
>
>> I agree with this sentiment 100%.  I don't have a good answer for  
>> why  it topped out where it did, and that's one of the things I'm  
>> still  trying to get an answer to.
>
> I can also confirm this sub par performance on the Sil 3124 - max  
> throughput of around 120MB/s.
>
> If your motherboard is able to set the "PCIe Max Payload Size" you  
> may be able to improve things.
>
> See Note 3 here:
>
> http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Silicon_Image_3124


Nice, that answers a number of my questions ;-)

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 20:49             ` Richard Scobie
  2009-09-15 21:29               ` Doug Ledford
@ 2009-09-15 21:52               ` David Rees
  2009-09-16  0:31                 ` Doug Ledford
  1 sibling, 1 reply; 35+ messages in thread
From: David Rees @ 2009-09-15 21:52 UTC (permalink / raw)
  To: Richard Scobie
  Cc: Doug Ledford, Greg Freemyer, Majed B., Drew, Linux RAID Mailing List

On Tue, Sep 15, 2009 at 1:49 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Doug Ledford wrote:
>> I agree with this sentiment 100%.  I don't have a good answer for why  it
>> topped out where it did, and that's one of the things I'm still  trying to
>> get an answer to.
>
> I can also confirm this sub par performance on the Sil 3124 - max throughput
> of around 120MB/s.
>
> If your motherboard is able to set the "PCIe Max Payload Size" you may be
> able to improve things.
>
> See Note 3 here:
>
> http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Silicon_Image_3124

Another one here with a Sil3124 and max 120MB/s.

With the port multiplier I've got, I've had to disable NCQ to get
things to behave when accessing multiple drives - otherwise access to
the enclosure would lock up under moderate/heavy concurrent disk
access.

The multiplier appears to be a Sil4726.  The array was built on a
budget so the drives in the multiplier are a mix - some are 1.5Gbps,
some are 3.0Gbps, and not all support NCQ.  Not sure how it behaves
with 100% NCQ-capable drives.
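
(For reference, NCQ can be turned off per drive through sysfs - a
sketch, with sdX standing in for each drive behind the multiplier:

  # echo 1 > /sys/block/sdX/device/queue_depth

A depth of 1 effectively disables queueing; a modest value such as 8
can be put back the same way. It doesn't survive a reboot, so it has
to go into a boot script somewhere.)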

-Dave

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 21:29               ` Doug Ledford
@ 2009-09-15 23:32                 ` Drew
  2009-09-16  1:26                   ` Doug Ledford
  0 siblings, 1 reply; 35+ messages in thread
From: Drew @ 2009-09-15 23:32 UTC (permalink / raw)
  To: Doug Ledford
  Cc: Richard Scobie, Greg Freemyer, Majed B., Linux RAID Mailing List

Thanks for the input.

Sounds from your testing like PMs can deliver the sorts of speeds
that are adequate for our needs. Have you done any testing with md
RAID using member disks from each PM?

Given that we're expecting a mix of online and archival data going
onto this enclosure, I was thinking about making up RAID arrays
composed of disks from each PM for online use and arrays composed of
disks from a single PM for archival use.

I'm sorry if I keep throwing questions out without doing my own
testing. As I alluded to earlier I don't have an R&D budget for
testing so I have to be reasonably sure of my system before I can get
authorization to purchase kit.

-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 21:52               ` David Rees
@ 2009-09-16  0:31                 ` Doug Ledford
  2009-09-16  1:01                   ` Majed B.
  0 siblings, 1 reply; 35+ messages in thread
From: Doug Ledford @ 2009-09-16  0:31 UTC (permalink / raw)
  To: David Rees
  Cc: Richard Scobie, Greg Freemyer, Majed B., Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 1596 bytes --]

On Sep 15, 2009, at 5:52 PM, David Rees wrote:
> On Tue, Sep 15, 2009 at 1:49 PM, Richard Scobie  
> <richard@sauce.co.nz> wrote:
>> Doug Ledford wrote:
>>> I agree with this sentiment 100%.  I don't have a good answer for  
>>> why  it
>>> topped out where it did, and that's one of the things I'm still   
>>> trying to
>>> get an answer to.
>>
>> I can also confirm this sub par performance on the Sil 3124 - max  
>> throughput
>> of around 120MB/s.
>>
>> If your motherboard is able to set the "PCIe Max Payload Size" you  
>> may be
>> able to improve things.
>>
>> See Note 3 here:
>>
>> http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Silicon_Image_3124
>
> Another one here with a Sil3124 and max 120MB/s.
>
> With the port multiplier I've got, I've had to disable NCQ to get
> things to behave when accessing multiple drives - otherwise access to
> the enclosure would lock up under moderate/heavy concurrent disk
> access.
>
> The multipler appears to be a Sil4726.  The array was built on a
> budget so the drives in the multiplier are a mix of drives - some are
> 1.5Mbps, some are 3.0Mbps and not all support NCQ.  Not sure how it
> behaves with 100% NCQ capable drives.


My port multiplier is a Sil3726, so very similar.  However, my drives  
are all more or less identical and are all NCQ capable.  I've been  
able to beat on them for days at a time under non-stop load and not  
had a problem.

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16  0:31                 ` Doug Ledford
@ 2009-09-16  1:01                   ` Majed B.
  2009-09-16  1:28                     ` Doug Ledford
  0 siblings, 1 reply; 35+ messages in thread
From: Majed B. @ 2009-09-16  1:01 UTC (permalink / raw)
  To: Doug Ledford
  Cc: David Rees, Richard Scobie, Greg Freemyer, Drew, Linux RAID Mailing List

I think someone mentioned on the mailing list that the Linux kernel
does sort commands before sending them to the disks, so if the disk
tries to sort as well and its algorithm isn't that good, performance
drops - hence disabling NCQ is a good idea. I believe it's also
mentioned here: http://linux-raid.osdl.org/index.php/Performance
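
(If you want to check which elevator a disk is using, or switch it,
it's in sysfs - sdX is a placeholder:

  $ cat /sys/block/sdX/queue/scheduler
  noop anticipatory deadline [cfq]
  # echo deadline > /sys/block/sdX/queue/scheduler

The bracketed entry is the active one. Just a sketch of the
mechanism, not a recommendation either way.)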

On Wed, Sep 16, 2009 at 3:31 AM, Doug Ledford <dledford@redhat.com> wrote:
> On Sep 15, 2009, at 5:52 PM, David Rees wrote:
>>
>> On Tue, Sep 15, 2009 at 1:49 PM, Richard Scobie <richard@sauce.co.nz>
>> wrote:
>>>
>>> Doug Ledford wrote:
>>>>
>>>> I agree with this sentiment 100%.  I don't have a good answer for why
>>>>  it
>>>> topped out where it did, and that's one of the things I'm still  trying
>>>> to
>>>> get an answer to.
>>>
>>> I can also confirm this sub par performance on the Sil 3124 - max
>>> throughput
>>> of around 120MB/s.
>>>
>>> If your motherboard is able to set the "PCIe Max Payload Size" you may be
>>> able to improve things.
>>>
>>> See Note 3 here:
>>>
>>>
>>> http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Silicon_Image_3124
>>
>> Another one here with a Sil3124 and max 120MB/s.
>>
>> With the port multiplier I've got, I've had to disable NCQ to get
>> things to behave when accessing multiple drives - otherwise access to
>> the enclosure would lock up under moderate/heavy concurrent disk
>> access.
>>
>> The multipler appears to be a Sil4726.  The array was built on a
>> budget so the drives in the multiplier are a mix of drives - some are
>> 1.5Mbps, some are 3.0Mbps and not all support NCQ.  Not sure how it
>> behaves with 100% NCQ capable drives.
>
>
> My port multiplier is a Sil3726, so very similar.  However, my drives are
> all more or less identical and are all NCQ capable.  I've been able to beat
> on them for days at a time under non-stop load and not had a problem.
>
> --
>
> Doug Ledford <dledford@redhat.com>
>
> GPG KeyID: CFBFF194
> http://people.redhat.com/dledford
>
> InfiniBand Specific RPMS
> http://people.redhat.com/dledford/Infiniband
>
>
>
>
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 23:32                 ` Drew
@ 2009-09-16  1:26                   ` Doug Ledford
  0 siblings, 0 replies; 35+ messages in thread
From: Doug Ledford @ 2009-09-16  1:26 UTC (permalink / raw)
  To: Drew; +Cc: Richard Scobie, Greg Freemyer, Majed B., Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 1977 bytes --]

On Sep 15, 2009, at 7:32 PM, Drew wrote:
> Thanks for the input.
>
> Sounds from your testing like PMs can deliver the sorts of speeds that
> are adequate for our needs.

Given a decent SATA port (like the Sil3132) and a motherboard that  
allows this port to run at full speed (one that can up the max PCI-e  
payload limit above 128 bytes), then yes.  I'm sure other port types  
out there would work without the PCI-e requirement, but I don't have  
any of those for testing.

> Have you done any testing as far as md
> RAID using member disks from each PM?

I've only got one PM at the moment.

> Given we're expecting a mix of online and archival data going onto
> this enclosure I was thinking about making up RAID arrays composed of
> disks from each PM for online use and arrays composed of disks from a
> PM for archival use.

Well, I might suggest something like doing 3 disks from each PM as
part of the archive and one disk from each PM as the online storage.
In a scenario like that, the archive system will run slower than the
online system, but as long as the online and archive systems aren't
fighting for bandwidth at the same time, the online system will get
full speed (assuming the online disks can't go faster than 120MB/s
each).  And that's without having to find a motherboard that lets you
set the PCI-e payload size.  Another option is for the online disks
to be internal disks, with at most one disk per PM.  That array would
be blindingly fast.  You can then make the archive array(s) use the
remaining PM-connected drives.
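
(Just to make that layout concrete - the device names below are
completely made up, assuming three PMs that enumerate as sdb-sde,
sdf-sdi and sdj-sdm:

  # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdf /dev/sdj
  # mdadm --create /dev/md1 --level=5 --raid-devices=9 /dev/sd[cde] /dev/sd[ghi] /dev/sd[klm]

md0 takes one disk from each PM as the "online" array and md1 takes
the remaining nine as the "archive" array.)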

> I'm sorry if I keep throwing questions out without doing my own
> testing. As I alluded to earlier I don't have an R&D budget for
> testing so I have to be reasonably sure of my system before I can get
> authorization to purchase kit.


--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16  1:01                   ` Majed B.
@ 2009-09-16  1:28                     ` Doug Ledford
  2009-09-16  1:45                       ` Majed B.
  0 siblings, 1 reply; 35+ messages in thread
From: Doug Ledford @ 2009-09-16  1:28 UTC (permalink / raw)
  To: Majed B.
  Cc: David Rees, Richard Scobie, Greg Freemyer, Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 915 bytes --]

On Sep 15, 2009, at 9:01 PM, Majed B. wrote:
> I think someone mentioned in the mailing list that the Linux kernel
> does sort commands before sending them to the disks, so if the disk
> tries to sort, and its algorithm isn't that good, the performance
> drops and hence disabling them is a good idea. I believe it's also
> mentioned in here: http://linux-raid.osdl.org/index.php/Performance


It depends on the elevator in use.  And regardless, I have yet to see  
a raid5 array ever perform better with queueing turned off instead of  
on.  Although, in many cases, very large queue depths don't help  
much.  Testing I've done showed that only a 4 to 8 queue depth is  
sufficient to get 95% or better of the performance benefit of queueing.

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16  1:28                     ` Doug Ledford
@ 2009-09-16  1:45                       ` Majed B.
  2009-09-16 11:11                         ` Tom Carlson
  2009-09-16 14:25                         ` Doug Ledford
  0 siblings, 2 replies; 35+ messages in thread
From: Majed B. @ 2009-09-16  1:45 UTC (permalink / raw)
  To: Doug Ledford
  Cc: David Rees, Richard Scobie, Greg Freemyer, Drew, Linux RAID Mailing List

Regarding payloads, I've recently bought an EVGA motherboard off
Newegg for $120 and it supports upping the payload to 4096 bytes.

Newegg link: http://www.newegg.com/Product/Product.aspx?Item=N82E16813188035
Manual guide: http://www.evga.com/support/manuals/files/113-YW-E115.pdf

The motherboard above has 8 SATA ports, built-in VGA (256MB, if you
care), 1x Gbit LAN, 4x RAM DIMMs and a few more options. I use it for
my primary array: 8x1TB disks.

ASUS gaming motherboards allow changing the payload as well.

On Wed, Sep 16, 2009 at 4:28 AM, Doug Ledford <dledford@redhat.com> wrote:
> On Sep 15, 2009, at 9:01 PM, Majed B. wrote:
>>
>> I think someone mentioned in the mailing list that the Linux kernel
>> does sort commands before sending them to the disks, so if the disk
>> tries to sort, and its algorithm isn't that good, the performance
>> drops and hence disabling them is a good idea. I believe it's also
>> mentioned in here: http://linux-raid.osdl.org/index.php/Performance
>
>
> It depends on the elevator in use.  And regardless, I have yet to see a
> raid5 array ever perform better with queueing turned off instead of on.
>  Although, in many cases, very large queue depths don't help much.  Testing
> I've done showed that only a 4 to 8 queue depth is sufficient to get 95% or
> better of the performance benefit of queueing.
>
> --
>
> Doug Ledford <dledford@redhat.com>
>
> GPG KeyID: CFBFF194
> http://people.redhat.com/dledford
>
> InfiniBand Specific RPMS
> http://people.redhat.com/dledford/Infiniband
>
>
>
>
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16  1:45                       ` Majed B.
@ 2009-09-16 11:11                         ` Tom Carlson
  2009-09-16 14:28                           ` Doug Ledford
  2009-09-16 14:25                         ` Doug Ledford
  1 sibling, 1 reply; 35+ messages in thread
From: Tom Carlson @ 2009-09-16 11:11 UTC (permalink / raw)
  To: Majed B.
  Cc: Doug Ledford, David Rees, Richard Scobie, Greg Freemyer, Drew,
	Linux RAID Mailing List

Hi,

I've had a slightly bad experience with port multipliers. I have a
PCI-e x1 JMB362 on the host end and a SiI 3726 connected to it (I
think - it's a 1-to-5 PM). I have 5 disks connected in RAID5 and get
some fairly appalling write speeds, well below what I'd expect even
for RAID5 writes. Reads are fairly slow too...

$ dd if=/dev/zero of=./blah bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 47.4814 s, 11.3 MB/s

$ dd if=./bigfile.iso of=/dev/null
8474857+0 records in
8474857+0 records out
4339126272 bytes (4.3 GB) copied, 144.667 s, 30.0 MB/s

Obviously this isn't the most scientific of tests... :-) but it does
show slowness with this particular combination.
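
(A slightly more controlled version of the same tests - roughly:

  $ dd if=/dev/zero of=./blah bs=1M count=512 conv=fdatasync
  $ dd if=./bigfile.iso of=/dev/null bs=1M iflag=direct

conv=fdatasync makes the write test include the flush to disk, and
bs=1M on the read avoids dd's 512-byte default block size.)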

I'm tempted to go buy a SiI 3132 based controller and compare the results.

T


2009/9/16 Majed B. <majedb@gmail.com>:
> Regarding payloads, I've recently bought an EVGA motherboard off
> newegg for $120 and it supports upping the payload to 4096 bytes.
>
> Newegg link: http://www.newegg.com/Product/Product.aspx?Item=N82E16813188035
> Manual guide: http://www.evga.com/support/manuals/files/113-YW-E115.pdf
>
> The motherboard above has 8 SATA ports, built-in VGA (256MB, if you
> care), 1x Gbit LAN, 4x RAM DIMMs and a few more options. I use it for
> my primary array: 8x1TB disks.
>
> ASUS gaming motherboards allow changing the payload as well.
>
> On Wed, Sep 16, 2009 at 4:28 AM, Doug Ledford <dledford@redhat.com> wrote:
>> On Sep 15, 2009, at 9:01 PM, Majed B. wrote:
>>>
>>> I think someone mentioned in the mailing list that the Linux kernel
>>> does sort commands before sending them to the disks, so if the disk
>>> tries to sort, and its algorithm isn't that good, the performance
>>> drops and hence disabling them is a good idea. I believe it's also
>>> mentioned in here: http://linux-raid.osdl.org/index.php/Performance
>>
>>
>> It depends on the elevator in use.  And regardless, I have yet to see a
>> raid5 array ever perform better with queueing turned off instead of on.
>>  Although, in many cases, very large queue depths don't help much.  Testing
>> I've done showed that only a 4 to 8 queue depth is sufficient to get 95% or
>> better of the performance benefit of queueing.
>>
>> --
>>
>> Doug Ledford <dledford@redhat.com>
>>
>> GPG KeyID: CFBFF194
>> http://people.redhat.com/dledford
>>
>> InfiniBand Specific RPMS
>> http://people.redhat.com/dledford/Infiniband
>>
>>
>>
>>
>>
>
>
>
> --
>       Majed B.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16  1:45                       ` Majed B.
  2009-09-16 11:11                         ` Tom Carlson
@ 2009-09-16 14:25                         ` Doug Ledford
  2009-09-16 16:44                           ` Majed B.
  1 sibling, 1 reply; 35+ messages in thread
From: Doug Ledford @ 2009-09-16 14:25 UTC (permalink / raw)
  To: Majed B.
  Cc: David Rees, Richard Scobie, Greg Freemyer, Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 1040 bytes --]

On Sep 15, 2009, at 9:45 PM, Majed B. wrote:
> Regarding payloads, I've recently bought an EVGA motherboard off
> newegg for $120 and it supports upping the payload to 4096 bytes.
>
> Newegg link: http://www.newegg.com/Product/Product.aspx?Item=N82E16813188035
> Manual guide: http://www.evga.com/support/manuals/files/113-YW- 
> E115.pdf
>
> The motherboard above has 8 SATA ports, built-in VGA (256MB, if you
> care), 1x Gbit LAN, 4x RAM DIMMs and a few more options. I use it for
> my primary array: 8x1TB disks.
>
> ASUS gaming motherboards allow changing the payload as well.


So far I've not found a single motherboard that supports this *and*
uses AMD CPUs.  This appears to be an Intel-only feature.  I'm sure
my manager would prefer it if I could get this performance upgrade
with only a motherboard swap instead of a motherboard, CPU, and
possibly RAM swap.

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16 11:11                         ` Tom Carlson
@ 2009-09-16 14:28                           ` Doug Ledford
  2009-09-16 14:51                             ` Greg Freemyer
  2009-09-16 15:35                             ` Leslie Rhorer
  0 siblings, 2 replies; 35+ messages in thread
From: Doug Ledford @ 2009-09-16 14:28 UTC (permalink / raw)
  To: Tom Carlson
  Cc: Majed B.,
	David Rees, Richard Scobie, Greg Freemyer, Drew,
	Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 1117 bytes --]

On Sep 16, 2009, at 7:11 AM, Tom Carlson wrote:
> Hi,
>
> I've had a slightly bad experience with port multipliers. I have a
> PCI-e x1 JMB362 on the host end and a SiI 3726 connected to it. (I
> think. It's a 1-5 PM). I have 5 disks connected in raid5 and get some
> fairly appalling write speeds, well below what I'd expect even for
> raid5 writes. Reads too are fairly slow...
>
> $ dd if=/dev/zero of=./blah bs=1M count=512
> 512+0 records in
> 512+0 records out
> 536870912 bytes (537 MB) copied, 47.4814 s, 11.3 MB/s
>
> $ dd if=./bigfile.iso of=/dev/null
> 8474857+0 records in
> 8474857+0 records out
> 4339126272 bytes (4.3 GB) copied, 144.667 s, 30.0 MB/s
>
> Obviously this isn't the most scientific of tests... :-) but it does
> show slowness with this particular combination.
>
> I'm tempted to go buy a SiI 3132 based controller and compare the  
> results.


I would, those numbers look *really* bad compared to what they could be.

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16 14:28                           ` Doug Ledford
@ 2009-09-16 14:51                             ` Greg Freemyer
  2009-09-16 18:02                               ` Tom Carlson
  2009-09-16 15:35                             ` Leslie Rhorer
  1 sibling, 1 reply; 35+ messages in thread
From: Greg Freemyer @ 2009-09-16 14:51 UTC (permalink / raw)
  To: Doug Ledford
  Cc: Tom Carlson, Majed B.,
	David Rees, Richard Scobie, Drew, Linux RAID Mailing List

On Wed, Sep 16, 2009 at 10:28 AM, Doug Ledford <dledford@redhat.com> wrote:
> On Sep 16, 2009, at 7:11 AM, Tom Carlson wrote:
>>
>> Hi,
>>
>> I've had a slightly bad experience with port multipliers. I have a
>> PCI-e x1 JMB362 on the host end and a SiI 3726 connected to it. (I
>> think. It's a 1-5 PM). I have 5 disks connected in raid5 and get some
>> fairly appalling write speeds, well below what I'd expect even for
>> raid5 writes. Reads too are fairly slow...
>>
>> $ dd if=/dev/zero of=./blah bs=1M count=512
>> 512+0 records in
>> 512+0 records out
>> 536870912 bytes (537 MB) copied, 47.4814 s, 11.3 MB/s
>>
>> $ dd if=./bigfile.iso of=/dev/null
>> 8474857+0 records in
>> 8474857+0 records out
>> 4339126272 bytes (4.3 GB) copied, 144.667 s, 30.0 MB/s
>>
>> Obviously this isn't the most scientific of tests... :-) but it does
>> show slowness with this particular combination.
>>
>> I'm tempted to go buy a SiI 3132 based controller and compare the results.
>
>
> I would, those numbers look *really* bad compared to what they could be.
>
> --
>
> Doug Ledford <dledford@redhat.com>
>

The wiki at <http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Hardware_support>
has at least a couple comments about PMP throughput.

If there is not a better place, maybe that wiki could have a PMP
section added and slowly grow into a good source of info for
building a PMP-based solution.

Greg

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-15 17:56       ` Doug Ledford
  2009-09-15 18:12         ` Majed B.
  2009-09-15 20:28         ` Greg Freemyer
@ 2009-09-16 15:34         ` John Robinson
  2009-09-16 16:21           ` Doug Ledford
  2 siblings, 1 reply; 35+ messages in thread
From: John Robinson @ 2009-09-16 15:34 UTC (permalink / raw)
  To: Doug Ledford; +Cc: Linux RAID Mailing List

On 15/09/2009 18:56, Doug Ledford wrote:
> On Sep 10, 2009, at 2:44 PM, Majed B. wrote:
>> The maximum throughput you'll get is the PCI bus's speed. Make sure to
>> note which version your server has.
>>
>> The silicon image controller will be your bottleneck here, but I don't
>> have any numbers to say how much of a loss you'll be at. You'd have to
>> search around for those who already benchmarked their systems, or
>> buy/request a card to test it out.
> 
> I've actually been doing some of those benchmarks here.  Given a Silicon 
> Image 3124 card in a x1 PCI-e slot, my maximum throughput should be 
> about 250MB/s (PCI-e limitation).  My drives behind the pm are all 
> capable of about 80MB/s, and I have 4 drives.  What I've found is that 
> when accessing one drive by itself, I get 80MB/s.  When accessing more 
> than one drive, I get a total of about 120MB/s, but it's divided by 
> however many drives I'm accessing.  So, two drives is roughly 60MB/s 
> each, 3 drives about 40MB/s each, and 4 drives about 30MB/s each.
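
(That kind of per-drive scaling is straightforward to reproduce with
parallel raw reads -- a rough sketch, run as root, with sdb..sde standing
in for whatever drives actually sit behind the multiplier:

$ for d in sdb sdc sdd sde; do dd if=/dev/$d of=/dev/null bs=1M count=1024 & done; wait

Reading straight from the block devices keeps the filesystem and RAID
layers out of the numbers; each dd prints its own rate, and the sum is
the aggregate the host link actually delivered.)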

Were you using a SiI3124-1 (1.5Gbps, they claim 150MB/s) or SiI3124-2 
(3Gbps/300MB/s)? What throughput can you get using all 4 channels of the 
SiI3124 simultaneously, not using the port multiplier - does that top 
out at 120MB/s too?

And have you done similar testing of your port multiplier hanging off a 
motherboard SATA port? Do you get anywhere nearer the 3Gbps?

Just interested...

Cheers,

John.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: Port Multipliers
  2009-09-16 14:28                           ` Doug Ledford
  2009-09-16 14:51                             ` Greg Freemyer
@ 2009-09-16 15:35                             ` Leslie Rhorer
  1 sibling, 0 replies; 35+ messages in thread
From: Leslie Rhorer @ 2009-09-16 15:35 UTC (permalink / raw)
  To: linux-raid



> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Doug Ledford
> Sent: Wednesday, September 16, 2009 9:28 AM
> To: Tom Carlson
> Cc: Majed B.; David Rees; Richard Scobie; Greg Freemyer; Drew; Linux RAID
> Mailing List
> Subject: Re: Port Multipliers
> 
> On Sep 16, 2009, at 7:11 AM, Tom Carlson wrote:
> > Hi,
> >
> > I've had a slightly bad experience with port multipliers. I have a
> > PCI-e x1 JMB362 on the host end and a SiI 3726 connected to it. (I
> > think. It's a 1-5 PM). I have 5 disks connected in raid5 and get some
> > fairly appalling write speeds, well below what I'd expect even for
> > raid5 writes. Reads too are fairly slow...
> >
> > $ dd if=/dev/zero of=./blah bs=1M count=512
> > 512+0 records in
> > 512+0 records out
> > 536870912 bytes (537 MB) copied, 47.4814 s, 11.3 MB/s
> >
> > $ dd if=./bigfile.iso of=/dev/null
> > 8474857+0 records in
> > 8474857+0 records out
> > 4339126272 bytes (4.3 GB) copied, 144.667 s, 30.0 MB/s
> >
> > Obviously this isn't the most scientific of tests... :-) but it does
> > show slowness with this particular combination.
> >
> > I'm tempted to go buy a SiI 3132 based controller and compare the
> > results.
> 
> 
> I would, those numbers look *really* bad compared to what they could be.

	Um, yeah. No kidding.  I did the same tests on a very "low rent"
system using a mid-range Asus / AMD 64 X2 motherboard and a $45 three-port
Chinese clone SiI 3124 interface card feeding a ten-disk RAID6 array:

RAID-Server:/RAID/Server-Main/Temp# dd if=/dev/zero of=./blah bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 16.0076 s, 33.5 MB/s
RAID-Server:/RAID/Server-Main/Temp# dd if=Test_HD.TiVo of=/dev/null
3952838+1 records in
3952838+1 records out
2023853135 bytes (2.0 GB) copied, 19.4791 s, 104 MB/s

	And from cached data:

RAID-Server:/RAID/Server-Main/Temp# dd if=./blah of=/dev/null
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 1.51728 s, 354 MB/s

	Doing ordinary daily rsync backups between two similar systems
across a 1000BaseT LAN I regularly hit peaks of 75 MB/s with sustained rates
well above 50 MB/s.
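
(For a quick per-spindle baseline that bypasses the filesystem and RAID
layers entirely, hdparm's timing test is handy -- sdb below is just a
placeholder for one of the member drives:

# hdparm -t /dev/sdb

That reports buffered sequential reads straight off the drive, which gives
a useful number to compare the array and port-multiplier figures against.)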


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16 15:34         ` John Robinson
@ 2009-09-16 16:21           ` Doug Ledford
  0 siblings, 0 replies; 35+ messages in thread
From: Doug Ledford @ 2009-09-16 16:21 UTC (permalink / raw)
  To: John Robinson; +Cc: Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 1387 bytes --]

On Sep 16, 2009, at 11:34 AM, John Robinson wrote:
> Were you using a SiI3124-1 (1.5Gbps, they claim 150MB/s) or  
> SiI3124-2 (3Gbps/300MB/s)?

Actually, it's a 3132 (which is just the PCI-e version of the 3124-2).

> What throughput can you get using all 4 channels of the SiI3124  
> simultaneously, not using the port multiplier - does that top out at  
> 120MB/s too?

It's only got two ports enabled, and they're both eSATA ports.  So, I  
can't really answer that question.  However, given that these are  
known to top out at 120MB/s in a PCI-e slot that doesn't support  
payload size increase, I would guess it would.

> And have you done similar testing of your port multiplier hanging  
> off a motherboard SATA port? Do you get anywhere nearer the 3Gbps?


Yes, I tested an eSATA port on one motherboard and an internal SATA  
port to eSATA setup on another motherboard.  In both cases, the  
motherboards used the ahci driver for the port under test, and in both  
cases the ahci ports didn't support FIS-based switching.  As a
result, they both performed even worse, capping out at around 80MB/s.
FIS-based switching is more or less mandatory if you want good
performance out of a port multiplier link.
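
(On reasonably recent kernels the ahci driver advertises this in its
boot-time capability line, so a quick way to check a given board is
something along the lines of:

$ dmesg | grep -i 'ahci.*flags'

looking for "pmp" (port multiplier support at all) and "fbs" (FIS-based
switching) in the flags list.  If those are missing, drives behind a
multiplier get talked to one at a time, which matches the 80MB/s wall
described above.)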

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16 14:25                         ` Doug Ledford
@ 2009-09-16 16:44                           ` Majed B.
  2009-09-16 16:56                             ` Doug Ledford
  0 siblings, 1 reply; 35+ messages in thread
From: Majed B. @ 2009-09-16 16:44 UTC (permalink / raw)
  To: Doug Ledford
  Cc: David Rees, Richard Scobie, Greg Freemyer, Drew, Linux RAID Mailing List

Doug,

This may answer your question:
http://forums.amd.com/forum/messageview.cfm?catid=203&threadid=117391
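
(For anyone who wants to check their own hardware: the supported and
currently programmed values show up in lspci's PCI Express capability
dump, roughly:

# lspci -vvv | grep -i maxpayload

The DevCap line shows what the device is capable of, and the DevCtl value
is what the BIOS actually programmed -- 128 bytes being the usual
conservative default.)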

On Wed, Sep 16, 2009 at 5:25 PM, Doug Ledford <dledford@redhat.com> wrote:
> On Sep 15, 2009, at 9:45 PM, Majed B. wrote:
>>
>> Regarding payloads, I've recently bought an EVGA motherboard off
>> newegg for $120 and it supports upping the payload to 4096 bytes.
>>
>> Newegg link:
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16813188035
>> Manual guide: http://www.evga.com/support/manuals/files/113-YW-E115.pdf
>>
>> The motherboard above has 8 SATA ports, built-in VGA (256MB, if you
>> care), 1x Gbit LAN, 4x RAM DIMMs and a few more options. I use it for
>> my primary array: 8x1TB disks.
>>
>> ASUS gaming motherboards allow changing the payload as well.
>
>
> So far I've not found a single motherboard that supports this *and* uses AMD
> CPUs.  This appears to be an Intel feature only.  I'm sure my manager would
> prefer if I can get this performance upgrade with only a motherboard swap
> instead of a motherboard, CPU, and possibly RAM swap.
>
> --
>
> Doug Ledford <dledford@redhat.com>
>
> GPG KeyID: CFBFF194
> http://people.redhat.com/dledford
>
> InfiniBand Specific RPMS
> http://people.redhat.com/dledford/Infiniband
>
>
>
>
>



-- 
       Majed B.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16 16:44                           ` Majed B.
@ 2009-09-16 16:56                             ` Doug Ledford
  0 siblings, 0 replies; 35+ messages in thread
From: Doug Ledford @ 2009-09-16 16:56 UTC (permalink / raw)
  To: Majed B.
  Cc: David Rees, Richard Scobie, Greg Freemyer, Drew, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 365 bytes --]

On Sep 16, 2009, at 12:44 PM, Majed B. wrote:
> Doug,
>
> This may answer your question:
> http://forums.amd.com/forum/messageview.cfm?catid=203&threadid=117391


That certainly does, thank you.

--

Doug Ledford <dledford@redhat.com>

GPG KeyID: CFBFF194
http://people.redhat.com/dledford

InfiniBand Specific RPMS
http://people.redhat.com/dledford/Infiniband





[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 203 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Port Multipliers
  2009-09-16 14:51                             ` Greg Freemyer
@ 2009-09-16 18:02                               ` Tom Carlson
  0 siblings, 0 replies; 35+ messages in thread
From: Tom Carlson @ 2009-09-16 18:02 UTC (permalink / raw)
  To: Greg Freemyer
  Cc: Doug Ledford, Majed B.,
	David Rees, Richard Scobie, Drew, Linux RAID Mailing List

> The wiki at <http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Hardware_support>
> has at least a couple comments about PMP throughput.

Well, the JMB360/362 are, according to that wiki page, AHCI-flavour
devices. Perhaps somebody else with another AHCI-driven, PM-capable
controller could do some tests too?

I have had a quick scout around the internet; a 2-port eSATA SiI 3132
(PCI-e x1) seems to cost around £20. If it can do 120MB/s instead of
20MB/s then I think it really would be worth getting :)
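
(If it helps when comparing notes: which libata driver a given controller
ends up with -- and so which row of that wiki table applies -- shows up
with a recent pciutils via

$ lspci -k | grep -iA2 sata

as the "Kernel driver in use" line, e.g. ahci for the JMicron parts in
AHCI mode, sata_sil24 for the 3124/3132, or sata_sil for the older
3112/3114.)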

T
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2009-09-16 18:02 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-09-10 12:56 Port Multipliers Drew
2009-09-10 16:11 ` Majed B.
2009-09-10 18:14   ` Drew
2009-09-10 18:32     ` Majed B.
2009-09-10 18:48       ` Drew
2009-09-10 18:53         ` Majed B.
2009-09-10 19:14     ` Richard Scobie
2009-09-10 18:35   ` Drew
2009-09-10 18:44     ` Majed B.
2009-09-15 17:56       ` Doug Ledford
2009-09-15 18:12         ` Majed B.
2009-09-15 19:55           ` Doug Ledford
2009-09-15 20:08             ` Majed B.
2009-09-15 20:28         ` Greg Freemyer
2009-09-15 20:34           ` Doug Ledford
2009-09-15 20:49             ` Richard Scobie
2009-09-15 21:29               ` Doug Ledford
2009-09-15 23:32                 ` Drew
2009-09-16  1:26                   ` Doug Ledford
2009-09-15 21:52               ` David Rees
2009-09-16  0:31                 ` Doug Ledford
2009-09-16  1:01                   ` Majed B.
2009-09-16  1:28                     ` Doug Ledford
2009-09-16  1:45                       ` Majed B.
2009-09-16 11:11                         ` Tom Carlson
2009-09-16 14:28                           ` Doug Ledford
2009-09-16 14:51                             ` Greg Freemyer
2009-09-16 18:02                               ` Tom Carlson
2009-09-16 15:35                             ` Leslie Rhorer
2009-09-16 14:25                         ` Doug Ledford
2009-09-16 16:44                           ` Majed B.
2009-09-16 16:56                             ` Doug Ledford
2009-09-16 15:34         ` John Robinson
2009-09-16 16:21           ` Doug Ledford
2009-09-10 18:45     ` Mikael Abrahamsson
