From: Stan Hoeppner <stan@hardwarefreak.com>
To: xfs@oss.sgi.com
Subject: Re: A little RAID experiment
Date: Wed, 18 Jul 2012 22:08:36 -0500	[thread overview]
Message-ID: <50077A34.5070304@hardwarefreak.com> (raw)
In-Reply-To: <CAAxjCEzF3nTFoedyKf1o5Nv4yPUJkgvC8nCJcx_2dDx8xqWtWA@mail.gmail.com>

Sorry for any potential dups.  Mail log shows this msg was accepted 3.5
hours ago but it hasn't spit back to me yet and no bounce.  Resending.

On 7/18/2012 7:37 AM, Stefan Ring wrote:
>> At least I have some multi-threaded results from the other two machines:
>>
>> LSI:
>>
>> 4 threads
>>
>> [   2s] reads: 0.00 MB/s writes: 63.08 MB/s fsyncs: 0.00/s response
>> time: 0.452ms (95%)
>> [   4s] reads: 0.00 MB/s writes: 34.26 MB/s fsyncs: 0.00/s response
>> time: 1.660ms (95%)
> 
> And because of the bad formatting:
> https://github.com/Ringdingcoder/sysbench/blob/master/mail2.txt

And this is why people publishing real, usable benchmark results
publish the full specs of the hardware/software environment being
tested.  I think I've mentioned once or twice how critical
accurate/complete information is.

Looking at the table linked above, two things become clear:

1.  The array spindle config of the 3 systems is wildly different.

   a.  P400  = 6x  10K  SAS  RAID6
   b.  P2000 = 12x 7.2k SATA RAID6
   c.  LSI   = unknown

2.  The LSI outperforms the other two by a wide margin, yet we know
nothing of the disks attached.  At first blush, and assuming the disk
config is similar to the other two systems, the controller firmware
*appears* to perform magic.  But without knowing the spindle config of
the LSI we simply can't draw any conclusions yet.

This benchmark test seems to involve little or no metadata IO, thus few
RMW cycles, so RAID6 doesn't kill us.  If the LSI has the common 24
bay 2.5" JBOD shelf attached, with 2 spares and 22x 15K SAS drives (20
stripe spindles) in RAID6, this alone may fully explain the performance
gap, as it would give roughly 6.7x the seek performance of the 6x 10k
drives (4 spindles) in RAID6 on the P400, and 4x the seek performance
of the 12 disks (10 spindles) of the P2000.
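For a rough sanity check, the spindle arithmetic above can be sketched
like this.  The per-drive random IOPS figures are rules of thumb I'm
assuming here (~75 for 7.2k SATA, ~150 for 10k SAS, ~200 for 15k SAS),
not measured values, and the LSI config is hypothetical:

```python
# Back-of-envelope aggregate random-seek capacity per array.
# Per-drive IOPS values are assumed rules of thumb, not measurements.
DRIVE_IOPS = {"7.2k": 75, "10k": 150, "15k": 200}

def array_iops(stripe_spindles, rpm_class):
    # RAID6 stripe spindles = total drives minus 2 parity drives
    # (and minus any hot spares).
    return stripe_spindles * DRIVE_IOPS[rpm_class]

p400  = array_iops(4,  "10k")   # 6x 10k RAID6  -> 4 stripe spindles
p2000 = array_iops(10, "7.2k")  # 12x 7.2k RAID6 -> 10 stripe spindles
lsi   = array_iops(20, "15k")   # hypothetical 22x 15k RAID6 -> 20 spindles

print(f"P400 ~{p400}, P2000 ~{p2000}, LSI ~{lsi} IOPS")
print(f"LSI/P400 ratio: ~{lsi / p400:.1f}x")   # ~6.7x
```

With those assumed figures the LSI-to-P400 ratio lands right around the
6.7x mentioned above, but the exact numbers matter far less than the
spindle counts.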

Given the results for the P2000, it seems clear that the LUN you're
hitting is not striped across 10 spindles.  It would seem that the 12
drives have been split up into two or more RAID arrays, probably 2x 6
drive RAID6s, and your test LUN sits on one of them, yielding 4x 7.2k
stripe spindles.  If it spanned 10 of 12 drives in a RAID6, it shouldn't
stall as shown in your data.  The "tell" here is that the P2000 with 10
7.2k drives would have 1.7x the seek performance of the 4 spindles in
your P400, yet the P400 outruns the P2000 once cache is full.  The
P2000 controller has over 4x the write cache of the P400, which is
clearly demonstrated in your data:

From 2s to 8s, the P2000 averages ~25MB/s throughput with sub-10ms
latency.  At 10s and up, latency jumps to multiple *seconds* and
throughput drops to "zero".  This clearly shows that when the cache is
full and must flush, the drives are simply overwhelmed.  10x 7.2k
striped SATA spindles would not perform this badly.  Thus it seems
clear your LUN sits on only 4 of the 12 spindles.
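The ~10s stall point is consistent with simple cache-fill arithmetic.
The cache size below is purely hypothetical (I don't know the actual
P2000 cache allocation), and the drain rate is assumed near zero while
the spindles thrash on random writes:

```python
def cache_fill_seconds(cache_mb, ingest_mbps, drain_mbps):
    # Time until a write-back cache fills when writes arrive
    # faster than the array can drain them to disk.
    if ingest_mbps <= drain_mbps:
        return float("inf")   # drain keeps up; cache never fills
    return cache_mb / (ingest_mbps - drain_mbps)

# Hypothetical numbers: 256MB usable write cache, ~25MB/s observed
# ingest, near-zero drain on a seek-bound random-write workload.
print(cache_fill_seconds(256, 25, 0))   # ~10s
```

A fill time on the order of 10 seconds under those assumptions lines up
with the cliff in your data, which is why the stall points to cache
exhaustion rather than anything filesystem-side.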

The cached performance of the P2000 is about 50% of the LSI, even
though the LSI has only a quarter of the cache memory.  This could be
due to cache mirroring between the two P2000 controllers eating 50% of
the cache RAM bandwidth.

So in summary, it would be nice to know the disk config of the LSI.
Once we have complete hardware information, it may well turn out that
the bulk of the performance difference simply comes down to what disks
are attached to each controller.  BTW, you provided lspci output for
the chip on the RAID card.  Please provide the actual model# of the LSI
card.  Dozens of LSI and OEM cards on the market have used the SAS1078
ASIC.  The card you have may not even be an LSI-branded card, or may be
embedded on the motherboard.  We can't tell from the info given.

The devil is always in the details, Stefan. ;)

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

