From: Joe Landman <landman@scalableinformatics.com>
To: xfs@oss.sgi.com
Subject: Re: A little RAID experiment
Date: Fri, 27 Apr 2012 11:28:53 -0400
Message-ID: <4F9ABB35.4020806@scalableinformatics.com>
In-Reply-To: <CAAxjCExzLXGMw2O32kR8xGS6-EpfwfKm-zkM0Vy58bryWygZuQ@mail.gmail.com>

On 04/26/2012 04:53 AM, Stefan Ring wrote:
> I just want to stress that our machine with the SmartArray controller
> is not a cheap old dusty leftover, but a recently-bought (December
> 2011) not exactly cheap Blade server, and that’s all you get from HP.

We have an anecdote about something akin to this from about two years 
ago.  A potential customer was testing a <insert large multi-letter 
acronym brand name here> machine to run a specific piece of software 
that was tightly coupled to its disks.  Performance was terrible.  Our 
partner (the software vendor) contacted us and asked for help.  We 
suggested that the partner loan the customer the machine the partner 
had bought from us two years earlier (at the time) and try that.

Our two-year-old machine (actually two generations back at the time of 
the test, and now five generations behind our current kit) wound up 
being more than an order of magnitude faster than the (then) latest and 
greatest kit from <insert large multi-letter acronym brand name here>.

The lesson is this: latest and greatest doesn't mean fastest.  Design 
and implementation matter.  Brand names don't.

To this day, we still see machines being pushed out with PCI-X 
technology for networking, or disk, or ...

... and customers buy them up, for reasons that have little to do with 
performance, suitability to the task, etc.

If you need performance, it's important to focus some effort on 
locating systems/vendors capable of performing where you need them to 
perform.  Otherwise you may wind up with a warmed-over web server with 
some random card and a few "fast" disks tossed in.

I don't mean to be blunt, but this is basically what you were sold. 
Note also that I see this in cluster file system work all the time.  We 
get calls from people who describe a design and ask us for help making 
it go fast.  We discover that they've made some deep, fundamental 
design decisions very poorly (usually on the basis of what <insert 
large multi-letter acronym brand name here> told them their options 
were), and that there is no way to get from point A (their per-unit 
performance) to point B (the aggregate system performance they were 
hoping for).

At the most basic level, your performance will be limited by your 
slowest-performing part.  You can put infinitely fast disks behind a 
slow controller, and your performance will be terrible.  Put slow disks 
behind a very fast controller, and you will likely have better luck.
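
To make the bottleneck arithmetic concrete, here's a minimal 
back-of-the-envelope sketch (the throughput figures are purely 
illustrative assumptions, not measurements of any particular controller 
or disk):

    # Rough bottleneck model: aggregate streaming throughput is capped by
    # the slowest stage in the I/O path, not by the sum of the fastest parts.
    # All numbers below are made-up illustrative figures, not benchmarks.

    def effective_throughput(disk_mb_s, n_disks, controller_mb_s):
        """Aggregate sequential throughput, capped by the controller."""
        return min(disk_mb_s * n_disks, controller_mb_s)

    # "Fast" disks behind a slow RAID controller:
    print(effective_throughput(disk_mb_s=150, n_disks=8, controller_mb_s=300))
    # -> 300 MB/s: the controller throws away most of the disks' speed.

    # Slower disks behind a controller that can actually keep up:
    print(effective_throughput(disk_mb_s=100, n_disks=8, controller_mb_s=2000))
    # -> 800 MB/s: nothing upstream chokes the disks.

The same min() applies at every hop -- expander, controller, PCI(e) 
slot, memory, network -- so a single undersized link caps the whole 
system.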

/Hoping this lesson is not lost ...

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 41+ messages
2012-04-25  8:07 A little RAID experiment Stefan Ring
2012-04-25 14:17 ` Roger Willcocks
2012-04-25 16:23   ` Stefan Ring
2012-04-27 14:03     ` Stan Hoeppner
2012-04-26  8:53   ` Stefan Ring
2012-04-27 15:10     ` Stan Hoeppner
2012-04-27 15:28     ` Joe Landman [this message]
2012-04-28  4:42       ` Stan Hoeppner
2012-04-27 13:50 ` Stan Hoeppner
2012-05-01 10:46   ` Stefan Ring
2012-05-30 11:07     ` Stefan Ring
2012-05-31  1:30       ` Stan Hoeppner
2012-05-31  6:44         ` Stefan Ring
2012-07-16 19:57 ` Stefan Ring
2012-07-16 20:03   ` Stefan Ring
2012-07-16 20:05     ` Stefan Ring
2012-07-16 21:27   ` Stan Hoeppner
2012-07-16 21:58     ` Stefan Ring
2012-07-17  1:39       ` Stan Hoeppner
2012-07-17  5:26         ` Dave Chinner
2012-07-18  2:18           ` Stan Hoeppner
2012-07-18  6:44             ` Stefan Ring
2012-07-18  7:09               ` Stan Hoeppner
2012-07-18  7:22                 ` Stefan Ring
2012-07-18 10:24                   ` Stan Hoeppner
2012-07-18 12:32                     ` Stefan Ring
2012-07-18 12:37                       ` Stefan Ring
2012-07-19  3:08                         ` Stan Hoeppner
2012-07-25  9:29                           ` Stefan Ring
2012-07-25 10:00                             ` Stan Hoeppner
2012-07-25 10:08                               ` Stefan Ring
2012-07-25 11:00                                 ` Stan Hoeppner
2012-07-26  8:32                             ` Dave Chinner
2012-09-11 16:37                               ` Stefan Ring
2012-07-16 22:16     ` Stefan Ring
2012-10-10 14:57 ` Stefan Ring
2012-10-10 21:27   ` Dave Chinner
2012-10-10 22:01     ` Stefan Ring
2012-04-26 22:33 Richard Scobie
2012-04-27 21:30 ` Emmanuel Florac
2012-04-28  4:15   ` Richard Scobie
