From: Rudy Zijlstra <rudy@grumpydevil.homelinux.org>
To: Ed W <lists@wildgooses.com>
Cc: Stan Hoeppner <stan@hardwarefreak.com>, linux-raid@vger.kernel.org
Subject: Re: HBA Adaptor advice
Date: Sat, 21 May 2011 13:29:09 +0200	[thread overview]
Message-ID: <4DD7A205.8070106@grumpydevil.homelinux.org> (raw)
In-Reply-To: <4DD79F4E.7000509@wildgooses.com>

Hi Ed,

I understand your thinking. There is one big cost not mentioned in this
calculation though:
- what is the cost if the data is lost or corrupted?

Compared to that cost, how relevant is the cost of a proper card?

I get the feeling of "penny wise, pound foolish" here.

That mindset, of course, describes many a business....

Cheers,


Rudy

On 05/21/2011 01:17 PM, Ed W wrote:
> Hi Stan
>
> Thanks for the time in composing your reply
>
>    
>> I'm curious why you are convinced that you need BBWC, or even simply WC,
>> on an HBA used for md RAID.
>>      
> In the past I have used battery-backed cards, and where the write speed
> is "fsync constrained" the writeback cache makes application performance
> fly at perhaps 10-100x the speed.
>
> Postfix delivery and MySQL writes are examples of workloads which
> generate regular fsyncs.  Each fsync stalls the whole application for
> roughly the seek time of the drive head, so performance is bounded by
> seek time (assuming spinning media).
>
> If we add a writeback cache, then it would appear that you can take a
> couple of "green" 2TB drives and suddenly your desktop server acquires
> short-term performance which matches a bunch of high-end drives (only in
> bursts, of course; after some seconds you fall back to the drives'
> sustained IOPS).  For my basically "small server" requirements this gives
> me a big boost in the feeling of interactivity, for perhaps less than the
> price of a couple of those high-end drives.
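>
> As a rough illustration (not a proper benchmark, and the file path is
> just an example), an fsync-bound workload can be approximated with dd by
> forcing a synchronous write per block:
>
>   # each 4 KiB write is issued with O_DSYNC, so it completes
>   # synchronously before the next one starts
>   dd if=/dev/zero of=/srv/test/syncfile bs=4k count=1000 oflag=dsync
>
> On bare spinning disks that tends to be a tiny fraction of the streaming
> write speed; with a writeback cache in front of them it runs orders of
> magnitude faster until the cache fills.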
>
>
>    
>>   I'm also curious as to why you are so
>> adamant about _not_ using the RAID ASIC on an HBA, given that it will
>> take much greater advantage of the BBWC than md RAID will.
>>      
> Only for a single reason: it's a small office server and I want the
> flexibility to move the drives to a different card (e.g. failed server,
> failed card or something else).  Buying a spare card changes the
> dynamics quite a bit when the whole server (sans RAID card) only costs
> £1,000 or so.
>
>
>
>> You may be interested to know:
>>
>> 1.  When BBWC is enabled, all internal drive caches must be disabled.
>>      Otherwise you eliminate the design benefit of the BBU, and may as
>>      well not have one.
>>      
> Yes, I hadn't thought of that.  Good point!
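>
> For SATA drives that's typically a one-liner with hdparm (the device
> name below is just a placeholder; SAS drives need sdparm or the
> controller's own tools instead):
>
>   hdparm -W 0 /dev/sdX   # switch the drive's volatile write cache off
>   hdparm -W 1 /dev/sdX   # switch it back on
>
> Worth checking the setting actually sticks across a power cycle, since
> some drives reset it.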
>
>    
>> 2.  w/md RAID on an HBA, if you have a good UPS and don't suffer
>>      kernel panics, crashes, etc, you can disable barrier support in
>>      your FS and you can use the drive caches.
>>      
> I don't buy this...
>
> Note we are discussing "long tail" events here, i.e. catastrophic events
> which occur very infrequently.  At this point experience is everything,
> and I concede limited experience - you likely have more - but I'm going
> to claim that these events are sufficiently rare that your experience
> probably still isn't sufficient to draw proper conclusions...
>
> In my limited experience hardware is pretty reliable and goes bad
> rarely.  However, my estimate is that power cables fall out, PSUs fail
> and UPSes go bad at least as often as the mains power fails?
>
> Obviously it's application-dependent - some may tolerate a small data
> loss in the event of power failure - but I should think most people want
> a guarantee that the system is "recoverable" after a sudden power-down.
>
> I think disabling barriers might not be the best way to avoid fsync
> delays, compared with the incremental cost of adding a BBU-backed
> writeback cache (basically the same effect, but with a smaller chance of
> failure).
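>
> For reference, barriers are a per-filesystem mount option; a minimal
> example assuming ext4 (the mount point is hypothetical):
>
>   mount -o remount,barrier=0 /data   # disable write barriers
>   mount -o remount,barrier=1 /data   # restore the default
>
> (XFS spells it nobarrier/barrier.)  It trades safety for speed across
> the whole filesystem, which is exactly why the BBU route feels safer to
> me.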
>
>
>    
>> For a stable system with good UPS and auto shutdown configured, BBWC is
>> totally overrated.  If the system never takes a nose dive from power
>> drop, and doesn't crash due to software or hardware failure, then BBWC
>> is a useless $200-1000 option.
>>      
> It depends on the application, but I claim that there is a fairly
> significant chance of a hard, unexpected power-down even with a good UPS.
> You are still at risk from cables getting pulled, UPSes failing, etc.
>
> I think in a properly set up datacenter (racked) environment it's easier
> to control these accidents.  Cables can be tied in, layers of power
> backup can be managed, and it becomes efficient to add quality
> surge/lightning protection.  However, there is a large proportion of the
> market that has a few machines in an office, where it's much harder to
> stop the cleaner tripping over the UPS, or hiding it under boxes of
> paper until it melts due to overheating...
>
>
>    
>> If your current reasoning for wanting write cache on the HBA is
>> performance, then forget about the write cache as you don't need it with
>> md RAID.  If you want the BBWC combo for safety as your system isn't
>> stable or you have a crappy or no UPS, then forgo md RAID and use the
>> hardware RAID and BBWC combo.
>>      
> I want the battery-backed writeback cache purely to get the performance
> of effectively disabling fsync, but without the loss of protection which
> comes with actually doing so.
>
>
>    
>> One last point:  If you're bargain hunting, especially if looking at
>> used gear on Ebay, that mindset is antithetical to proper system
>> integration, especially when talking about a RAID card BBU.
>>      
> I think there are few businesses that genuinely don't care about budget.
> Everything is about optimisation of cost vs. performance vs. reliability.
> Like everything else, my question is really about the trade-off of a
> small incremental spend which might generate a substantial performance
> increase for certain classes of application.  Largely I'm thinking about
> performance trade-offs for small office servers priced in the £500-3,000
> kind of range (not "proper" high-end storage devices).
>
> I think at that kind of level it makes sense to look for bargains,
> especially if you are adding servers in small quantities, e.g. singles
> or pairs.
>
>
>    
>> If you buy
>> a used card, the first thing you must do is chuck the BBU and order a new
>> one,
>>      
> Agreed
>
>
>    
>> Buy 12:
>> http://www.seagate.com/ww/v/index.jsp?name=st91000640ss-constellation2-6gbs-sas-1-tb-hd&vgnextoid=ff13c5b2933d9210VgnVCM1000001a48090aRCRD&vgnextchannel=f424072516d8c010VgnVCM100000dd04090aRCRD&locale=en-US&reqPage=Support#tTabContentSpecifications
>>      
> Out of curiosity I checked the power consumption and reliability numbers
> of the 3.5" "Green" drives, and it's not so clear-cut that the 2.5"
> drives outperform them.
>
>
> Thanks for your thoughts - I think this thread has been very
> constructive.  I'm still very interested to hear good/bad reports of
> specific cards; perhaps someone might archive them into some kind of
> list?
>
> Cheers
>
> Ed W
>
