* WD Red vs Black drives for RAID1
@ 2015-11-16 16:28 John Stoffel
  2015-11-16 17:05 ` Another Sillyname
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: John Stoffel @ 2015-11-16 16:28 UTC (permalink / raw)
  To: Linux-RAID


Guys,

I'm starting to get tons of errors on my various mixed 1 and 2Tb
drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
It's time to start replacing them and I think I want to either go with
the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
SSDs to use with lvmcache for speedup.

Any comments?

John


* Re: WD Red vs Black drives for RAID1
  2015-11-16 16:28 WD Red vs Black drives for RAID1 John Stoffel
@ 2015-11-16 17:05 ` Another Sillyname
  2015-11-16 17:35   ` John Stoffel
  2015-11-16 17:27 ` Jens-U. Mozdzen
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Another Sillyname @ 2015-11-16 17:05 UTC (permalink / raw)
  To: Linux-RAID

Some idea of the apps and required response times would likely get
you better answers.

On 16 November 2015 at 16:28, John Stoffel <john@stoffel.org> wrote:
>
> Guys,
>
> I'm starting to get tons of errors on my various mixed 1 and 2Tb
> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
> It's time to start replacing them and I think I want to either go with
> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
> SSDs to use with lvmcache for speedup.
>
> Any comments?
>
> John


* Re: WD Red vs Black drives for RAID1
  2015-11-16 16:28 WD Red vs Black drives for RAID1 John Stoffel
  2015-11-16 17:05 ` Another Sillyname
@ 2015-11-16 17:27 ` Jens-U. Mozdzen
  2015-11-16 17:32   ` John Stoffel
  2015-11-16 17:45 ` Robert L Mathews
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Jens-U. Mozdzen @ 2015-11-16 17:27 UTC (permalink / raw)
  To: John Stoffel; +Cc: Linux-RAID

Hi John,

Zitat von John Stoffel <john@stoffel.org>:
> Guys,
>
> I'm starting to get tons of errors on my various mixed 1 and 2Tb
> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
> It's time to start replacing them and I think I want to either go with
> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
> SSDs to use with lvmcache for speedup.
>
> Any comments?

How are the drives to be attached to the server?

We started with a bunch of 1TB WD Reds (2.5") connected to a
SuperMicro server (2028TP-DECR, with 12 disk bays) via a SAS3
expander... bad choice. We saw random hangs under various loads,
letting disks drop out of the RAID6. SuperMicro support blamed the
disks as such ("not enterprise-grade"), while WD responded that the
SAS expander is the source of the trouble, despite it supposedly
supporting SATA drives. SCTERC was set to 7 seconds.
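
For the record, setting that with smartmontools looks roughly like
this (device name is an example; the values are in deciseconds):

  # enable SCT ERC: 7.0 seconds for both reads and writes
  smartctl -l scterc,70,70 /dev/sdX

On many drives the setting doesn't survive a power cycle, so it may
need reapplying at boot.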

WD's response matches our own observations: using the same drives in
a non-expander environment (older SuperMicro servers) gives us no
trouble at all.

We found these WD Reds to be a bit slow, but really liked the power  
consumption / heat aspects of the drives and of course the price per  
GB. As we paired the disks with SSD caching, actual disk speed was no  
issue in our case.

Regards,
Jens



* Re: WD Red vs Black drives for RAID1
  2015-11-16 17:27 ` Jens-U. Mozdzen
@ 2015-11-16 17:32   ` John Stoffel
  2015-11-16 17:44     ` Jens-U. Mozdzen
  0 siblings, 1 reply; 15+ messages in thread
From: John Stoffel @ 2015-11-16 17:32 UTC (permalink / raw)
  To: Jens-U. Mozdzen; +Cc: John Stoffel, Linux-RAID

>>>>> "Jens-U" == Jens-U Mozdzen <jmozdzen@nde.ag> writes:

Jens-U> Hi John,
Jens-U> Zitat von John Stoffel <john@stoffel.org>:
>> Guys,
>> 
>> I'm starting to get tons of errors on my various mixed 1 and 2Tb
>> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
>> It's time to start replacing them and I think I want to either go with
>> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
>> SSDs to use with lvmcache for speedup.
>> 
>> Any comments?

Jens-U> How are the drives to be attached to the server?

I'm planning on just hooking them into the:

  Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008
  PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

PCIe controller I have in the system.   This is strictly my home
server, not anything special.  Except to me.  :-)

Jens-U> We started with a bunch of 1TB WD Reds (2.5") connected to a
Jens-U> SuperMicro server (2028TP-DECR, with 12 disk bays) via a SAS3
Jens-U> expander... bad choice. We saw random hangs under various
Jens-U> loads, letting disks drop out of the RAID6. SuperMicro support
Jens-U> blamed the disks as such ("not enterprise-grade"), while WD
Jens-U> responded that the SAS expander is the source of the trouble,
Jens-U> despite it supposedly supporting SATA drives. SCTERC was set
Jens-U> to 7 seconds.

This is my other complaint: it's damn hard to determine SCTERC support
from the vendor specification documents.  They're practically useless.
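
At least once a drive is in hand, smartctl gives a straight answer -
a quick check along these lines (device name is an example):

  # query SCT ERC support and current settings
  smartctl -l scterc /dev/sdX

It either reports the read/write timers or tells you the command
isn't supported.  No help when deciding what to buy, though.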

Jens-U> WD's response matches our own observations: using the same
Jens-U> drives in a non-expander environment (older SuperMicro
Jens-U> servers) gives us no trouble at all.

Jens-U> We found these WD Reds to be a bit slow, but really liked the
Jens-U> power consumption / heat aspects of the drives and of course
Jens-U> the price per GB. As we paired the disks with SSD caching,
Jens-U> actual disk speed was no issue in our case.

Were you using lvmcache?  How did you like it?  Any problems or
issues?  SSD prices are down enough now to make it really tempting to
just get a pair of big 4Tb drives and then the smaller SSDs for
caching, but I'm concerned about reliability and durability.  Which is
why I tend to triple mirror my RAID1 drives...



* Re: WD Red vs Black drives for RAID1
  2015-11-16 17:05 ` Another Sillyname
@ 2015-11-16 17:35   ` John Stoffel
  0 siblings, 0 replies; 15+ messages in thread
From: John Stoffel @ 2015-11-16 17:35 UTC (permalink / raw)
  To: Another Sillyname; +Cc: Linux-RAID


Another> Some idea of apps and required response times would likely
Another> get a better response.

Sorry, it's purely a home NFS/KVM server.  A couple of VMs run
constantly, but I spin up test VMs fairly frequently to test things
out and play with new setups.

I'm not looking for killer performance; after all, it's an AMD Phenom
II X4 server!  The CPU is actually quite enough for my needs; it's the
disks that are starting to get old and crufty.

So instead of just getting a bunch of 2Tb disks, maybe it's time to
get fewer large disks in RAID1, paired with 500Gb SSDs for the
boot/OS volumes and lvmcache.

I'm more concerned with durability and resiliency than I am with
absolute disk space and performance, which is why I'm looking at
fewer spindles.

Does this clarify things?  

John


* Re: WD Red vs Black drives for RAID1
  2015-11-16 17:32   ` John Stoffel
@ 2015-11-16 17:44     ` Jens-U. Mozdzen
  2015-11-17  5:04       ` Brad Campbell
  0 siblings, 1 reply; 15+ messages in thread
From: Jens-U. Mozdzen @ 2015-11-16 17:44 UTC (permalink / raw)
  To: John Stoffel; +Cc: Linux-RAID

Hi John,

Zitat von John Stoffel <john@stoffel.org>:
>>>>>> "Jens-U" == Jens-U Mozdzen <jmozdzen@nde.ag> writes:
>
> Jens-U> Hi John,
> Jens-U> Zitat von John Stoffel <john@stoffel.org>:
>>> Guys,
>>>
>>> I'm starting to get tons of errors on my various mixed 1 and 2Tb
>>> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
>>> It's time to start replacing them and I think I want to either go with
>>> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
>>> SSDs to use with lvmcache for speedup.
>>>
>>> Any comments?
>
> Jens-U> How are the drives to be attached to the server?
>
> I'm planning on just hooking them into the:
>
>   Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008
>   PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

According to WD support, hooking the Reds to the SAS adapter directly
should be no problem; they say it's the expander that causes the trouble.

> [...]
> Jens-U> We found these WD Reds to be a bit slow, but really liked the
> Jens-U> power consumption / heat aspects of the drives and of course
> Jens-U> the price per GB. As we paired the disks with SSD caching,
> Jens-U> actual disk speed was no issue in our case.
>
> Were you using lvmcache?  How did you like it?  Any problems or
> issues?  SSD prices are down enough now to make it really tempting to
> just get a pair of big 4Tb drives and then the smaller SSDs for
> caching, but I'm concerned about reliability and durability.  Which is
> why I tend to triple mirror my RAID1 drives...

We're using bcache, which is working nicely for us, but it required
lots of work to get there (bug fixes are mostly on the corresponding
mailing list, not upstream - and there were some nasty bugs, indeed).
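
For reference, the basic bcache plumbing is only a handful of
commands - a rough sketch, with device names as examples:

  # format the backing device (HDD array) and the cache device (SSD array)
  make-bcache -B /dev/md0
  make-bcache -C /dev/md1
  # attach the cache set to the backing device (UUID from bcache-super-show)
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach
  # cache writes as well as reads
  echo writeback > /sys/block/bcache0/bcache/cache_mode

The hard part was the bug hunting, not the configuration.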

We're using both read & write caching, with really positive results:
iowait without caching is easily above 25% on this machine, but drops
to 4% with SSD caching. Even while dirty buffers are being flushed
from SSD to HDD, the SSD cache still serves most read requests, so
the user experience is fairly good.

We've set up RAID6 for the HDD backing store and a two-SSD RAID1 for
the cache... and on top of each logical volume we have DRBD
replication to a backup server (which was originally meant for running
backups, but served nicely when the RAID6 went down).

The SSD cache is 128GB, with typically less than 4GB of dirty cache
lines - so there's plenty of read cache, too.
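
(The dirty figure is readable via sysfs:

  cat /sys/block/bcache0/bcache/dirty_data

- bcache0 being an example device name.)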

Regards,
Jens



* Re: WD Red vs Black drives for RAID1
  2015-11-16 16:28 WD Red vs Black drives for RAID1 John Stoffel
  2015-11-16 17:05 ` Another Sillyname
  2015-11-16 17:27 ` Jens-U. Mozdzen
@ 2015-11-16 17:45 ` Robert L Mathews
  2015-11-16 19:50   ` John Stoffel
  2015-11-16 18:07 ` Wols Lists
  2015-11-16 18:28 ` Phil Turmel
  4 siblings, 1 reply; 15+ messages in thread
From: Robert L Mathews @ 2015-11-16 17:45 UTC (permalink / raw)
  To: Linux-RAID

On 11/16/15 8:28 AM, John Stoffel wrote:

> I'm starting to get tons of errors on my various mixed 1 and 2Tb
> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
> It's time to start replacing them and I think I want to either go with
> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
> SSDs to use with lvmcache for speedup.

I have no comment on Red vs Black, but I do have experience with a
similar, though simpler, caching setup.

Replacing one disk of a triple RAID1 array with an SSD, and marking the
other two spinning disks "write-mostly", vastly improves the performance
of the entire array in a read-heavy environment, with no extra caching
layer required.
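
For anyone wanting to try it, the md side looks roughly like this
(device names are examples; the flag can also be set with -W /
--write-mostly at --add time):

  # swap one spinning member for the SSD
  mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
  mdadm /dev/md0 --add /dev/sde1
  # mark the remaining spinning members write-mostly
  echo writemostly > /sys/block/md0/md/dev-sda1/state
  echo writemostly > /sys/block/md0/md/dev-sdb1/state

With that, md serves reads from the SSD and only touches the
write-mostly members when it has to.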

It drops the read latency to almost zero in all cases, as you would
expect. But it also improves the write latency significantly, because
when a write occurs, it will never be queued behind a spinning disk
read: the spinning disks are more likely to be idle when they receive
the writes.

In our case, where the problem was mostly high latencies from disk seeks
in a read-heavy environment (not slow throughput reading/writing large
files), adding a single SSD reduced the overall average combined
read/write "await" latency by more than 50%.

I considered this preferable to an extra-layer caching solution because:

1) Reads of *all* files are from the SSD, not just some files;
2) It's conceptually simpler than an extra caching layer, so there's
   less to go wrong;
3) It didn't even require a reboot to implement with hot-swap disks;
4) Our eventual goal was to replace all the spinning disks in the
   arrays with SSDs as they reached end of life anyway, and it would
   have been extra work to remove the caching layer at that point.

(Interestingly, when we did later replace the other two spinning disks
with SSDs, it made less difference than adding the first SSD.)

If your environment is write-heavy, a cache layer to intercept all
writes may make more sense, of course.

-- 
Robert L Mathews, Tiger Technologies, http://www.tigertech.net/


* Re: WD Red vs Black drives for RAID1
  2015-11-16 16:28 WD Red vs Black drives for RAID1 John Stoffel
                   ` (2 preceding siblings ...)
  2015-11-16 17:45 ` Robert L Mathews
@ 2015-11-16 18:07 ` Wols Lists
  2015-11-16 18:28 ` Phil Turmel
  4 siblings, 0 replies; 15+ messages in thread
From: Wols Lists @ 2015-11-16 18:07 UTC (permalink / raw)
  To: John Stoffel, Linux-RAID

On 16/11/15 16:28, John Stoffel wrote:
> 
> Guys,
> 
> I'm starting to get tons of errors on my various mixed 1 and 2Tb
> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
> It's time to start replacing them and I think I want to either go with
> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
> SSDs to use with lvmcache for speedup.
> 
> Any comments?
> 
I'm running Seagate Barracudas in a mirror (probably similar to the
Blacks). I haven't come across reports of problems IN A MIRROR
CONFIGURATION.

However, I want to go RAID 5 (or 6) at some point, and all the advice
is DON'T BUY DESKTOP DRIVES (i.e. Barracudas, Blacks, Greens) if
that's the route you're planning on going down. So I've got to replace
my Barracudas :-(

If you want to go 5 or 6 (which might get you better response speeds
too - I don't know), then Reds are your only choice. (Or Seagate NAS;
since I'm a Seagate guy, that's the route I might go.)

Because desktop drives don't support proper error recovery, it's all
too easy for what should be a little problem to trash the array - if
you follow the list, I'd say well over half the "help, my array is
trashed" threads here are down to someone using desktop drives.

The price difference isn't *that* much - I suspect a lot of people
here will say that if reliability trumps performance, you should pay
the extra for Red or NAS drives.

Cheers,
Wol


* Re: WD Red vs Black drives for RAID1
  2015-11-16 16:28 WD Red vs Black drives for RAID1 John Stoffel
                   ` (3 preceding siblings ...)
  2015-11-16 18:07 ` Wols Lists
@ 2015-11-16 18:28 ` Phil Turmel
  2015-11-16 19:52   ` John Stoffel
  4 siblings, 1 reply; 15+ messages in thread
From: Phil Turmel @ 2015-11-16 18:28 UTC (permalink / raw)
  To: John Stoffel, Linux-RAID

On 11/16/2015 11:28 AM, John Stoffel wrote:
> 
> Guys,
> 
> I'm starting to get tons of errors on my various mixed 1 and 2Tb
> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
> It's time to start replacing them and I think I want to either go with
> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
> SSDs to use with lvmcache for speedup.
> 
> Any comments?

The data sheet for the Blacks implies that they do *not* have TLER,
also known as ERC.  This is vital for proper out-of-the-box operation
in any Linux RAID environment, with the exception of RAID0.

Search the archives for "timeout mismatch" for detailed explanations
of why this is important.

The Red family does have ERC and will work properly.
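
For drives that lack ERC, the usual mitigation is to raise the
kernel's command timeout above the drive's worst-case internal retry
time - a sketch, with the device name as an example:

  # default is 30 seconds; desktop drives can retry for 2+ minutes
  echo 180 > /sys/block/sdX/device/timeout

That only papers over the mismatch, though; ERC-capable drives set to
7 seconds are the proper fix.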

Phil



* Re: WD Red vs Black drives for RAID1
  2015-11-16 17:45 ` Robert L Mathews
@ 2015-11-16 19:50   ` John Stoffel
  0 siblings, 0 replies; 15+ messages in thread
From: John Stoffel @ 2015-11-16 19:50 UTC (permalink / raw)
  To: Robert L Mathews; +Cc: Linux-RAID

>>>>> "Robert" == Robert L Mathews <lists@tigertech.com> writes:

Robert> On 11/16/15 8:28 AM, John Stoffel wrote:
>> I'm starting to get tons of errors on my various mixed 1 and 2Tb
>> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
>> It's time to start replacing them and I think I want to either go with
>> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
>> SSDs to use with lvmcache for speedup.

Robert> I have no comment on Red vs Black, but I do have experience
Robert> with a similar, though simpler, caching setup.

Robert> Replacing one disk of a triple RAID1 array with an SSD, and
Robert> marking the other two spinning disks "write-mostly", vastly
Robert> improves the performance of the entire array in a read-heavy
Robert> environment, with no extra caching layer required.

This is a great idea, and I'd go this route myself since I already
triple-mirror my important disks, but with 3Tb of disks (1Tb x 3, 2Tb
x 3) already in my setup, I'm looking for:

A) more space
B) cost is a prime factor
C) robust reliability

So my investigation of bcache and lvmcache has me leaning towards
lvmcache, if only because I can add it in without having to re-do my
entire setup and migrate data around.
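
From what I've read, attaching a cache to an existing LV is only a
couple of commands - roughly this, with made-up VG/LV names for
illustration:

  # carve a cache pool out of the SSD PV...
  lvcreate --type cache-pool -L 400G -n cpool vg0 /dev/sdf1
  # ...and bolt it onto the existing LV
  lvconvert --type cache --cachepool vg0/cpool vg0/data

and the cache can apparently be detached again later if it doesn't
work out.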

For example, if I take out two disks, a 1Tb and a 2Tb, and then add
in a pair of 4Tb disks mirrored, I can migrate my LVs over (taking
the downtime on the VolGroup with the 1Tb disks, since that's
less-used data...) and keep the system up and running.

Then I can shut down, remove the four old disks, put in the 2 x 500Gb
SSDs, and then bring things up, move stuff around, add lvmcache live,
etc.

Robert> It drops the read latency to almost zero in all cases, as you
Robert> would expect. But it also improves the write latency
Robert> significantly, because when a write occurs, it will never be
Robert> queued behind a spinning disk read: the spinning disks are
Robert> more likely to be idle when they receive the writes.

Robert> In our case, where the problem was mostly high latencies from
Robert> disk seeks in a read-heavy environment (not slow throughput
Robert> reading/writing large files), adding a single SSD reduced the
Robert> overall average combined read/write "await" latency by more
Robert> than 50%.

Mine is more of a home NAS setup, used for compiles, mail, light web
development, backups using bacula, mysql, KVMs, etc.  So it's a fairly
mixed and low-stress environment.  But I'm now getting bombarded with
all kinds of warnings about bad blocks, and I'm losing multiple
disks... so it's time to seriously look into replacements.

Robert> I considered this preferable to an extra-layer caching
Robert> solution because: 1) Reads of *all* files are from the SSD,
Robert> not just some files; 2) It's conceptually simpler than an
Robert> extra caching layer, so there's less to go wrong; 3) It didn't
Robert> even require a reboot to implement with hot-swap disks; 4) Our
Robert> eventual goal was to replace all the spinning disks in the
Robert> arrays with SSDs as they reached end of life anyway, and it
Robert> would have been extra work to remove the caching layer at that
Robert> point.  (Interestingly, when we did later replace the other
Robert> two spinning disks with SSDs, it made less difference than
Robert> adding the first SSD.)

All these points are excellent.  It all founders on the cost of a 3Tb
SSD.  :-)

Robert> If your environment is write-heavy, a cache layer to intercept all
Robert> writes may make more sense, of course.

Robert> -- 
Robert> Robert L Mathews, Tiger Technologies, http://www.tigertech.net/


* Re: WD Red vs Black drives for RAID1
  2015-11-16 18:28 ` Phil Turmel
@ 2015-11-16 19:52   ` John Stoffel
  2015-11-16 20:02     ` Phil Turmel
  0 siblings, 1 reply; 15+ messages in thread
From: John Stoffel @ 2015-11-16 19:52 UTC (permalink / raw)
  To: Phil Turmel; +Cc: John Stoffel, Linux-RAID

>>>>> "Phil" == Phil Turmel <philip@turmel.org> writes:

Phil> On 11/16/2015 11:28 AM, John Stoffel wrote:
>> 
>> Guys,
>> 
>> I'm starting to get tons of errors on my various mixed 1 and 2Tb
>> drives I have in a bunch of RAID 1 mirrors, generally triple mirrors.
>> It's time to start replacing them and I think I want to either go with
>> the WD Black 4Tb or the WD Red 4Tb drives.  And with a pair of 500Gb
>> SSDs to use with lvmcache for speedup.
>> 
>> Any comments?

Phil> The data sheet for the Blacks implies that they do *not* have
Phil> TLER, also known as ERC.  This is vital for proper
Phil> out-of-the-box operation in any Linux RAID environment, with the
Phil> exception of RAID0.

So I like the 5-year warranty on the Blacks, but it does look like
the Reds are the way to go.  And I think I'll also go with splitting
my data between Seagate and WD and possibly Hitachi (I know, they've
been bought by WD) to make a three-way RAID1 mirror across 4Tb
drives.  Yes, I'd get more room out of RAID5, but I'm not that silly,
and I don't need to move to 4 x 4Tb in RAID6 either.
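
The three-way mirror itself is trivial with md - something like this,
with example device names:

  mdadm --create /dev/md0 --level=1 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1

so it really just comes down to which three drives to buy.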

Who knows... still pricing things out.

John


* Re: WD Red vs Black drives for RAID1
  2015-11-16 19:52   ` John Stoffel
@ 2015-11-16 20:02     ` Phil Turmel
  2015-11-16 20:16       ` John Stoffel
  2015-11-16 20:55       ` Wols Lists
  0 siblings, 2 replies; 15+ messages in thread
From: Phil Turmel @ 2015-11-16 20:02 UTC (permalink / raw)
  To: John Stoffel; +Cc: Linux-RAID

On 11/16/2015 02:52 PM, John Stoffel wrote:

> So I like the 5-year warranty on the Blacks, but it does look like
> the Reds are the way to go.  And I think I'll also go with splitting
> my data between Seagate and WD and possibly Hitachi (I know, they've
> been bought by WD) to make a three-way RAID1 mirror across 4Tb
> drives.  Yes, I'd get more room out of RAID5, but I'm not that silly,
> and I don't need to move to 4 x 4Tb in RAID6 either.

Seagate was the brand that screwed me first with the industry-wide
deletion of ERC support in desktop drives.  Hitachi held onto it the
longest.  Whatever you consider, read the data sheets carefully to
ensure they have ERC support.  Google the model number along with
'linux-raid' and 'scterc' to see our past experiences with specific drives.


* Re: WD Red vs Black drives for RAID1
  2015-11-16 20:02     ` Phil Turmel
@ 2015-11-16 20:16       ` John Stoffel
  2015-11-16 20:55       ` Wols Lists
  1 sibling, 0 replies; 15+ messages in thread
From: John Stoffel @ 2015-11-16 20:16 UTC (permalink / raw)
  To: Phil Turmel; +Cc: John Stoffel, Linux-RAID

>>>>> "Phil" == Phil Turmel <philip@turmel.org> writes:

Phil> On 11/16/2015 02:52 PM, John Stoffel wrote:
>> So I like the 5-year warranty on the Blacks, but it does look like
>> the Reds are the way to go.  And I think I'll also go with splitting
>> my data between Seagate and WD and possibly Hitachi (I know, they've
>> been bought by WD) to make a three-way RAID1 mirror across 4Tb
>> drives.  Yes, I'd get more room out of RAID5, but I'm not that silly,
>> and I don't need to move to 4 x 4Tb in RAID6 either.

Phil> Seagate was the brand that screwed me first with the
Phil> industry-wide deletion of ERC support in desktop drives.
Phil> Hitachi held onto it the longest.  Whatever you consider, read
Phil> the data sheets carefully to ensure they have ERC support.
Phil> Google the model number along with 'linux-raid' and 'scterc' to
Phil> see our past experiences with specific drives.

Yeah, I'm hoping I can find the Hitachis at a good price, but right
now the WD Reds look like the best deal.  I'm just leery of getting
too many from the same vendor in case I get a bad batch.

But it might be OK to get 2 x WD Reds and one of the Seagate NAS
drives to do a triple mirror.  And then wait before I do the pair of
SSDs for lvmcache, though maybe just going with a pair of 64Gb ones
would be enough for my needs.  I really don't change all that many
files a night.



* Re: WD Red vs Black drives for RAID1
  2015-11-16 20:02     ` Phil Turmel
  2015-11-16 20:16       ` John Stoffel
@ 2015-11-16 20:55       ` Wols Lists
  1 sibling, 0 replies; 15+ messages in thread
From: Wols Lists @ 2015-11-16 20:55 UTC (permalink / raw)
  To: Phil Turmel, John Stoffel; +Cc: Linux-RAID

On 16/11/15 20:02, Phil Turmel wrote:
> Seagate was the brand that screwed me first with the industry-wide
> deletion of ERC support in desktop drives.  Hitachi held onto it the
> longest.  Whatever you consider, read the data sheets carefully to
> ensure they have ERC support.  Google the model number along with
> 'linux-raid' and 'scterc' to see our past experiences with specific drives.

Oddly enough, when I was looking at the model number for the Seagate NAS
drives, I noticed they started with HDS ...

Cheers,
Wol


* Re: WD Red vs Black drives for RAID1
  2015-11-16 17:44     ` Jens-U. Mozdzen
@ 2015-11-17  5:04       ` Brad Campbell
  0 siblings, 0 replies; 15+ messages in thread
From: Brad Campbell @ 2015-11-17  5:04 UTC (permalink / raw)
  To: Jens-U. Mozdzen, John Stoffel; +Cc: Linux-RAID

On 17/11/15 01:44, Jens-U. Mozdzen wrote:
> Hi John,
>
> Zitat von John Stoffel <john@stoffel.org>:

>> Jens-U> How are the drives to be attached to the server?
>>
>> I'm planning on just hooking them into the:
>>
>>   Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008
>>   PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
>
> According to WD support, hooking the Reds to the SAS adapter directly
> should be no problem; they say it's the expander that causes the trouble.

I have 5 Reds and 9 Greens (all with TLER) connected to some of those
controllers (except mine are rev 02). I have those drives in a 14-way
RAID6, and I get some odd (non-terminal) errors on my monthly scrubs
but nothing in normal use.
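
(For anyone unfamiliar, an md scrub is just the "check" sync action,

  echo check > /sys/block/md0/md/sync_action

kicked off monthly from cron or the distro's checkarray script - md0
being an example name.)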

I think *my* problem is cheap cables to the backplane, but as it only
occurs once a month during a scrub and a retry always succeeds, I've
not been bothered to do anything about it.

Errors like this:

[3385803.162623] sd 9:0:5:0: [sdr] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[3385803.193353] sd 9:0:5:0: [sdr] Sense Key : 0x3 [current]
[3385803.224289] sd 9:0:5:0: [sdr] ASC=0x11 ASCQ=0x0
[3385803.255393] sd 9:0:5:0: [sdr] CDB: opcode=0x28 28 00 24 84 65 00 00 00 80 00
[3385803.287287] blk_update_request: critical medium error, dev sdr, sector 612656384

I have an array of SAS drives on one controller and I don't see those 
issues. It only happens on the SATA drives.

When setting up this system a few years ago I did borrow a SAS
expander to play with, but I encountered some odd issues with the SATA
drives (WD Green) on the expander and ended up going with 3
controllers instead.

I've just been replacing the Greens with Reds as they start to fail.
All in all I'm really happy with the Reds, and my next major hardware
refresh will see the 14 current drives replaced with six 6TB Reds.
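
Recent mdadm makes the swap painless via hot-replace instead of
fail/remove/re-add - roughly, with example device names:

  mdadm /dev/md0 --add /dev/sdo1
  mdadm /dev/md0 --replace /dev/sdr1 --with /dev/sdo1

so the array keeps full redundancy while the replacement rebuilds.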

The performance difference between the 5400rpm drives and the 7200rpm
drives in an array turns out to be bugger all, plus the slower drives
run cooler and use less power.  I'd still be using Greens if they
hadn't removed TLER.


