* Which Disks can fail?
From: Jonathan Tripathy @ 2011-06-21 10:24 UTC
  To: linux-raid

Hi Everyone,

Using md's "single process" RAID10 with the standard near layout (which is 
apparently the same as RAID1+0 in industry), which two drives could fail 
without losing the array?

This is what I have:

Number   Major   Minor   RaidDevice State
        0       8        5        0      active sync   /dev/sda5
        1       8       21        1      active sync   /dev/sdb5
        2       8       37        2      active sync   /dev/sdc5
        3       8       53        3      active sync   /dev/sdd5
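
That table is the device list from mdadm's detail output; a sketch, with 
the array name /dev/md0 assumed since it isn't shown above:

   mdadm --detail /dev/md0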

Thanks


* Re: Which Disks can fail?
From: NeilBrown @ 2011-06-21 10:45 UTC
  To: Jonathan Tripathy; +Cc: linux-raid

On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy <jonnyt@abpni.co.uk>
wrote:

> Hi Everyone,
> 
> Using md's "single process" RAID10 with the standard near layout (which is 
> apparently the same as RAID1+0 in industry), which two drives could fail 
> without losing the array?
> 
> This is what I have:
> 
> Number   Major   Minor   RaidDevice State
>         0       8        5        0      active sync   /dev/sda5
>         1       8       21        1      active sync   /dev/sdb5
>         2       8       37        2      active sync   /dev/sdc5
>         3       8       53        3      active sync   /dev/sdd5
> 
> Thanks

Run

   man 4 md

 search for "RAID10"

 read what you find, and if it doesn't make sense, ask again.
 If it does make sense, post your answer and feel free to ask for
 confirmation.


NeilBrown


* Re: Which Disks can fail?
From: Jonathan Tripathy @ 2011-06-21 10:56 UTC
  To: NeilBrown; +Cc: linux-raid


On 21/06/2011 11:45, NeilBrown wrote:
> On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy<jonnyt@abpni.co.uk>
> wrote:
>
>> Hi Everyone,
>>
>> Using md's "single process" RAID10 with the standard near layout (which is
>> apparently the same as RAID1+0 in industry), which two drives could fail
>> without losing the array?
>>
>> This is what I have:
>>
>> Number   Major   Minor   RaidDevice State
>>          0       8        5        0      active sync   /dev/sda5
>>          1       8       21        1      active sync   /dev/sdb5
>>          2       8       37        2      active sync   /dev/sdc5
>>          3       8       53        3      active sync   /dev/sdd5
>>
>> Thanks
> Run
>
>     man 4 md
>
>   search for "RAID10"
>
>   read what you find, and if it doesn't make sense, ask again.
>   If it does make sense, post your answer and feel free to ask for
>   confirmation.
>
>
> NeilBrown
Sorry, it still doesn't make much sense to me I'm afraid.

In fact, it's confused me more - since I'm using "near", does that mean 
that the "copy" (I'm using near=2) of a given chunk may lie on the same 
disk, leading to *no redundancy*?

Thanks


* Re: Which Disks can fail?
From: NeilBrown @ 2011-06-21 11:42 UTC
  To: Jonathan Tripathy; +Cc: linux-raid

On Tue, 21 Jun 2011 11:56:41 +0100 Jonathan Tripathy <jonnyt@abpni.co.uk>
wrote:

> 
> On 21/06/2011 11:45, NeilBrown wrote:
> > On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy<jonnyt@abpni.co.uk>
> > wrote:
> >
> >> Hi Everyone,
> >>
> >> Using md's "single process" RAID10 with the standard near layout (which is
> >> apparently the same as RAID1+0 in industry), which two drives could fail
> >> without losing the array?
> >>
> >> This is what I have:
> >>
> >> Number   Major   Minor   RaidDevice State
> >>          0       8        5        0      active sync   /dev/sda5
> >>          1       8       21        1      active sync   /dev/sdb5
> >>          2       8       37        2      active sync   /dev/sdc5
> >>          3       8       53        3      active sync   /dev/sdd5
> >>
> >> Thanks
> > Run
> >
> >     man 4 md
> >
> >   search for "RAID10"
> >
> >   read what you find, and if it doesn't make sense, ask again.
> >   If it does make sense, post your answer and feel free to ask for
> >   confirmation.
> >
> >
> > NeilBrown
> Sorry, it still doesn't make much sense to me I'm afraid.
> 
> In fact, it's confused me more - since I'm using "near", does that mean 
> that the "copy" (I'm using near=2) of a given chunk may lie on the same 
> disk, leading to *no redundancy*?

Clearly I need to improve the man page...  (suggestions welcome).

How do you read it that the copies of a given chunk may lie on the same disk?
I read:

       When  'near'  replicas are chosen, the multiple copies of a given chunk
       are laid out consecutively across the stripes of the array, so the  two
       copies of a datablock will likely be at the same offset on two adjacent
       devices.

"laid out consecutively across the stripes of the array" might be a bit
obscure..  A stripe is one chunk on each device, so when chunks a laid out
consecutively across a stripe, they would be one chunk per device.

Then "likely be at the same offset on two adjacent devices" should make this
clearer.  It is only "likely" because if you have an odd number of devices,
then the 2 copies of one chunk could be
  a/ at offset X on the last device
  b/ at offset X+chunk on the first device

but in general, they are on "adjacent devices"
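
To make that concrete, here is roughly how near=2 lays out chunks on a
four-device array like yours (chunk numbers, one row per stripe; a sketch
based on the description above):

   Stripe    sda5   sdb5   sdc5   sdd5
     0        C0     C0     C1     C1
     1        C2     C2     C3     C3
     2        C4     C4     C5     C5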

So in answer to your original question, sda5 and sdb5 will have the same
data, and sdc5 and sdd5 will also have the same data.
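
If you want to double-check on a live system, mdadm reports the layout
directly (a sketch - /dev/md0 is assumed, since your mail doesn't name
the array):

   mdadm --detail /dev/md0 | grep -i layout
   # e.g.:  Layout : near=2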

NeilBrown



* Re: Which Disks can fail?
From: Jonathan Tripathy @ 2011-06-21 11:57 UTC
  To: NeilBrown, linux-raid


On 21/06/2011 12:42, NeilBrown wrote:
> On Tue, 21 Jun 2011 11:56:41 +0100 Jonathan Tripathy<jonnyt@abpni.co.uk>
> wrote:
>
>> On 21/06/2011 11:45, NeilBrown wrote:
>>> On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy<jonnyt@abpni.co.uk>
>>> wrote:
>>>
>>>> Hi Everyone,
>>>>
>>>> Using md's "single process" RAID10 with the standard near layout (which is
>>>> apparently the same as RAID1+0 in industry), which two drives could fail
>>>> without losing the array?
>>>>
>>>> This is what I have:
>>>>
>>>> Number   Major   Minor   RaidDevice State
>>>>           0       8        5        0      active sync   /dev/sda5
>>>>           1       8       21        1      active sync   /dev/sdb5
>>>>           2       8       37        2      active sync   /dev/sdc5
>>>>           3       8       53        3      active sync   /dev/sdd5
>>>>
>>>> Thanks
>>> Run
>>>
>>>      man 4 md
>>>
>>>    search for "RAID10"
>>>
>>>    read what you find, and if it doesn't make sense, ask again.
>>>    If it does make sense, post your answer and feel free to ask for
>>>    confirmation.
>>>
>>>
>>> NeilBrown
>> Sorry, it still doesn't make much sense to me I'm afraid.
>>
>> In fact, it's confused me more - since I'm using "near", does that mean
>> that the "copy" (I'm using near=2) of a given chunk may lie on the same
>> disk, leading to *no redundancy*?
> Clearly I need to improve the man page...  (suggestions welcome).
>
> How do you read it that the copies of a given chunk may lie on the same disk?
> I read:
>
>         When  'near'  replicas are chosen, the multiple copies of a given chunk
>         are laid out consecutively across the stripes of the array, so the  two
>         copies of a datablock will likely be at the same offset on two adjacent
>         devices.
>
> "laid out consecutively across the stripes of the array" might be a bit
> obscure..  A stripe is one chunk on each device, so when chunks a laid out
> consecutively across a stripe, they would be one chunk per device.
>
> Then "likely be at the same offset on two adjacent devices" should make this
> clearer.  It is only "likely" because if you have an odd number of devices,
> then the 2 copies of one chunk could be
>    a/ at offset X on the last device
>    b/ at offset X+chunk on the first device
>
> but in general, they are on "adjacent devices"
>
> So in answer to your original question, sda5 and sdb5 will have the same
> data, and sdc5 and sdd5 will also have the same data.
>
> NeilBrown
>
Hi Neil,

It was the lines in the "far" section that made me have my doubts:

         "When  âfarâ  replicas  are  chosen,  the multiple copies of a 
given chunk are laid out quite distant from each other.  The
        first copy of all data blocks will be striped across the early 
part of all drives in RAID0 fashion, and then the next copy
        of all blocks will be striped across a later section of all 
drives, *always ensuring that all copies of any given block are
        on different drives.*"

The highlighted part made me think that, with "near", there would be a 
chance that both copies of a chunk would end up on the same drive.
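
For what it's worth, my reading of far=2 on four drives is roughly this
(chunk numbers per device; a sketch, so please correct me if it's wrong):

   Device:    d0   d1   d2   d3
   early:     C0   C1   C2   C3    <- first copy, striped RAID0-style
              C4   C5   C6   C7
   ...
   late:      C3   C0   C1   C2    <- second copy, rotated by one device
              C7   C4   C5   C6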

Thanks


* Re: Which Disks can fail?
From: Jonathan Tripathy @ 2011-06-21 15:31 UTC
  To: NeilBrown; +Cc: linux-raid


On 21/06/2011 12:42, NeilBrown wrote:
> On Tue, 21 Jun 2011 11:56:41 +0100 Jonathan Tripathy<jonnyt@abpni.co.uk>
> wrote:
>
>> On 21/06/2011 11:45, NeilBrown wrote:
>>> On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy<jonnyt@abpni.co.uk>
>>> wrote:
>>>
>>>> Hi Everyone,
>>>>
>>>> Using md's "single process" RAID10 with the standard near layout (which is
>>>> apparently the same as RAID1+0 in industry), which two drives could fail
>>>> without losing the array?
>>>>
>>>> This is what I have:
>>>>
>>>> Number   Major   Minor   RaidDevice State
>>>>           0       8        5        0      active sync   /dev/sda5
>>>>           1       8       21        1      active sync   /dev/sdb5
>>>>           2       8       37        2      active sync   /dev/sdc5
>>>>           3       8       53        3      active sync   /dev/sdd5
>>>>
>>>> Thanks
>>> Run
>>>
>>>      man 4 md
>>>
>>>    search for "RAID10"
>>>
>>>    read what you find, and if it doesn't make sense, ask again.
>>>    If it does make sense, post your answer and feel free to ask for
>>>    confirmation.
>>>
>>>
>>> NeilBrown
>> Sorry, it still doesn't make much sense to me I'm afraid.
>>
>> In fact, it's confused me more - since I'm using "near", does that mean
>> that the "copy" (I'm using near=2) of a given chunk may lie on the same
>> disk, leading to *no redundancy*?
> Clearly I need to improve the man page...  (suggestions welcome).
>
> How do you read it that the copies of a given chunk may lie on the same disk?
> I read:
>
>         When  'near'  replicas are chosen, the multiple copies of a given chunk
>         are laid out consecutively across the stripes of the array, so the  two
>         copies of a datablock will likely be at the same offset on two adjacent
>         devices.
>
> "laid out consecutively across the stripes of the array" might be a bit
> obscure..  A stripe is one chunk on each device, so when chunks a laid out
> consecutively across a stripe, they would be one chunk per device.
>
> Then "likely be at the same offset on two adjacent devices" should make this
> clearer.  It is only "likely" because if you have an odd number of devices,
> then the 2 copies of one chunk could be
>    a/ at offset X on the last device
>    b/ at offset X+chunk on the first device
>
> but in general, they are on "adjacent devices"
>
> So in answer to your original question, sda5 and sdb5 will have the same
> data, and sdc5 and sdd5 will also have the same data.
>
> NeilBrown
>
Thanks for your help, Neil :)

So, just to confirm: two drives could fail in my array, as long as the two 
drives weren't sda5 and sdb5, or sdc5 and sdd5. Is that correct?

Thanks


* Re: Which Disks can fail?
From: NeilBrown @ 2011-06-23  7:02 UTC
  To: Jonathan Tripathy; +Cc: linux-raid

On Tue, 21 Jun 2011 16:31:49 +0100 Jonathan Tripathy <jonnyt@abpni.co.uk>
wrote:

> So, just to confirm: two drives could fail in my array, as long as the two 
> drives weren't sda5 and sdb5, or sdc5 and sdd5. Is that correct?

Correct.
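
If you want to convince yourself without touching the real array, you can
recreate the same shape with loop devices (a sketch, run as root; the
device names and sizes are illustrative only):

   # Four small backing files on loop devices
   for i in 0 1 2 3; do
       truncate -s 100M /tmp/md-test-$i.img
       losetup /dev/loop$i /tmp/md-test-$i.img
   done

   # Same shape as your array: RAID10, near=2, four devices
   mdadm --create /dev/md9 --level=10 --layout=n2 --raid-devices=4 \
         /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

   # Fail one drive from each mirror pair: the array stays up
   mdadm /dev/md9 --fail /dev/loop0
   mdadm /dev/md9 --fail /dev/loop2
   cat /proc/mdstat              # degraded but still active

   # Failing loop0 and loop1 instead (both copies of the same data)
   # would take the array down.

   # Clean up
   mdadm --stop /dev/md9
   for i in 0 1 2 3; do losetup -d /dev/loop$i; done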

NeilBrown

