* swidth in RAID
@ 2013-06-30 18:43 aurfalien
  2013-06-30 19:08 ` Peter Grandi
  2013-06-30 21:42 ` Stan Hoeppner
  0 siblings, 2 replies; 11+ messages in thread
From: aurfalien @ 2013-06-30 18:43 UTC (permalink / raw)
  To: xfs

Hi,

I understand swidth should = #data disks.

And the docs say for RAID 6 of 8 disks, that means 6.

But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.

So shouldn't swidth equal the number of disks in the RAID when it's a distributed-parity RAID?

- aurf

* Re: swidth in RAID
  2013-06-30 18:43 swidth in RAID aurfalien
@ 2013-06-30 19:08 ` Peter Grandi
  2013-06-30 21:42 ` Stan Hoeppner
  1 sibling, 0 replies; 11+ messages in thread
From: Peter Grandi @ 2013-06-30 19:08 UTC (permalink / raw)
  To: Linux fs XFS

> I understand swidth should = #data disks.  And the docs say
> for RAID 6 of 8 disks, that means 6. [ ... ] 8 disks/spindles
> working for you and a bit of parity on each. So shouldn't
> swidth equal disks in raid when its concerning distributed
> parity raid?

The main goal is trying to reduce the probability of
read-modify-write.
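
To put rough numbers on that (purely illustrative, assuming a 6+2
RAID6 with a 64KiB chunk, i.e. a 384KiB data stripe): a 384KiB write
aligned to a stripe boundary lets the array compute both parity
chunks from the data it already has in hand, so it issues 8 chunk
writes and no reads.  A 64KiB write landing in the middle of a stripe
instead forces the array to read old data and/or old parity first,
recompute, and only then write -- that is the read-modify-write
penalty a correct swidth helps the filesystem avoid.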


* Re: swidth in RAID
  2013-06-30 18:43 swidth in RAID aurfalien
  2013-06-30 19:08 ` Peter Grandi
@ 2013-06-30 21:42 ` Stan Hoeppner
  2013-06-30 22:36   ` aurfalien
  2013-07-01  1:38   ` Dave Chinner
  1 sibling, 2 replies; 11+ messages in thread
From: Stan Hoeppner @ 2013-06-30 21:42 UTC (permalink / raw)
  To: xfs

On 6/30/2013 1:43 PM, aurfalien wrote:

> I understand swidth should = #data disks.

No.  "swidth" is a byte value specifying the number of 512 byte blocks
in the data stripe.

"sw" is #data disks.

> And the docs say for RAID 6 of 8 disks, that means 6.
> 
> But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
> 
> So shouldn't swidth equal disks in raid when its concerning distributed parity raid?

No.  Let's try visual aids.

Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
Separate 18 blues (data) and 6 reds (parity).

Drop a blue m&m into cups 1-6 and a red into 7-8.  You just wrote one
RAID stripe.  Now drop a blue into cups 3-8 and a red into 1-2.  That's
your second write, this time rotated two cups (drives) to the right.
Now drop blues into cups 5-8 and 1-2 (wrapping around) and reds into
3-4.  You've written your third stripe, rotating by two cups (disks)
again.

This is pretty much how RAID6 works.  Each time we wrote we dropped 8
m&m's into 8 cups, 6 blue (data chunks) and 2 red (parity chunks).
Every RAID stripe you write will be constructed of 6 blues and 2 reds.
XFS, or EXT4, or any filesystem, can only drop blues, and only into
the 6 data cups of each stripe.  The RAID adds the two reds to every
stripe.

Maybe now you understand why sw=6 for an 8 drive RAID6.  And now maybe
you understand what "distributed parity" actually means--every stripe is
shifted, not just the parity chunks.
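
To translate that into mkfs terms (an illustration only -- the 64KiB
chunk size below is an assumption, substitute whatever your array
actually uses):

  # 8-drive RAID6 = 6 data disks, assumed 64KiB chunk
  mkfs.xfs -d su=64k,sw=6 /dev/sdX

  # the same geometry expressed in 512-byte units
  mkfs.xfs -d sunit=128,swidth=768 /dev/sdX

Note that swidth comes out as 6 x 128 = 768, not 8 x 128, for exactly
the reason above: only the blue m&m's count.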

-- 
Stan


* Re: swidth in RAID
  2013-06-30 21:42 ` Stan Hoeppner
@ 2013-06-30 22:36   ` aurfalien
  2013-07-01  1:38   ` Dave Chinner
  1 sibling, 0 replies; 11+ messages in thread
From: aurfalien @ 2013-06-30 22:36 UTC (permalink / raw)
  To: stan; +Cc: xfs


On Jun 30, 2013, at 2:42 PM, Stan Hoeppner wrote:

> On 6/30/2013 1:43 PM, aurfalien wrote:
> 
>> I understand swidth should = #data disks.
> 
> No.  "swidth" is a byte value specifying the number of 512 byte blocks
> in the data stripe.
> 
> "sw" is #data disks.
> 
>> And the docs say for RAID 6 of 8 disks, that means 6.
>> 
>> But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
>> 
>> So shouldn't swidth equal disks in raid when its concerning distributed parity raid?
> 
> No.  Lets try visual aids.
> 
> Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
> Separate 24 blues (data) and 8 reds (parity).

But are the cups 8oz, 16oz or smaller/larger?

Ceramic, plastic, glass, etc...?

Actually I really enjoyed the visual aid, many many thanks.

- aurf


* Re: swidth in RAID
  2013-06-30 21:42 ` Stan Hoeppner
  2013-06-30 22:36   ` aurfalien
@ 2013-07-01  1:38   ` Dave Chinner
  2013-07-01  1:54     ` aurfalien
  1 sibling, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2013-07-01  1:38 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs

On Sun, Jun 30, 2013 at 04:42:06PM -0500, Stan Hoeppner wrote:
> On 6/30/2013 1:43 PM, aurfalien wrote:
> 
> > I understand swidth should = #data disks.
> 
> No.  "swidth" is a byte value specifying the number of 512 byte blocks
> in the data stripe.
> 
> "sw" is #data disks.
> 
> > And the docs say for RAID 6 of 8 disks, that means 6.
> > 
> > But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
> > 
> > So shouldn't swidth equal disks in raid when its concerning distributed parity raid?
> 
> No.  Lets try visual aids.
> 
> Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
> Separate 24 blues (data) and 8 reds (parity).
> 
> Drop a blue m&m in cups 1-6 and a red into 7-8.  You just wrote one RAID
> stripe.  Now drop a blue into cups 3-8 and a red in 1-2.  Your second
> write, this time rotating two cups (drives) to the right.  Now drop
> blues into 5-2 and reds into 3-4.  You've written your third stripe,
> rotating by two cups (disks) again.
> 
> This is pretty much how RAID6 works.  Each time we wrote we dropped 8
> m&m's into 8 cups, 6 blue (data chunks) and 2 red (parity chunks).
> Every RAID stripe you write will be constructed of 6 blues and 2 reds.

Right, that's how they are constructed, but not all RAID distributes
parity across different disks in the array. Some are symmetric, some
are asymmetric, some rotate right, some rotate left, and some use
statistical algorithms to give an overall distribution without being
able to predict where a specific parity block might lie within a
stripe...

And at the other end of the scale, isochronous RAID arrays tend to
have dedicated parity disks so that data read and write behaviour is
deterministic and therefore predictable from a high level....

So, assuming that a RAID5/6 device has a specific data layout (be it
distributed or fixed) at the filesystem level is just a bad idea. We
simply don't know. Even if we did, the only thing we can optimise is
the thing that is common between all RAID5/6 devices - writing full
stripe widths is the most optimal method of writing to them....
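
(As a sanity check -- and assuming an su=64k,sw=6 geometry purely for
illustration -- the alignment the filesystem recorded can be read back
with xfs_info after mkfs.  It reports sunit/swidth in filesystem
blocks, so with a 4KiB block size the above would show up as sunit=16,
swidth=96: 64KiB/4KiB = 16 and 6 x 16 = 96.  Whatever the array's
internal parity layout, that alignment is all the filesystem ever
knows about it.)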

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: swidth in RAID
  2013-07-01  1:38   ` Dave Chinner
@ 2013-07-01  1:54     ` aurfalien
  2013-07-01  2:09       ` Dave Chinner
  0 siblings, 1 reply; 11+ messages in thread
From: aurfalien @ 2013-07-01  1:54 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Stan Hoeppner, xfs


On Jun 30, 2013, at 6:38 PM, Dave Chinner wrote:

> On Sun, Jun 30, 2013 at 04:42:06PM -0500, Stan Hoeppner wrote:
>> On 6/30/2013 1:43 PM, aurfalien wrote:
>> 
>>> I understand swidth should = #data disks.
>> 
>> No.  "swidth" is a byte value specifying the number of 512 byte blocks
>> in the data stripe.
>> 
>> "sw" is #data disks.
>> 
>>> And the docs say for RAID 6 of 8 disks, that means 6.
>>> 
>>> But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
>>> 
>>> So shouldn't swidth equal disks in raid when its concerning distributed parity raid?
>> 
>> No.  Lets try visual aids.
>> 
>> Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
>> Separate 24 blues (data) and 8 reds (parity).
>> 
>> Drop a blue m&m in cups 1-6 and a red into 7-8.  You just wrote one RAID
>> stripe.  Now drop a blue into cups 3-8 and a red in 1-2.  Your second
>> write, this time rotating two cups (drives) to the right.  Now drop
>> blues into 5-2 and reds into 3-4.  You've written your third stripe,
>> rotating by two cups (disks) again.
>> 
>> This is pretty much how RAID6 works.  Each time we wrote we dropped 8
>> m&m's into 8 cups, 6 blue (data chunks) and 2 red (parity chunks).
>> Every RAID stripe you write will be constructed of 6 blues and 2 reds.
> 
> Right, that's how they are constructed, but not all RAID distributes
> parity across different disks in the array. Some are symmetric, some
> are asymmetric, some rotate right, some rotate left, and some use
> statistical algorithms to give an overall distribution without being
> able to predict where a specific parity block might lie within a
> stripe...
> 
> And at the other end of the scale, isochronous RAID arrays tend to
> have dedicated parity disks so that data read and write behaviour is
> deterministic and therefore predictable from a high level....
> 
> So, assuming that a RAID5/6 device has a specific data layout (be it
> distributed or fixed) at the filesystem level is just a bad idea. We
> simply don't know. Even if we did, the only thing we can optimise is
> the thing that is common between all RAID5/6 devices - writing full
> stripe widths is the most optimal method of writing to them....

Am I interpreting this to say:

16 disks means sw=16 regardless of parity?

As the thing in common is the number of disks.  Or is 1 parity disk the least common denominator, which would mean sw=15?

Peter brought this up:

The main goal is trying to reduce the probability of
read-modify-write.

Which is a way for me to think of it as "don't oversubscribe".

- aurf


* Re: swidth in RAID
  2013-07-01  1:54     ` aurfalien
@ 2013-07-01  2:09       ` Dave Chinner
  2013-07-01  2:47         ` Stan Hoeppner
  0 siblings, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2013-07-01  2:09 UTC (permalink / raw)
  To: aurfalien; +Cc: Stan Hoeppner, xfs

On Sun, Jun 30, 2013 at 06:54:31PM -0700, aurfalien wrote:
> 
> On Jun 30, 2013, at 6:38 PM, Dave Chinner wrote:
> 
> > On Sun, Jun 30, 2013 at 04:42:06PM -0500, Stan Hoeppner wrote:
> >> On 6/30/2013 1:43 PM, aurfalien wrote:
> >> 
> >>> I understand swidth should = #data disks.
> >> 
> >> No.  "swidth" is a byte value specifying the number of 512 byte blocks
> >> in the data stripe.
> >> 
> >> "sw" is #data disks.
> >> 
> >>> And the docs say for RAID 6 of 8 disks, that means 6.
> >>> 
> >>> But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
> >>> 
> >>> So shouldn't swidth equal disks in raid when its concerning distributed parity raid?
> >> 
> >> No.  Lets try visual aids.
> >> 
> >> Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
> >> Separate 24 blues (data) and 8 reds (parity).
> >> 
> >> Drop a blue m&m in cups 1-6 and a red into 7-8.  You just wrote one RAID
> >> stripe.  Now drop a blue into cups 3-8 and a red in 1-2.  Your second
> >> write, this time rotating two cups (drives) to the right.  Now drop
> >> blues into 5-2 and reds into 3-4.  You've written your third stripe,
> >> rotating by two cups (disks) again.
> >> 
> >> This is pretty much how RAID6 works.  Each time we wrote we dropped 8
> >> m&m's into 8 cups, 6 blue (data chunks) and 2 red (parity chunks).
> >> Every RAID stripe you write will be constructed of 6 blues and 2 reds.
> > 
> > Right, that's how they are constructed, but not all RAID distributes
> > parity across different disks in the array. Some are symmetric, some
> > are asymmetric, some rotate right, some rotate left, and some use
> > statistical algorithms to give an overall distribution without being
> > able to predict where a specific parity block might lie within a
> > stripe...
> > 
> > And at the other end of the scale, isochronous RAID arrays tend to
> > have dedicated parity disks so that data read and write behaviour is
> > deterministic and therefore predictable from a high level....
> > 
> > So, assuming that a RAID5/6 device has a specific data layout (be it
> > distributed or fixed) at the filesystem level is just a bad idea. We
> > simply don't know. Even if we did, the only thing we can optimise is
> > the thing that is common between all RAID5/6 devices - writing full
> > stripe widths is the most optimal method of writing to them....
> 
> Am I interpreting this to say;
> 
> 16 disks is sw=16 regardless of parity?

No. I'm just saying that parity layout is irrelevant to the
filesystem, and that all we care about is that sw does not include
parity disks.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: swidth in RAID
  2013-07-01  2:09       ` Dave Chinner
@ 2013-07-01  2:47         ` Stan Hoeppner
  2013-07-01  2:54           ` aurfalien
  0 siblings, 1 reply; 11+ messages in thread
From: Stan Hoeppner @ 2013-07-01  2:47 UTC (permalink / raw)
  To: xfs

On 6/30/2013 9:09 PM, Dave Chinner wrote:
> On Sun, Jun 30, 2013 at 06:54:31PM -0700, aurfalien wrote:
>>
>> On Jun 30, 2013, at 6:38 PM, Dave Chinner wrote:
>>
>>> On Sun, Jun 30, 2013 at 04:42:06PM -0500, Stan Hoeppner wrote:
>>>> On 6/30/2013 1:43 PM, aurfalien wrote:
>>>>
>>>>> I understand swidth should = #data disks.
>>>>
>>>> No.  "swidth" is a byte value specifying the number of 512 byte blocks
>>>> in the data stripe.
>>>>
>>>> "sw" is #data disks.
>>>>
>>>>> And the docs say for RAID 6 of 8 disks, that means 6.
>>>>>
>>>>> But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
>>>>>
>>>>> So shouldn't swidth equal disks in raid when its concerning distributed parity raid?
>>>>
>>>> No.  Lets try visual aids.
>>>>
>>>> Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
>>>> Separate 24 blues (data) and 8 reds (parity).
>>>>
>>>> Drop a blue m&m in cups 1-6 and a red into 7-8.  You just wrote one RAID
>>>> stripe.  Now drop a blue into cups 3-8 and a red in 1-2.  Your second
>>>> write, this time rotating two cups (drives) to the right.  Now drop
>>>> blues into 5-2 and reds into 3-4.  You've written your third stripe,
>>>> rotating by two cups (disks) again.
>>>>
>>>> This is pretty much how RAID6 works.  Each time we wrote we dropped 8
>>>> m&m's into 8 cups, 6 blue (data chunks) and 2 red (parity chunks).
>>>> Every RAID stripe you write will be constructed of 6 blues and 2 reds.
>>>
>>> Right, that's how they are constructed, but not all RAID distributes
>>> parity across different disks in the array. Some are symmetric, some
>>> are asymmetric, some rotate right, some rotate left, and some use
>>> statistical algorithms to give an overall distribution without being
>>> able to predict where a specific parity block might lie within a
>>> stripe...
>>>
>>> And at the other end of the scale, isochronous RAID arrays tend to
>>> have dedicated parity disks so that data read and write behaviour is
>>> deterministic and therefore predictable from a high level....
>>>
>>> So, assuming that a RAID5/6 device has a specific data layout (be it
>>> distributed or fixed) at the filesystem level is just a bad idea. We
>>> simply don't know. Even if we did, the only thing we can optimise is
>>> the thing that is common between all RAID5/6 devices - writing full
>>> stripe widths is the most optimal method of writing to them....
>>
>> Am I interpreting this to say;
>>
>> 16 disks is sw=16 regardless of parity?
> 
> No. I'm just saying that parity layout is irrelevant to the
> filesystem and that all we care about is sw does not include parity
> disks.

So, here's the formula, aurfalien, where #disks is the total number of
active disks (excluding spares) in the RAID array:

RAID5	sw = (#disks - 1)
RAID6	sw = (#disks - 2)
RAID10  sw = (#disks / 2) [1]

[1] If using the Linux md/RAID10 driver with one of the non-standard
layouts such as n2 or f2, the formula may change.  This is beyond the
scope of this discussion.  Visit the linux-raid mailing list for further
details.
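
Applied to the 16-disk case you asked about earlier (a worked
illustration only -- the 256KiB chunk size here is an assumption, use
whatever your array actually reports):

  # 16-drive RAID6 -> 14 data disks
  mkfs.xfs -d su=256k,sw=14 /dev/sdX

  # 16-drive RAID5 -> 15 data disks
  mkfs.xfs -d su=256k,sw=15 /dev/sdX

  # 16-drive RAID10 -> 8 data disks (but see [1])
  mkfs.xfs -d su=256k,sw=8 /dev/sdX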

-- 
Stan


* Re: swidth in RAID
  2013-07-01  2:47         ` Stan Hoeppner
@ 2013-07-01  2:54           ` aurfalien
  2013-07-02 21:48             ` Peter Grandi
  0 siblings, 1 reply; 11+ messages in thread
From: aurfalien @ 2013-07-01  2:54 UTC (permalink / raw)
  To: stan; +Cc: xfs


On Jun 30, 2013, at 7:47 PM, Stan Hoeppner wrote:

> On 6/30/2013 9:09 PM, Dave Chinner wrote:
>> On Sun, Jun 30, 2013 at 06:54:31PM -0700, aurfalien wrote:
>>> 
>>> On Jun 30, 2013, at 6:38 PM, Dave Chinner wrote:
>>> 
>>>> On Sun, Jun 30, 2013 at 04:42:06PM -0500, Stan Hoeppner wrote:
>>>>> On 6/30/2013 1:43 PM, aurfalien wrote:
>>>>> 
>>>>>> I understand swidth should = #data disks.
>>>>> 
>>>>> No.  "swidth" is a byte value specifying the number of 512 byte blocks
>>>>> in the data stripe.
>>>>> 
>>>>> "sw" is #data disks.
>>>>> 
>>>>>> And the docs say for RAID 6 of 8 disks, that means 6.
>>>>>> 
>>>>>> But parity is distributed and you actually have 8 disks/spindles working for you and a bit of parity on each.
>>>>>> 
>>>>>> So shouldn't swidth equal disks in raid when its concerning distributed parity raid?
>>>>> 
>>>>> No.  Lets try visual aids.
>>>>> 
>>>>> Set 8 coffee cups (disk drives) on a table.  Grab a bag of m&m's.
>>>>> Separate 24 blues (data) and 8 reds (parity).
>>>>> 
>>>>> Drop a blue m&m in cups 1-6 and a red into 7-8.  You just wrote one RAID
>>>>> stripe.  Now drop a blue into cups 3-8 and a red in 1-2.  Your second
>>>>> write, this time rotating two cups (drives) to the right.  Now drop
>>>>> blues into 5-2 and reds into 3-4.  You've written your third stripe,
>>>>> rotating by two cups (disks) again.
>>>>> 
>>>>> This is pretty much how RAID6 works.  Each time we wrote we dropped 8
>>>>> m&m's into 8 cups, 6 blue (data chunks) and 2 red (parity chunks).
>>>>> Every RAID stripe you write will be constructed of 6 blues and 2 reds.
>>>> 
>>>> Right, that's how they are constructed, but not all RAID distributes
>>>> parity across different disks in the array. Some are symmetric, some
>>>> are asymmetric, some rotate right, some rotate left, and some use
>>>> statistical algorithms to give an overall distribution without being
>>>> able to predict where a specific parity block might lie within a
>>>> stripe...
>>>> 
>>>> And at the other end of the scale, isochronous RAID arrays tend to
>>>> have dedicated parity disks so that data read and write behaviour is
>>>> deterministic and therefore predictable from a high level....
>>>> 
>>>> So, assuming that a RAID5/6 device has a specific data layout (be it
>>>> distributed or fixed) at the filesystem level is just a bad idea. We
>>>> simply don't know. Even if we did, the only thing we can optimise is
>>>> the thing that is common between all RAID5/6 devices - writing full
>>>> stripe widths is the most optimal method of writing to them....
>>> 
>>> Am I interpreting this to say;
>>> 
>>> 16 disks is sw=16 regardless of parity?
>> 
>> No. I'm just saying that parity layout is irrelevant to the
>> filesystem and that all we care about is sw does not include parity
>> disks.
> 
> So, here's the formula aurfalien, where #disks is the total number of
> active disks (excluding spares) in the RAID array.  In the case of
> 
> RAID5	sw = (#disks - 1)
> RAID6	sw = (#disks - 2)
> RAID10  sw = (#disks / 2) [1]
> 
> [1] If using the Linux md/RAID10 driver with one of the non-standard
> layouts such as n2 or f2, the formula may change.  This is beyond the
> scope of this discussion.  Visit the linux-raid mailing list for further

I totally got your original post with the cup o M&Ms.

Just wanted his take on it is all.

And I'm on too many mailing lists as it is :)

- aurf

* Re: swidth in RAID
  2013-07-01  2:54           ` aurfalien
@ 2013-07-02 21:48             ` Peter Grandi
  2013-07-03  0:15               ` Stan Hoeppner
  0 siblings, 1 reply; 11+ messages in thread
From: Peter Grandi @ 2013-07-02 21:48 UTC (permalink / raw)
  To: Linux fs XFS

[ ... ]

>> RAID5	sw = (#disks - 1)
>> RAID6	sw = (#disks - 2)
>> RAID10       sw = (#disks / 2) [1]

What was probably all that needed saying is that 'swidth'/'sw' matter
nearly only for avoiding read-modify-write, and there is no reason to
confuse the already confused by mentioning RAID10 (or RAID0) here,
where read-modify-write won't happen.

The somewhat secondary reason why stripe width, or rather something
related to it, may matter even for non-parity RAID sets is for
filesystems that try to lay out metadata tables so that the metadata
does not all end up on a subset of the disks in the RAID set, which
might occur if the metadata table alignment is congruent with the
"chunk" alignment.

That, for example, is likely to happen with 'ext[234]' filetrees, and
accordingly 'man mke2fs' rightly mentions for 'stripe-width'
(equivalent to 'swidth'/'sw') that it matters only for parity RAID
sets and because of read-modify-write:

  "This allows the block allocator to prevent read-modify-write
  of the parity in a RAID stripe if possible when the data is
  written."

and about 'stride' (the equivalent of 'su'/'sunit' in XFS) it says:

  "This mostly affects placement of filesystem metadata like
  bitmaps at mke2fs time to avoid placing them on a single disk,
  which can hurt performance.  It may also be used by the block
  allocator."

Uhm, I thought that also affected placement of inode tables, but I
may be misremembering. Whether metadata alignment issues are likely
to happen with XFS, where metadata allocation is more dynamic than in
'ext[234]', and whether it currently contains code to deal with them,
I don't remember.
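
To put concrete numbers on the 'ext[234]' case (a sketch only,
assuming a 6+2 RAID6 with a 64KiB chunk and 4KiB filesystem blocks):

  # stride       = chunk / block       = 64KiB / 4KiB = 16
  # stripe_width = stride * data disks = 16 * 6       = 96
  mkfs.ext4 -E stride=16,stripe_width=96 /dev/sdX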

Also, even assuming that 'sw' mattered for RAID10 for reasons other
than parity updates (which RAID10 does not do), the formula above is
simplistic:

>> [ ... ]
>> [1] If using the Linux md/RAID10 driver with one of the
>> non-standard layouts such as n2 or f2, the formula may
>> change. [ ... ]

Here the default layout is 'n' and the alternatives are 'o' and 'f';
also, with Linux MD there can be an odd number of members in a RAID10
set. Not that it matters, as RAID10 (and some others) of any shape has
no parity to update on write, so the specific physical layout of
blocks is not relevant for RMW.

Anyhow I wrote a brief overall description of RMW here some time
ago:

  http://www.sabi.co.uk/blog/12-thr.html#120414

as RMW is an issue that matters in several cases other than
parity RAID.

Also because I think this is the third or fourth time it has needed
repeating on some mailing list that stripe width matters almost only
for RAID with parity, and thus almost not at all for RAID10.


* Re: swidth in RAID
  2013-07-02 21:48             ` Peter Grandi
@ 2013-07-03  0:15               ` Stan Hoeppner
  0 siblings, 0 replies; 11+ messages in thread
From: Stan Hoeppner @ 2013-07-03  0:15 UTC (permalink / raw)
  To: xfs

On 7/2/2013 4:48 PM, Peter Grandi wrote:

> What was probably all that needed saying for once is that

What was needed was a simple explanation demonstrating the answer the
OP was looking for, Peter.  I provided that.

In your 10,000th attempt to generate self gratification by demonstrating
your superior knowledge (actually lack thereof) on this list, all you
could have possibly achieved here is confusing the OP even further.

I find it sad that you've decided to prey on the young, the
inexperienced, after realizing the educated began ignoring you long ago.

-- 
Stan
