linux-btrfs.vger.kernel.org archive mirror
* is it safe to change BTRFS_STRIPE_LEN?
@ 2014-05-24 16:44 john terragon
  2014-05-24 19:07 ` Austin S Hemmelgarn
  0 siblings, 1 reply; 3+ messages in thread
From: john terragon @ 2014-05-24 16:44 UTC (permalink / raw)
  To: linux-btrfs

Hi.

I'm playing around with (software) raid0 on SSDs. Since I remember
reading somewhere that Intel recommends a 128K stripe size for HDD
arrays but only 16K for SSD arrays, I wanted to see how a smaller
stripe size would work on my system. Obviously, with btrfs on top of
md-raid I could use whatever stripe size I want. But if I'm not
mistaken, the stripe size for native raid0 in btrfs is fixed at 64K by
BTRFS_STRIPE_LEN (volumes.h).
So I was wondering if it would be reasonably safe to just change that
to 16K (and duck and wait for the explosion ;) ).
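
Something like the following one-line change in fs/btrfs/volumes.h is
what I have in mind (just a sketch; the exact definition and its
surroundings may differ between kernel versions):

/* fs/btrfs/volumes.h -- current definition (sketch, may vary by version):
 *
 *     #define BTRFS_STRIPE_LEN   (64 * 1024)
 *
 * for the experiment this would simply become:
 */
#define BTRFS_STRIPE_LEN	(16 * 1024)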

Can anyone familiar with the inner workings of the btrfs raid0 code
confirm whether that would be the right way to proceed? (Obviously, no
blame is to be placed on anyone other than myself if things should go
badly. :) )

Thanks

john


* Re: is it safe to change BTRFS_STRIPE_LEN?
  2014-05-24 16:44 is it safe to change BTRFS_STRIPE_LEN? john terragon
@ 2014-05-24 19:07 ` Austin S Hemmelgarn
  2014-05-24 20:01   ` john terragon
  0 siblings, 1 reply; 3+ messages in thread
From: Austin S Hemmelgarn @ 2014-05-24 19:07 UTC (permalink / raw)
  To: john terragon, linux-btrfs


On 05/24/2014 12:44 PM, john terragon wrote:
> Hi.
> 
> I'm playing around with (software) raid0 on SSDs. Since I remember
> reading somewhere that Intel recommends a 128K stripe size for HDD
> arrays but only 16K for SSD arrays, I wanted to see how a smaller
> stripe size would work on my system. Obviously, with btrfs on top of
> md-raid I could use whatever stripe size I want. But if I'm not
> mistaken, the stripe size for native raid0 in btrfs is fixed at 64K by
> BTRFS_STRIPE_LEN (volumes.h).
> So I was wondering if it would be reasonably safe to just change that
> to 16K (and duck and wait for the explosion ;) ).
> 
> Can anyone familiar with the inner workings of the btrfs raid0 code
> confirm whether that would be the right way to proceed? (Obviously, no
> blame is to be placed on anyone other than myself if things should go
> badly. :) )
I personally can't say whether changing it would break things or not,
but I do know that it would need to be changed in both the kernel and
the tools, and the resulting kernel and tools would not be fully
compatible with filesystems produced by the regular tools and kernel,
possibly to the point of corrupting any filesystem they touch.

As for the 64K default stripe size, that sounds correct, and it is
probably because that's the largest block that the I/O schedulers on
Linux will dispatch as a single write to the underlying device.
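
To illustrate why the kernel and the tools have to agree on that value:
the stripe length feeds directly into the raid0 logical-to-physical
mapping, roughly like this (a simplified sketch for illustration, not
the actual btrfs mapping code):

#include <stdint.h>

/*
 * Simplified raid0 striping math -- illustration only, not the actual
 * btrfs code.  stripe_len plays the role of BTRFS_STRIPE_LEN.
 */
void map_raid0(uint64_t chunk_offset, uint64_t stripe_len,
               int num_stripes, int *dev_index, uint64_t *dev_offset)
{
        uint64_t stripe_nr = chunk_offset / stripe_len;    /* which stripe */

        *dev_index  = (int)(stripe_nr % num_stripes);      /* which device */
        *dev_offset = (stripe_nr / num_stripes) * stripe_len
                      + chunk_offset % stripe_len;         /* offset on that device */
}

A kernel or mkfs built with a different stripe length computes different
device offsets for the same logical address, which is how data on an
existing multi-device filesystem could end up scrambled.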




* Re: is it safe to change BTRFS_STRIPE_LEN?
  2014-05-24 19:07 ` Austin S Hemmelgarn
@ 2014-05-24 20:01   ` john terragon
  0 siblings, 0 replies; 3+ messages in thread
From: john terragon @ 2014-05-24 20:01 UTC (permalink / raw)
  To: Austin S Hemmelgarn; +Cc: linux-btrfs

Yes, btrfs-tools would have to be recompiled too (BTRFS_STRIPE_LEN is
defined in a volumes.h there as well). And yes, the modified kernel and
tools would almost certainly kill any existing raid0 btrfs filesystem,
and maybe any other multi-device setup as well.
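
For completeness, the userspace side is the same kind of one-line
change; in the btrfs-progs source the constant lives in its own
volumes.h (again just a sketch, the exact location may differ between
releases), e.g.:

/* btrfs-progs volumes.h -- must be kept in sync with the kernel value */
#define BTRFS_STRIPE_LEN	(16 * 1024)	/* was (64 * 1024) */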


On Sat, May 24, 2014 at 9:07 PM, Austin S Hemmelgarn
<ahferroin7@gmail.com> wrote:
> On 05/24/2014 12:44 PM, john terragon wrote:
>> Hi.
>>
>> I'm playing around with (software) raid0 on SSDs. Since I remember
>> reading somewhere that Intel recommends a 128K stripe size for HDD
>> arrays but only 16K for SSD arrays, I wanted to see how a smaller
>> stripe size would work on my system. Obviously, with btrfs on top of
>> md-raid I could use whatever stripe size I want. But if I'm not
>> mistaken, the stripe size for native raid0 in btrfs is fixed at 64K by
>> BTRFS_STRIPE_LEN (volumes.h).
>> So I was wondering if it would be reasonably safe to just change that
>> to 16K (and duck and wait for the explosion ;) ).
>>
>> Can anyone familiar with the inner workings of the btrfs raid0 code
>> confirm whether that would be the right way to proceed? (Obviously, no
>> blame is to be placed on anyone other than myself if things should go
>> badly. :) )
> I personally can't say whether changing it would break things or not,
> but I do know that it would need to be changed in both the kernel and
> the tools, and the resulting kernel and tools would not be fully
> compatible with filesystems produced by the regular tools and kernel,
> possibly to the point of corrupting any filesystem they touch.
>
> As for the 64K default stripe size, that sounds correct, and it is
> probably because that's the largest block that the I/O schedulers on
> Linux will dispatch as a single write to the underlying device.
>


