* very large data-offset?
@ 2017-11-30 10:26 Duane
  2017-11-30 10:45 ` Wols Lists
  2017-12-04  0:51 ` NeilBrown
  0 siblings, 2 replies; 6+ messages in thread
From: Duane @ 2017-11-30 10:26 UTC (permalink / raw)
  To: linux-raid


Why is the data offset set so big? I created a 3x4TB RAID5 array and the 
data offset was 128MB. Chunk size was the default 512kB.

I cannot see why such a large offset is used. I would think the data 
offset need only be at most the chunk size plus the space (1 sector) for 
the superblock and bitmap.

When reshaping the array, I am prompted to use an external file, so I 
don't see that mdadm ever uses the space.


I tried making some test arrays and got much smaller sizes. A 3x1GB 
RAID5 array with 64k chunks had a 1MB data offset.


If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem with 
setting the data offset to around 2MB?
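
(To be concrete, I mean creating it with something like this; device
names are placeholders and I haven't actually run it:

    mdadm --create /dev/md0 --level=5 --raid-devices=7 --chunk=64 \
          --data-offset=2048 /dev/sd[b-h]

where, if I read the man page right, --data-offset is in kilobytes,
so 2048 = 2MB.)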


Thanks




* Re: very large data-offset?
  2017-11-30 10:26 very large data-offset? Duane
@ 2017-11-30 10:45 ` Wols Lists
  2017-12-05  3:26   ` Duane
  2017-12-04  0:51 ` NeilBrown
  1 sibling, 1 reply; 6+ messages in thread
From: Wols Lists @ 2017-11-30 10:45 UTC (permalink / raw)
  To: Duane, linux-raid

On 30/11/17 10:26, Duane wrote:
> Why is the data offset set so big? I created a 3x4TB RAID5 array and the
> data offset was 128MB. Chunk size was the default 512kB.
> 
> I cannot see why such a large offset is used. I would think the data
> offset need only be at most the chunk size plus the space (1 sector) for
> the superblock and bitmap.
> 
> When reshaping the array, I am prompted to use an external file, so I
> don't see that mdadm ever uses the space.
> 
Do you mean the manual tells you, or that mdadm refuses to run otherwise?

Certainly with a new array on larger disks, a backup file should be
totally unnecessary.

Run a reshape of some sort, and see if the offset changes :-)
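
Something like this would be a quick test (untested, device names
invented):

    mdadm --grow /dev/md0 --chunk=128
    mdadm --examine /dev/sdb | grep -i offset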
> 
> I tried making some test arrays and got much smaller sizes. A 3x1GB
> RAID5 array with 64k chunks had a 1MB data offset.
> 
> 
> If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem with
> setting the data offset to around 2MB?
> 
Dunno. How do you know the bitmap is "just one block"? I haven't dug
into it so I don't know, but it makes sense to me that as the disks get
bigger, so does the bitmap.

What's the point of fighting the defaults, anyway? Just add
another/bigger disk if you need more space.

Cheers,
Wol



* Re: very large data-offset?
  2017-11-30 10:26 very large data-offset? Duane
  2017-11-30 10:45 ` Wols Lists
@ 2017-12-04  0:51 ` NeilBrown
  2017-12-05  3:33   ` Duane
  1 sibling, 1 reply; 6+ messages in thread
From: NeilBrown @ 2017-12-04  0:51 UTC (permalink / raw)
  To: Duane, linux-raid


On Thu, Nov 30 2017, Duane wrote:

> Why is the data offset set so big? I created a 3x4TB RAID5 array and the 
> data offset was 128MB. Chunk size was the default 512kB.

It is less than 0.1% of the device...

>
> I cannot see why such a large offset is used. I would think the data 
> offset need only be at most the chunk size plus the space (1 sector) for 
> the superblock and bitmap.

It is insurance.  If you want to change the chunksize later, having a
lot of head-room will allow the reshape to go much faster.
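
You can see the head-room in the --examine output; it looks something
like this (numbers illustrative, for a 128MB offset):

     Data Offset : 262144 sectors
    Unused Space : before=262056 sectors, after=0 sectors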

>
> When reshaping the array, I am prompted to use an external file, so I 
> don't see that mdadm ever uses the space.

Citation needed.... what version of mdadm, what kernel?  What reshape
command?

>
>
> I tried making some test arrays and got much smaller sizes. A 3x1GB 
> RAID5 array with 64k chunks had a 1MB data offset.
>
>
> If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem with 
> setting the data offset to around 2MB?

Only that it might reduce your options in the future, though probably
not by much.

NeilBrown



* Re: very large data-offset?
  2017-11-30 10:45 ` Wols Lists
@ 2017-12-05  3:26   ` Duane
  0 siblings, 0 replies; 6+ messages in thread
From: Duane @ 2017-12-05  3:26 UTC (permalink / raw)
  To: linux-raid




On 2017-11-30 03:45 AM, Wols Lists wrote:
> On 30/11/17 10:26, Duane wrote:
>> Why is the data offset set so big? I created a 3x4TB RAID5 array and the
>> data offset was 128MB. Chunk size was the default 512kB.
>>
>> I cannot see why such a large offset is used. I would think the data
>> offset need only be at most the chunk size plus the space (1 sector) for
>> the superblock and bitmap.
>>
>> When reshaping the array, I am prompted to use an external file, so I
>> don't see that mdadm ever uses the space.
>>
> Do you mean the manual tells you, or that mdadm refuses to run otherwise?
>
> Certainly with a new array on larger disks, a backup file should be
> totally unnecessary.
>
> Run a reshape of some sort, and see if the offset changes :-)
Reshaping won't run and suggests using a backup file.
I like the way mdadm gives helpful hints rather than failing quietly. :)
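
For the record, it was roughly this (numbers approximate):

    mdadm --grow /dev/md0 --raid-devices=5

which refused to start and suggested re-running with something like
--backup-file=/root/md0.backup.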

I don't have the space right now to grow the array device sizes and
watch for changes in the data offset.
It's going on my todo list for when I've finished reshaping my
filesystem and things have stabilized.
>> I tried making some test arrays and got much smaller sizes. A 3x1GB
>> RAID5 array with 64k chunks had a 1MB data offset.
>>
>>
>> If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem with
>> setting the data offset to around 2MB?
>>
> Dunno. How do you know the bitmap is "just one block"? I haven't dug
> into it so I don't know, but it makes sense to me that as the disks get
> bigger, so does the bitmap.
I misspoke. I wasn't paying much attention to bitmaps: I just saw a
value for the bitmap offset in the header and assumed I had a bitmap.
Further investigation, prompted by your question, showed that I don't
have a bitmap by default.
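
(For anyone checking their own array, I looked with something like:

    mdadm --examine /dev/sdb | grep -i bitmap
    mdadm --detail /dev/md0 | grep -i bitmap

and no internal-bitmap line shows up unless one was actually created.)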
>
> What's the point of fighting the defaults, anyway? Just add
> another/bigger disk if you need more space.
I'm poor; 5, now 6 disks are my budget's limit. :(
>
> Cheers,
> Wol
>




* Re: very large data-offset?
  2017-12-04  0:51 ` NeilBrown
@ 2017-12-05  3:33   ` Duane
  2017-12-05 21:53     ` NeilBrown
  0 siblings, 1 reply; 6+ messages in thread
From: Duane @ 2017-12-05  3:33 UTC (permalink / raw)
  To: linux-raid




On 2017-12-03 05:51 PM, NeilBrown wrote:
> On Thu, Nov 30 2017, Duane wrote:
>
>> Why is the data offset set so big? I created a 3x4TB RAID5 array and the
>> data offset was 128MB. Chunk size was the default 512kB.
> It is less than 0.1% of the device...
>
>> I cannot see why such a large offset is used. I would think the data
>> offset need only be at most the chunk size plus the space (1 sector) for
>> the superblock and bitmap.
> It is insurance.  If you want to change the chunksize later, having a
> lot of head-room will allow the reshape to go much faster.
>
>> When reshaping the array, I am prompted to use an external file, so I
>> don't see that mdadm ever uses the space.
> Citation needed.... what version of mdadm, what kernel?  What reshape
> command?
kernel:  9.64-1-lts
mdadm:  mdadm - v4.0 - 2017-01-09
action:  reduce the number of raid devices
>>
>> I tried making some test arrays and got much smaller sizes. A 3x1GB
>> RAID5 array with 64k chunks had a 1MB data offset.
>>
>>
>> If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem with
>> setting the data offset to around 2MB?
> Only that it might reduce your options in the future, though probably
> not by much.
>
> NeilBrown




* Re: very large data-offset?
  2017-12-05  3:33   ` Duane
@ 2017-12-05 21:53     ` NeilBrown
  0 siblings, 0 replies; 6+ messages in thread
From: NeilBrown @ 2017-12-05 21:53 UTC (permalink / raw)
  To: Duane, linux-raid


On Mon, Dec 04 2017, Duane wrote:

> On 2017-12-03 05:51 PM, NeilBrown wrote:
>> On Thu, Nov 30 2017, Duane wrote:
>>
>>> Why is the data offset set so big? I created a 3x4TB RAID5 array and the
>>> data offset was 128MB. Chunk size was the default 512kB.
>> It is less than 0.1% of the device...
>>
>>> I cannot see why such a large offset is used. I would think the data
>>> offset need only be at most the chunk size plus the space (1 sector) for
>>> the superblock and bitmap.
>> It is insurance.  If you want to change the chunksize later, having a
>> lot of head-room will allow the reshape to go much faster.
>>
>>> When reshaping the array, I am prompted to use an external file, so I
>>> don't see that mdadm ever uses the space.
>> Citation needed.... what version of mdadm, what kernel?  What reshape
>> command?
> kernel:  9.64-1-lts

I don't know what that means. Maybe 4.9.64-1-lts.
That's nice and recent.

> mdadm:  mdadm - v4.0 - 2017-01-09
> action:  reduce the number of raid devices

Ahh.  Reducing the number of devices doesn't use the head-space; it uses
the end-space.  As you reduce the size of the array when doing this,
there is always lots of end-space.  So I'm surprised that it would want
a backup file.
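
For reference, the usual shrink sequence looks something like this
(size is a placeholder; check the numbers carefully before running):

    mdadm --grow /dev/md0 --array-size=<new-smaller-size>
    mdadm --grow /dev/md0 --raid-devices=6

i.e. shrink the array size first, then reduce the device count.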
However without specifics (mdadm -E of devices before the reshape, and
exact command given) I won't be looking into why it might.

Thanks,
NeilBrown


>>>
>>> I tried making some test arrays and got much smaller sizes. A 3x1GB
>>> RAID5 array with 64k chunks had a 1MB data offset.
>>>
>>>
>>> If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem with
>>> setting the data offset to around 2MB?
>> Only that it might reduce your options in the future, though probably
>> not by much.
>>
>> NeilBrown




Thread overview: 6 messages
2017-11-30 10:26 very large data-offset? Duane
2017-11-30 10:45 ` Wols Lists
2017-12-05  3:26   ` Duane
2017-12-04  0:51 ` NeilBrown
2017-12-05  3:33   ` Duane
2017-12-05 21:53     ` NeilBrown
