* RAID 5: weird size results after Grow
@ 2007-10-13  8:11 Marko Berg
  2007-10-13 13:08 ` Bill Davidsen
  0 siblings, 1 reply; 9+ messages in thread
From: Marko Berg @ 2007-10-13  8:11 UTC (permalink / raw)
  To: linux-raid

Hi folks,

I added a fourth drive to a RAID 5 array. After some complications 
related to adding a new HD controller at the same time, and thus 
changing some device names, I re-created the array and got it working 
(in the sense "nothing degraded"). But the size results are weird. Each 
component partition is 320 GB; does anyone have an explanation for the 
"Used Dev Size" field value below? The 960 GB total size is as it should 
be, but in practice Linux reports the array as having only 625,019,608 
blocks. How can this be, even though the array should be clean with 4 
active devices?

$  mdadm -D /dev/md0
/dev/md0:
        Version : 01.02.03
  Creation Time : Sat Oct 13 01:25:26 2007
     Raid Level : raid5
     Array Size : 937705344 (894.27 GiB 960.21 GB)
  Used Dev Size : 625136896 (298.09 GiB 320.07 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Sat Oct 13 05:11:38 2007
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 64K
 
           Name : 0
           UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
         Events : 2
 
    Number   Major   Minor   RaidDevice State
       0     253        2        0      active sync   /dev/VolGroup01/LogVol02
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       17        3      active sync   /dev/sdb1


The mdadm -E <partition> results for all devices look like this one, 
apart from the slot positions:

$ mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
           Name : 0
  Creation Time : Sat Oct 13 01:25:26 2007
     Raid Level : raid5
   Raid Devices : 4
 
  Used Dev Size : 625137010 (298.09 GiB 320.07 GB)
     Array Size : 1875410688 (894.27 GiB 960.21 GB)
      Used Size : 625136896 (298.09 GiB 320.07 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 9b2037fb:231a8ebf:1aaa5577:140795cc
 
    Update Time : Sat Oct 13 10:56:02 2007
       Checksum : c729f5a1 - correct
         Events : 2
 
         Layout : left-symmetric
     Chunk Size : 64K
 
    Array Slot : 1 (0, 1, 2, 3)
   Array State : uUuu


In particular, "Used Dev Size" and "Used Size" report an amount twice 
the size of the partition (and device). The array size here is also 
twice the actual size, even though the values in parentheses are correct.

Finally, mdstat shows the block count as it should be.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[3] sdd1[2] sdc1[1] dm-2[0]
      937705344 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     
unused devices: <none>


Any suggestions on how to fix this, or what to investigate next, would 
be appreciated!

-- 
Marko


* Re: RAID 5: weird size results after Grow
  2007-10-13  8:11 RAID 5: weird size results after Grow Marko Berg
@ 2007-10-13 13:08 ` Bill Davidsen
  2007-10-13 16:19   ` Marko Berg
  0 siblings, 1 reply; 9+ messages in thread
From: Bill Davidsen @ 2007-10-13 13:08 UTC (permalink / raw)
  To: Marko Berg; +Cc: linux-raid

Marko Berg wrote:
> Hi folks,
>
> I added a fourth drive to a RAID 5 array. After some complications 
> related to adding a new HD controller at the same time, and thus 
> changing some device names, I re-created the array and got it working 
> (in the sense "nothing degraded"). But size results are weird. Each 
> component partition is 320 G, does anyone have an explanation for the 
> "Used Dev Size" field value below? The 960 G total size is as it 
> should be, but in practice Linux reports the array only having 
> 625,019,608 blocks.

I don't see that number below; what command reported this?

> How can this be, even though the array should be clean with 4 active 
> devices?
>
> $  mdadm -D /dev/md0
> /dev/md0:
>        Version : 01.02.03
>  Creation Time : Sat Oct 13 01:25:26 2007
>     Raid Level : raid5
>     Array Size : 937705344 (894.27 GiB 960.21 GB)
>  Used Dev Size : 625136896 (298.09 GiB 320.07 GB)
>   Raid Devices : 4
>  Total Devices : 4
> Preferred Minor : 0
>    Persistence : Superblock is persistent
>
>    Update Time : Sat Oct 13 05:11:38 2007
>          State : clean
> Active Devices : 4
> Working Devices : 4
> Failed Devices : 0
>  Spare Devices : 0
>
>         Layout : left-symmetric
>     Chunk Size : 64K
>
>           Name : 0
>           UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
>         Events : 2
>
>    Number   Major   Minor   RaidDevice State
>       0     253        2        0      active sync   
> /dev/VolGroup01/LogVol02
>       1       8       33        1      active sync   /dev/sdc1
>       2       8       49        2      active sync   /dev/sdd1
>       3       8       17        3      active sync   /dev/sdb1
>
>
> Results for mdadm -E <partition> on all devices appear like this one, 
> with positions changed:
>
> $ mdadm -E /dev/sdc1
> /dev/sdc1:
>          Magic : a92b4efc
>        Version : 1.2
>    Feature Map : 0x0
>     Array UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
>           Name : 0
>  Creation Time : Sat Oct 13 01:25:26 2007
>     Raid Level : raid5
>   Raid Devices : 4
>
>  Used Dev Size : 625137010 (298.09 GiB 320.07 GB)
>     Array Size : 1875410688 (894.27 GiB 960.21 GB)
>      Used Size : 625136896 (298.09 GiB 320.07 GB)
>    Data Offset : 272 sectors
>   Super Offset : 8 sectors
>          State : clean
>    Device UUID : 9b2037fb:231a8ebf:1aaa5577:140795cc
>
>    Update Time : Sat Oct 13 10:56:02 2007
>       Checksum : c729f5a1 - correct
>         Events : 2
>
>         Layout : left-symmetric
>     Chunk Size : 64K
>
>    Array Slot : 1 (0, 1, 2, 3)
>   Array State : uUuu
>
>
> Particularly, "Used Dev Size" and "Used Size" report an amount twice 
> the size of the partition (and device). Array size is here twice the 
> actual size, even though shown correctly within parentheses.

Sectors are 512 bytes.
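
In other words, the "Used Dev Size" above is counted in 512-byte 
sectors; plugging the number into the shell bears out the value shown 
in parentheses:

$ echo $(( 625136896 * 512 ))
320070090752                    (~320.07 GB, matching the parentheses)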
>
> Finally, mdstat shows the block count as it should be.
>
> $ cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid5 sdb1[3] sdd1[2] sdc1[1] dm-2[0]
>      937705344 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] 
> [UUUU]
>     unused devices: <none>
>
>
> Any suggestions on how to fix this, or what to investigate next, would 
> be appreciated!
>
I'm not sure what you're trying to "fix" here; everything you posted 
looks sane.

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: RAID 5: weird size results after Grow
  2007-10-13 13:08 ` Bill Davidsen
@ 2007-10-13 16:19   ` Marko Berg
  2007-10-13 17:17     ` Corey Hickey
                       ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Marko Berg @ 2007-10-13 16:19 UTC (permalink / raw)
  To: davidsen; +Cc: linux-raid

Bill Davidsen wrote:
> Marko Berg wrote:
>> I added a fourth drive to a RAID 5 array. After some complications 
>> related to adding a new HD controller at the same time, and thus 
>> changing some device names, I re-created the array and got it working 
>> (in the sense "nothing degraded"). But size results are weird. Each 
>> component partition is 320 G, does anyone have an explanation for the 
>> "Used Dev Size" field value below? The 960 G total size is as it 
>> should be, but in practice Linux reports the array only having 
>> 625,019,608 blocks.
>
> I don't see that number below, what command reported this?

For instance df:

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0             625019608 358223356 235539408  61% /usr/pub

>> How can this be, even though the array should be clean with 4 active 
>> devices?
>>
>> $  mdadm -D /dev/md0
>> /dev/md0:
>>        Version : 01.02.03
>>  Creation Time : Sat Oct 13 01:25:26 2007
>>     Raid Level : raid5
>>     Array Size : 937705344 (894.27 GiB 960.21 GB)
>>  Used Dev Size : 625136896 (298.09 GiB 320.07 GB)
>>   Raid Devices : 4
>>  Total Devices : 4
>> Preferred Minor : 0
>>    Persistence : Superblock is persistent
>>
>>    Update Time : Sat Oct 13 05:11:38 2007
>>          State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 0
>>  Spare Devices : 0
>>
>>         Layout : left-symmetric
>>     Chunk Size : 64K
>>
>>           Name : 0
>>           UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
>>         Events : 2
>>
>>    Number   Major   Minor   RaidDevice State
>>       0     253        2        0      active sync   
>> /dev/VolGroup01/LogVol02
>>       1       8       33        1      active sync   /dev/sdc1
>>       2       8       49        2      active sync   /dev/sdd1
>>       3       8       17        3      active sync   /dev/sdb1
>>
>>
>> Results for mdadm -E <partition> on all devices appear like this one, 
>> with positions changed:
>>
>> $ mdadm -E /dev/sdc1
>> /dev/sdc1:
>>          Magic : a92b4efc
>>        Version : 1.2
>>    Feature Map : 0x0
>>     Array UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
>>           Name : 0
>>  Creation Time : Sat Oct 13 01:25:26 2007
>>     Raid Level : raid5
>>   Raid Devices : 4
>>
>>  Used Dev Size : 625137010 (298.09 GiB 320.07 GB)
>>     Array Size : 1875410688 (894.27 GiB 960.21 GB)
>>      Used Size : 625136896 (298.09 GiB 320.07 GB)
>>    Data Offset : 272 sectors
>>   Super Offset : 8 sectors
>>          State : clean
>>    Device UUID : 9b2037fb:231a8ebf:1aaa5577:140795cc
>>
>>    Update Time : Sat Oct 13 10:56:02 2007
>>       Checksum : c729f5a1 - correct
>>         Events : 2
>>
>>         Layout : left-symmetric
>>     Chunk Size : 64K
>>
>>    Array Slot : 1 (0, 1, 2, 3)
>>   Array State : uUuu
>>
>>
>> Particularly, "Used Dev Size" and "Used Size" report an amount twice 
>> the size of the partition (and device). Array size is here twice the 
>> actual size, even though shown correctly within parentheses.
>
> Sectors are 512 bytes.

So "Used Dev Size" above uses sector size, while "Array Size" uses 1k 
blocks? I'm pretty sure, though, that previously "Used Dev Size" was in 
1k blocks too. That's also what most of the examples in the net seem to 
have.
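
The numbers do line up that way, at least: reading -D's "Array Size" as 
1k blocks and -E's "Array Size" as 512-byte sectors gives the same byte 
count:

$ echo $(( 937705344 * 1024 ))
960210272256
$ echo $(( 1875410688 * 512 ))
960210272256                    (both ~960.21 GB)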

>> Finally, mdstat shows the block count as it should be.
>>
>> $ cat /proc/mdstat
>> Personalities : [raid6] [raid5] [raid4]
>> md0 : active raid5 sdb1[3] sdd1[2] sdc1[1] dm-2[0]
>>      937705344 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] 
>> [UUUU]
>>     unused devices: <none>
>>
>>
>> Any suggestions on how to fix this, or what to investigate next, 
>> would be appreciated!
>>
> I'm not sure what you're trying to "fix" here, everything you posted 
> looks sane.

I'm trying to find the missing 300 GB that, according to df, are not 
available. With four 300 GB devices in RAID 5 I ought to have a 900 GB 
array, yet only 600 GB are available. Adding the fourth device didn't 
visibly increase the capacity of the array. For example, fdisk reports 
the array size as 900 GB, but df still claims 600 GB of capacity. Any 
clues why?

-- 
Marko


* Re: RAID 5: weird size results after Grow
  2007-10-13 16:19   ` Marko Berg
@ 2007-10-13 17:17     ` Corey Hickey
  2007-10-13 17:32       ` Marko Berg
  2007-10-13 17:41     ` Justin Piszcz
  2007-10-14  5:05     ` Bill Davidsen
  2 siblings, 1 reply; 9+ messages in thread
From: Corey Hickey @ 2007-10-13 17:17 UTC (permalink / raw)
  To: Marko Berg; +Cc: davidsen, linux-raid

Marko Berg wrote:
> Bill Davidsen wrote:
>> Marko Berg wrote:
>>> Any suggestions on how to fix this, or what to investigate next, 
>>> would be appreciated!
>>>
>> I'm not sure what you're trying to "fix" here, everything you posted 
>> looks sane.
> 
> I'm trying to find the missing 300 GB that, as df reports, are not 
> available. I ought to have a 900 GB array, consisting of four 300 GB 
> devices, while only 600 GB are available. Adding the fourth device 
> didn't increase the capacity of the array (visible, at least). E.g. 
> fdisk reports the array size to be 900 G, but df still claims 600 
> capacity. Any clues why?

df reports the size of the filesystem, which is still about 600 GB -- 
the filesystem doesn't resize automatically when the size of the 
underlying device changes.

You'll need to use resize2fs, resize_reiserfs, or whatever other tool is 
appropriate for your type of filesystem.
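
For ext2/ext3 (a guess -- the filesystem type on /dev/md0 hasn't been 
mentioned) an offline grow would look roughly like this:

$ umount /usr/pub
$ e2fsck -f /dev/md0      # resize2fs wants a freshly checked filesystem
$ resize2fs /dev/md0      # no size argument means grow to fill the device
$ mount /dev/md0 /usr/pub
$ df -h /usr/pub          # should now show roughly 900 GB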

-Corey


* Re: RAID 5: weird size results after Grow
  2007-10-13 17:17     ` Corey Hickey
@ 2007-10-13 17:32       ` Marko Berg
  2007-10-13 17:42         ` Justin Piszcz
  0 siblings, 1 reply; 9+ messages in thread
From: Marko Berg @ 2007-10-13 17:32 UTC (permalink / raw)
  To: Corey Hickey; +Cc: linux-raid

Corey Hickey wrote:
> Marko Berg wrote:
>> Bill Davidsen wrote:
>>> Marko Berg wrote:
>>>> Any suggestions on how to fix this, or what to investigate next, 
>>>> would be appreciated!
>>>>
>>> I'm not sure what you're trying to "fix" here, everything you posted 
>>> looks sane.
>>
>> I'm trying to find the missing 300 GB that, as df reports, are not 
>> available. I ought to have a 900 GB array, consisting of four 300 GB 
>> devices, while only 600 GB are available. Adding the fourth device 
>> didn't increase the capacity of the array (visible, at least). E.g. 
>> fdisk reports the array size to be 900 G, but df still claims 600 
>> capacity. Any clues why?
>
> df reports the size of the filesystem, which is still about 600GB--the 
> filesystem doesn't resize automatically when the size of the 
> underlying device changes.
>
> You'll need to use resize2fs, resize_reiserfs, or whatever other tool 
> is appropriate for your type of filesystem.
>
> -Corey 

Right, so this isn't one of my sharpest days... Thanks a bunch, Corey!

-- 
Marko


* Re: RAID 5: weird size results after Grow
  2007-10-13 16:19   ` Marko Berg
  2007-10-13 17:17     ` Corey Hickey
@ 2007-10-13 17:41     ` Justin Piszcz
  2007-10-14  5:05     ` Bill Davidsen
  2 siblings, 0 replies; 9+ messages in thread
From: Justin Piszcz @ 2007-10-13 17:41 UTC (permalink / raw)
  To: Marko Berg; +Cc: davidsen, linux-raid



On Sat, 13 Oct 2007, Marko Berg wrote:

> Bill Davidsen wrote:
>> Marko Berg wrote:
>>> I added a fourth drive to a RAID 5 array. After some complications related 
>>> to adding a new HD controller at the same time, and thus changing some 
>>> device names, I re-created the array and got it working (in the sense 
>>> "nothing degraded"). But size results are weird. Each component partition 
>>> is 320 G, does anyone have an explanation for the "Used Dev Size" field 
>>> value below? The 960 G total size is as it should be, but in practice 
>>> Linux reports the array only having 625,019,608 blocks.
>> 
>> I don't see that number below, what command reported this?
>
> For instance df:
>
> $ df
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md0             625019608 358223356 235539408  61% /usr/pub
>
>>> How can this be, even though the array should be clean with 4 active 
>>> devices?
>>> 
>>> $  mdadm -D /dev/md0
>>> /dev/md0:
>>>        Version : 01.02.03
>>>  Creation Time : Sat Oct 13 01:25:26 2007
>>>     Raid Level : raid5
>>>     Array Size : 937705344 (894.27 GiB 960.21 GB)
>>>  Used Dev Size : 625136896 (298.09 GiB 320.07 GB)
>>>   Raid Devices : 4
>>>  Total Devices : 4
>>> Preferred Minor : 0
>>>    Persistence : Superblock is persistent
>>>
>>>    Update Time : Sat Oct 13 05:11:38 2007
>>>          State : clean
>>> Active Devices : 4
>>> Working Devices : 4
>>> Failed Devices : 0
>>>  Spare Devices : 0
>>>
>>>         Layout : left-symmetric
>>>     Chunk Size : 64K
>>>
>>>           Name : 0
>>>           UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
>>>         Events : 2
>>>
>>>    Number   Major   Minor   RaidDevice State
>>>       0     253        2        0      active sync 
>>> /dev/VolGroup01/LogVol02
>>>       1       8       33        1      active sync   /dev/sdc1
>>>       2       8       49        2      active sync   /dev/sdd1
>>>       3       8       17        3      active sync   /dev/sdb1
>>> 
>>> 
>>> Results for mdadm -E <partition> on all devices appear like this one, with 
>>> positions changed:
>>> 
>>> $ mdadm -E /dev/sdc1
>>> /dev/sdc1:
>>>          Magic : a92b4efc
>>>        Version : 1.2
>>>    Feature Map : 0x0
>>>     Array UUID : 9bf903f8:7fc9eec1:2ff25011:37e9607b
>>>           Name : 0
>>>  Creation Time : Sat Oct 13 01:25:26 2007
>>>     Raid Level : raid5
>>>   Raid Devices : 4
>>>
>>>  Used Dev Size : 625137010 (298.09 GiB 320.07 GB)
>>>     Array Size : 1875410688 (894.27 GiB 960.21 GB)
>>>      Used Size : 625136896 (298.09 GiB 320.07 GB)
>>>    Data Offset : 272 sectors
>>>   Super Offset : 8 sectors
>>>          State : clean
>>>    Device UUID : 9b2037fb:231a8ebf:1aaa5577:140795cc
>>>
>>>    Update Time : Sat Oct 13 10:56:02 2007
>>>       Checksum : c729f5a1 - correct
>>>         Events : 2
>>>
>>>         Layout : left-symmetric
>>>     Chunk Size : 64K
>>>
>>>    Array Slot : 1 (0, 1, 2, 3)
>>>   Array State : uUuu
>>> 
>>> 
>>> Particularly, "Used Dev Size" and "Used Size" report an amount twice the 
>>> size of the partition (and device). Array size is here twice the actual 
>>> size, even though shown correctly within parentheses.
>> 
>> Sectors are 512 bytes.
>
> So "Used Dev Size" above uses sector size, while "Array Size" uses 1k blocks? 
> I'm pretty sure, though, that previously "Used Dev Size" was in 1k blocks 
> too. That's also what most of the examples in the net seem to have.
>
>>> Finally, mdstat shows the block count as it should be.
>>> 
>>> $ cat /proc/mdstat
>>> Personalities : [raid6] [raid5] [raid4]
>>> md0 : active raid5 sdb1[3] sdd1[2] sdc1[1] dm-2[0]
>>>      937705344 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] 
>>> [UUUU]
>>>     unused devices: <none>
>>> 
>>> 
>>> Any suggestions on how to fix this, or what to investigate next, would be 
>>> appreciated!
>>> 
>> I'm not sure what you're trying to "fix" here, everything you posted looks 
>> sane.
>
> I'm trying to find the missing 300 GB that, as df reports, are not available. 
> I ought to have a 900 GB array, consisting of four 300 GB devices, while only 
> 600 GB are available. Adding the fourth device didn't increase the capacity 
> of the array (visible, at least). E.g. fdisk reports the array size to be 900 
> G, but df still claims 600 capacity. Any clues why?
>
> -- 
> Marko

You have to expand the filesystem.
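
If it's ext2/ext3 that means resize2fs; if it happens to be XFS, the 
equivalent is xfs_growfs, run against the mount point while mounted:

$ xfs_growfs /usr/pub     # grow the mounted XFS filesystem to fill /dev/md0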



* Re: RAID 5: weird size results after Grow
  2007-10-13 17:32       ` Marko Berg
@ 2007-10-13 17:42         ` Justin Piszcz
  2007-10-13 17:59           ` Corey Hickey
  0 siblings, 1 reply; 9+ messages in thread
From: Justin Piszcz @ 2007-10-13 17:42 UTC (permalink / raw)
  To: Marko Berg; +Cc: Corey Hickey, linux-raid



On Sat, 13 Oct 2007, Marko Berg wrote:

> Corey Hickey wrote:
>> Marko Berg wrote:
>>> Bill Davidsen wrote:
>>>> Marko Berg wrote:
>>>>> Any suggestions on how to fix this, or what to investigate next, would 
>>>>> be appreciated!
>>>>> 
>>>> I'm not sure what you're trying to "fix" here, everything you posted 
>>>> looks sane.
>>> 
>>> I'm trying to find the missing 300 GB that, as df reports, are not 
>>> available. I ought to have a 900 GB array, consisting of four 300 GB 
>>> devices, while only 600 GB are available. Adding the fourth device didn't 
>>> increase the capacity of the array (visible, at least). E.g. fdisk reports 
>>> the array size to be 900 G, but df still claims 600 capacity. Any clues 
>>> why?
>> 
>> df reports the size of the filesystem, which is still about 600GB--the 
>> filesystem doesn't resize automatically when the size of the underlying 
>> device changes.
>> 
>> You'll need to use resize2fs, resize_reiserfs, or whatever other tool is 
>> appropriate for your type of filesystem.
>> 
>> -Corey 
>
> Right, so this isn't one of my sharpest days... Thanks a bunch, Corey!
>
> -- 
> Marko

Ah, already answered.

Justin.


* Re: RAID 5: weird size results after Grow
  2007-10-13 17:42         ` Justin Piszcz
@ 2007-10-13 17:59           ` Corey Hickey
  0 siblings, 0 replies; 9+ messages in thread
From: Corey Hickey @ 2007-10-13 17:59 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: Marko Berg, linux-raid

Justin Piszcz wrote:
> 
> On Sat, 13 Oct 2007, Marko Berg wrote:
> 
>> Corey Hickey wrote:
>>> Marko Berg wrote:
>>>> Bill Davidsen wrote:
>>>>> Marko Berg wrote:
>>>>>> Any suggestions on how to fix this, or what to investigate next, would 
>>>>>> be appreciated!
>>>>>>
>>>>> I'm not sure what you're trying to "fix" here, everything you posted 
>>>>> looks sane.
>>>> I'm trying to find the missing 300 GB that, as df reports, are not 
>>>> available. I ought to have a 900 GB array, consisting of four 300 GB 
>>>> devices, while only 600 GB are available. Adding the fourth device didn't 
>>>> increase the capacity of the array (visible, at least). E.g. fdisk reports 
>>>> the array size to be 900 G, but df still claims 600 capacity. Any clues 
>>>> why?
>>> df reports the size of the filesystem, which is still about 600GB--the 
>>> filesystem doesn't resize automatically when the size of the underlying 
>>> device changes.
>>>
>>> You'll need to use resize2fs, resize_reiserfs, or whatever other tool is 
>>> appropriate for your type of filesystem.
>>>
>>> -Corey 
>> Right, so this isn't one of my sharpest days... Thanks a bunch, Corey!

No problem.

> Ah, already answered.

vger.kernel.org greylisted my smtp server, so it took my message a while 
to get to the list.

-Corey


* Re: RAID 5: weird size results after Grow
  2007-10-13 16:19   ` Marko Berg
  2007-10-13 17:17     ` Corey Hickey
  2007-10-13 17:41     ` Justin Piszcz
@ 2007-10-14  5:05     ` Bill Davidsen
  2 siblings, 0 replies; 9+ messages in thread
From: Bill Davidsen @ 2007-10-14  5:05 UTC (permalink / raw)
  To: Marko Berg; +Cc: linux-raid

Marko Berg wrote:
> Bill Davidsen wrote:
>> Marko Berg wrote:
>>> I added a fourth drive to a RAID 5 array. After some complications 
>>> related to adding a new HD controller at the same time, and thus 
>>> changing some device names, I re-created the array and got it 
>>> working (in the sense "nothing degraded"). But size results are 
>>> weird. Each component partition is 320 G, does anyone have an 
>>> explanation for the "Used Dev Size" field value below? The 960 G 
>>> total size is as it should be, but in practice Linux reports the 
>>> array only having 625,019,608 blocks.
>>
>> I don't see that number below, what command reported this?
>
> For instance df:
>
> $ df
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md0             625019608 358223356 235539408  61% /usr/pub
>
>>> How can this be, even though the array should be clean with 4 active 
>>> devices?

df reports the size of the filesystem; mdadm reports the size of the array.
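
A quick way to see the two side by side (assuming the array is still 
mounted on /usr/pub):

$ blockdev --getsize64 /dev/md0   # size of the md array, in bytes
$ df -B1 /usr/pub                 # size of the filesystem on it, in bytes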

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



Thread overview: 9 messages
2007-10-13  8:11 RAID 5: weird size results after Grow Marko Berg
2007-10-13 13:08 ` Bill Davidsen
2007-10-13 16:19   ` Marko Berg
2007-10-13 17:17     ` Corey Hickey
2007-10-13 17:32       ` Marko Berg
2007-10-13 17:42         ` Justin Piszcz
2007-10-13 17:59           ` Corey Hickey
2007-10-13 17:41     ` Justin Piszcz
2007-10-14  5:05     ` Bill Davidsen
