* Device role question
From: Piergiorgio Sartor @ 2010-02-26 14:23 UTC (permalink / raw)
  To: linux-raid

Hi all,

while randomly checking the components of some RAID-10 arrays
(two disks each) with "mdadm -E /dev/sdXY", I noticed
the following.

There is an entry reported, called "Device Role".

On one array, the components are defined, respectively, as:

Active device 0
Active device 1

On another two arrays, it's a bit different.

Active device 0
spare

Why is it "spare" (all are RAID-10 f2)?

Does it make any difference which role a device has,
in this type of RAID?

On the other hand, "mdadm -D /dev/mdX" does not seem
to give any hint about the different roles.
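
Just to be concrete, the kind of check I mean is, e.g.
(device names only as examples):

$> mdadm -E /dev/sda2 | grep "Device Role"
$> mdadm -D /dev/md1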

Thanks,

bye,

-- 

piergiorgio

* Re: Device role question
From: Michael Evans @ 2010-02-27  5:56 UTC (permalink / raw)
  To: Piergiorgio Sartor; +Cc: linux-raid

On Fri, Feb 26, 2010 at 6:23 AM, Piergiorgio Sartor
<piergiorgio.sartor@nexgo.de> wrote:
> Hi all,
>
> while randomly checking the components of some RAID-10 arrays
> (two disks each) with "mdadm -E /dev/sdXY", I noticed
> the following.
>
> There is an entry reported, called "Device Role".
>
> On one array, the components are defined, respectively, as:
>
> Active device 0
> Active device 1
>
> On another two arrays, it's a bit different.
>
> Active device 0
> spare
>
> Why is it "spare" (all are RAID-10 f2)?
>
> Does it make any difference which role a device has,
> in this type of RAID?
>
> On the other hand, "mdadm -D /dev/mdX" does not seem
> to give any hint about the different roles.

Spare should mean that it is not currently a synced member of the
array; in other words, a hot spare.  As I recall, raid10 cannot
currently be grown (or /may/ only be grown with VERY recent
tools+kernels); did you maybe create a single-device raid10 and try
to grow it?

* Re: Device role question
From: Piergiorgio Sartor @ 2010-02-27  8:08 UTC (permalink / raw)
  To: Michael Evans; +Cc: Piergiorgio Sartor, linux-raid

Hi,

> Spare should mean that it is not currently a synced member of the
> array; in other words, a hot spare.  As I recall, raid10 cannot
> currently be grown (or /may/ only be grown with VERY recent
> tools+kernels); did you maybe create a single-device raid10 and try
> to grow it?

uhm, well, not really, but almost.

If I remember correctly, at least one of the arrays
with "spare" was created with a missing disk, which
was added later.
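
I.e., if I recall correctly, something along these lines
(device names from memory):

$> mdadm --create /dev/md1 --metadata=1.1 --level=10 --layout=f2 \
      --raid-devices=2 /dev/sdb2 missing
$> mdadm /dev/md1 --add /dev/sda2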

BTW, I also noticed another array, still RAID-10,
where both disks have the "spare" role...

Thanks,

bye,

-- 

piergiorgio

* Re: Device role question
From: Michael Evans @ 2010-02-27  8:55 UTC (permalink / raw)
  To: Piergiorgio Sartor; +Cc: linux-raid

On Sat, Feb 27, 2010 at 12:08 AM, Piergiorgio Sartor
<piergiorgio.sartor@nexgo.de> wrote:
> Hi,
>
>> Spare should mean that it is not currently a synced member of the
>> array; in other words, a hot spare.  As I recall, raid10 cannot
>> currently be grown (or /may/ only be grown with VERY recent
>> tools+kernels); did you maybe create a single-device raid10 and try
>> to grow it?
>
> uhm, well, not really, but almost.
>
> If I remember correctly, at least one of the arrays
> with "spare" was created with a missing disk, which
> was added later.
>
> BTW, I also noticed another array, still RAID-10,
> where both disks have the "spare" role...

Ok, please run this for each disk in the array:

mdadm --examine /dev/(DEVICE)

The output would be most readable if you did each array's devices in
order, and you can list them all in one command (--examine takes
multiple inputs).
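
For example, for a two-disk array:

$> mdadm --examine /dev/sda2 /dev/sdb2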

If you still think the situation isn't as I described above, post the results.

* Re: Device role question
From: Piergiorgio Sartor @ 2010-02-27  9:10 UTC (permalink / raw)
  To: Michael Evans; +Cc: Piergiorgio Sartor, linux-raid

Hi,

> Ok, please run this for each disk in the array:
> 
> mdadm --examine /dev/(DEVICE)
> 
> The output would be most readable if you did each array's devices in
> order, and you can list them all in one command (--examine takes
> multiple inputs).
> 
> If you still think the situation isn't as I described above, post the results.

Well, here it is:

$> mdadm -E /dev/sd[ab]2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : 54db81a7:b47e9253:7291055e:4953c163
           Name : lvm
  Creation Time : Fri Feb  6 20:17:13 2009
     Raid Level : raid10
   Raid Devices : 2

 Avail Dev Size : 624928236 (297.99 GiB 319.96 GB)
     Array Size : 624928000 (297.99 GiB 319.96 GB)
  Used Dev Size : 624928000 (297.99 GiB 319.96 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 8f6cd2c4:0efc8286:09ec91c6:bc5014bf

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Feb 27 10:08:22 2010
       Checksum : 1703ded0 - correct
         Events : 161646

         Layout : far=2
     Chunk Size : 64K

   Device Role : spare
   Array State : AA ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : 54db81a7:b47e9253:7291055e:4953c163
           Name : lvm
  Creation Time : Fri Feb  6 20:17:13 2009
     Raid Level : raid10
   Raid Devices : 2

 Avail Dev Size : 624928236 (297.99 GiB 319.96 GB)
     Array Size : 624928000 (297.99 GiB 319.96 GB)
  Used Dev Size : 624928000 (297.99 GiB 319.96 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 6e2763b5:9415b181:e41a9964:b0c21ca6

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Feb 27 10:08:22 2010
       Checksum : 87d25401 - correct
         Events : 161646

         Layout : far=2
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)

And the details too:

$> mdadm -D /dev/md1
/dev/md1:
        Version : 1.1
  Creation Time : Fri Feb  6 20:17:13 2009
     Raid Level : raid10
     Array Size : 312464000 (297.99 GiB 319.96 GB)
  Used Dev Size : 312464000 (297.99 GiB 319.96 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Feb 27 10:09:24 2010
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : far=2
     Chunk Size : 64K

           Name : lvm
           UUID : 54db81a7:b47e9253:7291055e:4953c163
         Events : 161646

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       2       8        2        1      active sync   /dev/sda2

bye,

-- 

piergiorgio

* Re: Device role question
From: Michael Evans @ 2010-02-28  3:34 UTC (permalink / raw)
  To: Piergiorgio Sartor; +Cc: linux-raid

On Sat, Feb 27, 2010 at 1:10 AM, Piergiorgio Sartor
<piergiorgio.sartor@nexgo.de> wrote:
> [full quote of previous message, including the mdadm -E and -D
> output, snipped]

I've checked my arrays; my only RAID-10 array has a single spare
(hot spare) in the set, along with several other members.  All
current members storing data are listed as active.

What's confusing is that /proc/mdstat lists it as an active member
(in sync with the data) but the superblock on the device does not
match.  Maybe you can stop/restart the array?
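
E.g. something like (untested, with the array unmounted):

$> mdadm --stop /dev/md1
$> mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2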

* Re: Device role question
From: Neil Brown @ 2010-02-28  4:41 UTC (permalink / raw)
  To: Piergiorgio Sartor; +Cc: Michael Evans, linux-raid

On Sat, 27 Feb 2010 10:10:27 +0100
Piergiorgio Sartor <piergiorgio.sartor@nexgo.de> wrote:

> [full quote of previous message, including the mdadm -E and -D
> output, snipped]


Thanks for all the details.  They help.

It looks like a bug in mdadm which was fixed in 3.1.1.  It is only present in
3.0 and 3.0.x (I don't think you said what version of mdadm you are using).
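
You can check the installed version with:

$> mdadm --version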

NeilBrown

* Re: Device role question
From: Piergiorgio Sartor @ 2010-02-28 10:35 UTC (permalink / raw)
  To: Neil Brown; +Cc: Piergiorgio Sartor, Michael Evans, linux-raid

Hi,

> It looks like a bug in mdadm which was fixed in 3.1.1.  It is only present in
> 3.0 and 3.0.x (I don't think you said what version of mdadm you are using).

sorry for that, it is Fedora 12, with mdadm 3.0.3
(latest available in updates).

Thanks for the clarification,

bye,

-- 

piergiorgio

* Re: Device role question
From: Piergiorgio Sartor @ 2010-02-28 10:38 UTC (permalink / raw)
  To: Michael Evans; +Cc: Piergiorgio Sartor, linux-raid

Hi,

> What's confusing is that /proc/mdstat lists it as an active member
> (in sync with the data) but the superblock on the device does not
> match.  Maybe you can stop/restart the array?

thanks for the support.
I tried stop/start, incremental assembly (in different
order), and fail/remove/add of both components (one at a
time, of course), but this information seems to be stable.

The only thing I did not try was to fail/remove one HDD,
zero the superblock, and add it again.
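
That would be something like (untested, and it would
trigger a full rebuild of that disk):

$> mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
$> mdadm --zero-superblock /dev/sda2
$> mdadm /dev/md1 --add /dev/sda2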

Anyway, as Neil explained, this seems to be a bug in mdadm,
so I hope it will be fixed in the next Fedora update.

Thanks again,

bye,

-- 

piergiorgio
