* strange status raid 5
@ 2014-03-31 14:08 bobzer
  2014-03-31 14:28 ` Mikael Abrahamsson
  2014-04-01 22:41 ` NeilBrown
  0 siblings, 2 replies; 5+ messages in thread
From: bobzer @ 2014-03-31 14:08 UTC (permalink / raw)
  To: linux-raid

Hi,

My RAID 5 is in a strange state. /proc/mdstat tells me it is degraded,
but when I examine the disk that is no longer in the array, it tells me
everything is fine... I'm lost.

#cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3] sdd1[1]
     3907021568 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/2] [UU_]

unused devices: <none>


So my third disk is unused. I checked with:

#mdadm -D /dev/md0
/dev/md0:
       Version : 1.2
 Creation Time : Sun Mar  4 22:49:14 2012
    Raid Level : raid5
    Array Size : 3907021568 (3726.03 GiB 4000.79 GB)
 Used Dev Size : 1953510784 (1863.01 GiB 2000.40 GB)
  Raid Devices : 3
 Total Devices : 2
   Persistence : Superblock is persistent

   Update Time : Sun Mar 30 23:01:35 2014
         State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 128K
          Name : debian:0
          UUID : bf3c605b:9699aa55:d45119a2:7ba58d56
        Events : 255801

   Number   Major   Minor   RaidDevice State
      3       8       33        0      active sync   /dev/sdc1
      1       8       49        1      active sync   /dev/sdd1
      2       0        0        2      removed


After looking at what the disk itself says, I got confused:


#mdadm --examine /dev/sdb1
/dev/sdb1:
         Magic : a92b4efc
       Version : 1.2
   Feature Map : 0x0
    Array UUID : bf3c605b:9699aa55:d45119a2:7ba58d56
          Name : debian:0
 Creation Time : Sun Mar  4 22:49:14 2012
    Raid Level : raid5
  Raid Devices : 3

Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB)
    Array Size : 7814043136 (3726.03 GiB 4000.79 GB)
 Used Dev Size : 3907021568 (1863.01 GiB 2000.40 GB)
   Data Offset : 2048 sectors
  Super Offset : 8 sectors
         State : clean
   Device UUID : f9059dfb:74af1ab7:bc1465b1:e2ff30ba

   Update Time : Sun Jan  5 04:11:41 2014
 Bad Block Log : 512 entries available at offset 2032 sectors
      Checksum : 7df1fefc - correct
        Events : 436
        Layout : left-symmetric
    Chunk Size : 128K

  Device Role : Active device 2
  Array State : AAA ('A' == active, '.' == missing)


So I tried to re-add the disk, which didn't work. Then I tried to stop
the array and assemble it again, but that didn't work either:

#mdadm --stop /dev/md0
#mdadm --assemble --force /dev/md0 /dev/sd[bcd]1
mdadm: /dev/md0 has been started with 2 drives (out of 3).


Can you help me, guys?


Thanks in advance.

PS: I've got mdadm 3.3-devel; I would update it but don't know how to ...

* Re: strange status raid 5
  2014-03-31 14:08 strange status raid 5 bobzer
@ 2014-03-31 14:28 ` Mikael Abrahamsson
  2014-03-31 14:35   ` bobzer
  2014-04-01 22:41 ` NeilBrown
  1 sibling, 1 reply; 5+ messages in thread
From: Mikael Abrahamsson @ 2014-03-31 14:28 UTC (permalink / raw)
  To: bobzer; +Cc: linux-raid

On Mon, 31 Mar 2014, bobzer wrote:

> So I tried to re-add the disk, which didn't work.

"Doesn't work" isn't a helpful problem description. Please provide output 
from dmesg when you do this, error message (if any) from CLI, what command 
you tried with, etc.
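
For example, run the command again and then capture the tail of the
kernel log:

   dmesg | tail -n 30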

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

* Re: strange status raid 5
  2014-03-31 14:28 ` Mikael Abrahamsson
@ 2014-03-31 14:35   ` bobzer
  0 siblings, 0 replies; 5+ messages in thread
From: bobzer @ 2014-03-31 14:35 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: linux-raid

Sorry for not being precise enough. Here are the commands I tried and
their results:
# mdadm --re-add /dev/md0 /dev/sdb1
mdadm: --re-add for /dev/sdb1 to /dev/md0 is not possible

# mdadm --add /dev/md0 /dev/sdb1
mdadm: /dev/sdb1 reports being an active member for /dev/md0, but a
--re-add fails.
mdadm: not performing --add as that would convert /dev/sdb1 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdb1" first.

Also, I checked my disk and it seems OK:
#smartctl -l selftest /dev/sdb1
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     14046         -
# 2  Short offline       Completed without error       00%     14041         -


Thanks for helping me.


* Re: strange status raid 5
  2014-03-31 14:08 strange status raid 5 bobzer
  2014-03-31 14:28 ` Mikael Abrahamsson
@ 2014-04-01 22:41 ` NeilBrown
  2014-04-01 23:46   ` bobzer
  1 sibling, 1 reply; 5+ messages in thread
From: NeilBrown @ 2014-04-01 22:41 UTC (permalink / raw)
  To: bobzer; +Cc: linux-raid

On Mon, 31 Mar 2014 10:08:30 -0400 bobzer <bobzer@gmail.com> wrote:

> Hi,
> 
> My RAID 5 is in a strange state. /proc/mdstat tells me it is degraded,
> but when I examine the disk that is no longer in the array, it tells me
> everything is fine... I'm lost.
> 
> #cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid5 sdc1[3] sdd1[1]
>      3907021568 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/2] [UU_]
> 
> unused devices: <none>
> 
> 
> So my third disk is unused. I checked with:
> 
> #mdadm -D /dev/md0
> /dev/md0:
>        Version : 1.2
>  Creation Time : Sun Mar  4 22:49:14 2012
>     Raid Level : raid5
>     Array Size : 3907021568 (3726.03 GiB 4000.79 GB)
>  Used Dev Size : 1953510784 (1863.01 GiB 2000.40 GB)
>   Raid Devices : 3
>  Total Devices : 2
>    Persistence : Superblock is persistent
> 
>    Update Time : Sun Mar 30 23:01:35 2014
>          State : clean, degraded
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
>  Spare Devices : 0
> 
>         Layout : left-symmetric
>     Chunk Size : 128K
>           Name : debian:0
>           UUID : bf3c605b:9699aa55:d45119a2:7ba58d56
>         Events : 255801
> 
>    Number   Major   Minor   RaidDevice State
>       3       8       33        0      active sync   /dev/sdc1
>       1       8       49        1      active sync   /dev/sdd1
>       2       0        0        2      removed
> 
> 
> After looking at what the disk itself says, I got confused:
> 
> 
> #mdadm --examine /dev/sdb1
> /dev/sdb1:
>          Magic : a92b4efc
>        Version : 1.2
>    Feature Map : 0x0
>     Array UUID : bf3c605b:9699aa55:d45119a2:7ba58d56
>           Name : debian:0
>  Creation Time : Sun Mar  4 22:49:14 2012
>     Raid Level : raid5
>   Raid Devices : 3
> 
> Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB)
>     Array Size : 7814043136 (3726.03 GiB 4000.79 GB)
>  Used Dev Size : 3907021568 (1863.01 GiB 2000.40 GB)
>    Data Offset : 2048 sectors
>   Super Offset : 8 sectors
>          State : clean
>    Device UUID : f9059dfb:74af1ab7:bc1465b1:e2ff30ba
> 
>    Update Time : Sun Jan  5 04:11:41 2014
>  Bad Block Log : 512 entries available at offset 2032 sectors
>       Checksum : 7df1fefc - correct
>         Events : 436
>         Layout : left-symmetric
>     Chunk Size : 128K
> 
>   Device Role : Active device 2
>   Array State : AAA ('A' == active, '.' == missing)

sdb1 thinks it is OK, but that is normal.  When a device fails, the fact
that it has failed isn't recorded on that device, only on the other
devices.
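
You can see it in the Events counters you posted: sdb1 is still at 436
while the array is at 255801, so md knows sdb1 is long out of date. To
compare all members at once, something like this works:

   for d in /dev/sd[bcd]1; do echo "$d:"; mdadm --examine "$d" | grep Events; done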

> 
> 
> So I tried to re-add the disk, which didn't work. Then I tried to stop
> the array and assemble it again, but that didn't work either:
> 
> #mdadm --stop /dev/md0
> #mdadm --assemble --force /dev/md0 /dev/sd[bcd]1
> mdadm: /dev/md0 has been started with 2 drives (out of 3).
> 
> 
> Can you help me, guys?

If
   mdadm /dev/md0 --add /dev/sdb1
doesn't work, then run
   mdadm --zero-super /dev/sdb1
first.
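
So the full fallback sequence would be something like this (double-check
the device name before zeroing; --zero-superblock erases the old md
metadata on that partition):

   mdadm --zero-superblock /dev/sdb1
   mdadm /dev/md0 --add /dev/sdb1
   cat /proc/mdstat    # should now show the rebuild in progress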

> 
> 
> Thanks in advance.
> 
> PS: I've got mdadm 3.3-devel; I would update it but don't know how to ...

 git clone git://neil.brown.name/mdadm
 cd mdadm
 make
 make install
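
and then check which version you ended up with:

 mdadm --version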


NeilBrown

* Re: strange status raid 5
  2014-04-01 22:41 ` NeilBrown
@ 2014-04-01 23:46   ` bobzer
  0 siblings, 0 replies; 5+ messages in thread
From: bobzer @ 2014-04-01 23:46 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Thank you very much, it's recovering right now :-)

I'll update mdadm afterwards.
