* Problems with a RAID5 array
@ 2016-10-25  7:45 Nicolas Nicolaou
  2016-10-25 12:48 ` Wols Lists
  2016-10-25 17:54 ` Andreas Klauer
  0 siblings, 2 replies; 3+ messages in thread
From: Nicolas Nicolaou @ 2016-10-25  7:45 UTC (permalink / raw)
  To: linux-raid

Hi all,

I am a newbie in the RAID field, but I have encountered some problems
with my RAID5 configuration on a QNAP NAS machine.

In particular, I added a 3TB drive and the array seemed to rebuild
automatically. Originally I had 3 3TB drives in it.
The rebuild finished and I was able to access my data. For some weird reason
one of the drives was not added, so I tried to expand the RAID capacity.
The expand failed, but still no problems...

When I rebooted the system, however, the RAID became inactive and
now I cannot access any of the data.
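
For reference, I have been checking the state with something like the
following (typed from memory; the md device name on the QNAP may differ):

  cat /proc/mdstat
  mdadm --detail /dev/md0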

Below you can see the mdadm --examine information for the 4 drives.
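
I collected that output with roughly the following command (typed from
memory, so the exact invocation may differ slightly):

  mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3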

I saw a thread suggesting that recreating the RAID may solve the issue
(https://raid.wiki.kernel.org/index.php/RAID_Recovery).
Before going down that path, though, I wanted to get your take.

Thanks,
Nicolas

/dev/sda3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 11d32674:4247f385:74ee352b:5e4c22c7
Name : 0
Creation Time : Wed Jan 9 02:29:02 2013
Raid Level : raid5
Raid Devices : 4

Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 17572185216 (8379.07 GiB 8996.96 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : clean
Device UUID : 650aa6e2:c725d7f0:b6c8a5fb:8f0ed37f

Update Time : Thu Oct 20 08:33:37 2016
Checksum : bb791848 - correct
Events : 175176

Layout : left-symmetric
Chunk Size : 64K

Array Slot : 2 (0, failed, 2, failed, 3, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed)
Array State : u_Uu 381 failed
/dev/sdb3:
Magic : a92b4efc
Version : 00.90.00
UUID : 3fe6d5d4:5b9d61f2:4f7ddb81:e4ae2138
Creation Time : Thu Jun 7 19:50:56 2012
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Sun Oct 23 21:05:33 2016
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 968897f8 - correct
Events : 0.12252139

Layout : left-symmetric
Chunk Size : 64K

     Number   Major   Minor   RaidDevice   State
this     2       8      19         2       active sync   /dev/sdb3

   0     0       8       3         0       active sync   /dev/sda3
   1     1       8      35         1       active sync   /dev/sdc3
   2     2       8      19         2       active sync   /dev/sdb3
   3     3       8      51         3       active sync   /dev/sdd3
/dev/sdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : 3fe6d5d4:5b9d61f2:4f7ddb81:e4ae2138
Creation Time : Thu Jun 7 19:50:56 2012
Raid Level : raid5
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0

Update Time : Sun Oct 23 21:05:33 2016
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 96889806 - correct
Events : 0.12252139

Layout : left-symmetric
Chunk Size : 64K

     Number   Major   Minor   RaidDevice   State
this     1       8      35         1       active sync   /dev/sdc3

   0     0       8       3         0       active sync   /dev/sda3
   1     1       8      35         1       active sync   /dev/sdc3
   2     2       8      19         2       active sync   /dev/sdb3
   3     3       8      51         3       active sync   /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 11d32674:4247f385:74ee352b:5e4c22c7
Name : 0
Creation Time : Wed Jan 9 02:29:02 2013
Raid Level : raid5
Raid Devices : 4

Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 17572185216 (8379.07 GiB 8996.96 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : clean
Device UUID : 0b3406a9:15fd802f:e9e3ed19:1c684e54

Update Time : Thu Oct 20 09:45:06 2016
Checksum : b3469ebd - correct
Events : 175176

Layout : left-symmetric
Chunk Size : 64K

Array Slot : 4 (0, failed, 2, failed, 3, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed, failed)
Array State : u_uU 381 failed


* Re: Problems with a RAID5 array
  2016-10-25  7:45 Problems with a RAID5 array Nicolas Nicolaou
@ 2016-10-25 12:48 ` Wols Lists
  2016-10-25 17:54 ` Andreas Klauer
  1 sibling, 0 replies; 3+ messages in thread
From: Wols Lists @ 2016-10-25 12:48 UTC (permalink / raw)
  To: Nicolas Nicolaou, linux-raid

On 25/10/16 08:45, Nicolas Nicolaou wrote:
> Hi all,
> 
> I am a newbie in the RAID field, but I have encountered some problems
> with my RAID5 configuration on a QNAP NAS machine.
>
> In particular, I added a 3TB drive and the array seemed to rebuild
> automatically. Originally I had 3 3TB drives in it.
> The rebuild finished and I was able to access my data. For some weird reason
> one of the drives was not added, so I tried to expand the RAID capacity.
> The expand failed, but still no problems...
>
> When I rebooted the system, however, the RAID became inactive and
> now I cannot access any of the data.
>
> Below you can see the mdadm --examine information for the 4 drives.
>
> I saw a thread suggesting that recreating the RAID may solve the issue
> (https://raid.wiki.kernel.org/index.php/RAID_Recovery).
> Before going down that path, though, I wanted to get your take.
> 
Firstly, using "--force" is not necessarily a bad idea, though you want
to avoid it if you can. Using "--create" is an absolutely crazy idea
unless you are being hand-held by an expert. DO NOT attempt that on your
own unless you really want to lose everything.
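
Purely as an illustration of what a forced assemble looks like (do NOT run
this before the experts have looked at your output; the md device and
partition names below are just guesses for your setup):

  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3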

Secondly, if you *are* going to be mad enough to try "--create", make
sure you've run Phil's lsdrv utility and you have a hard copy of the
output saved somewhere safe!
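
If I remember rightly, lsdrv is just a small script, so grabbing it and
keeping the output is roughly this (the URL is from memory, double-check it):

  git clone https://github.com/pturmel/lsdrv
  sudo ./lsdrv/lsdrv > lsdrv-output.txt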

Go back to the raid wiki, go to the home page, and read section 4, "When
things go wrogn". Read the entire section. It includes the page you've
referenced, but that's an old page that will be deprecated. It's a
moderately safe bet that when the experts chime in, they will want a lot
of the information that section tells you to gather. And hopefully, working
through this will give you a few clues yourself.
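
From memory, the sort of information that section asks for is roughly what
these commands produce (the drive letters are just your current ones):

  mdadm --examine /dev/sd[abcd]3
  cat /proc/mdstat
  smartctl -x /dev/sda    # and likewise for sdb, sdc and sdd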

Cheers,
Wol


* Re: Problems with a RAID5 array
  2016-10-25  7:45 Problems with a RAID5 array Nicolas Nicolaou
  2016-10-25 12:48 ` Wols Lists
@ 2016-10-25 17:54 ` Andreas Klauer
  1 sibling, 0 replies; 3+ messages in thread
From: Andreas Klauer @ 2016-10-25 17:54 UTC (permalink / raw)
  To: Nicolas Nicolaou; +Cc: linux-raid

On Tue, Oct 25, 2016 at 10:45:12AM +0300, Nicolas Nicolaou wrote:
> Below you can see the mdadm --examine information for the 4 drives.

The output doesn't seem to be formatted correctly, so it is hard to read.
Also, the four outputs describe two different RAIDs, each with only 2 of
its 4 disks posted, and with different metadata versions (1.0 vs. 0.90),
creation times...

Basically this is too confusing to say anything about it.

Did you pick the correct devices?

> /dev/sda3:
> Magic : a92b4efc
> Version : 1.0
> Array UUID : 11d32674:4247f385:74ee352b:5e4c22c7
> Creation Time : Wed Jan 9 02:29:02 2013
> Raid Level : raid5
> Raid Devices : 4

> /dev/sdb3:
> Magic : a92b4efc
> Version : 00.90.00
> UUID : 3fe6d5d4:5b9d61f2:4f7ddb81:e4ae2138
> Creation Time : Thu Jun 7 19:50:56 2012
> Raid Level : raid5
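
One quick way to see which partition carries which superblock, assuming the
device names are still the same, is something like:

  mdadm --examine /dev/sd[abcd]3 | grep -E '^/dev|UUID|Version'
  cat /proc/mdstat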

Regards
Andreas Klauer

