linux-raid.vger.kernel.org archive mirror
* Raid 5 where 2 disks out of 4 were unplugged
From: Gennaro Oliva @ 2021-08-26  9:18 UTC
  To: linux-raid

Hello,
I have a QNAP with Linux 3.4.6 and mdadm 3.3. I have 4 drives assembled
in RAID 5; two of those drives were accidentally removed and are now
out of sync. This is partial output of mdadm --examine:

/dev/sda3:
    Update Time : Thu Jul  8 18:01:51 2021
       Checksum : 4bc8157c - correct
         Events : 469678
   Device Role : Active device 0
   Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
    Update Time : Thu Jul  8 18:01:51 2021
       Checksum : 7fac997f - correct
         Events : 469678
   Device Role : Active device 1
   Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
    Update Time : Thu Jul  8 13:15:58 2021
       Checksum : fcd5279f - correct
         Events : 469667
   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
    Update Time : Thu Jul  8 13:15:58 2021
       Checksum : b9bc1e2e - correct
         Events : 469667
   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

The disks are all healthy. I tried to re-assemble the array with
mdadm --verbose --assemble --force
using various combinations of 3 drives, or all four drives, but I am
always told there are not enough drives to start the array.

This is the output when trying to use all the drives:

mdadm --verbose --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3            
mdadm: looking for devices for /dev/md1
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/sda3 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdb3 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdc3 is identified as a member of /dev/md1, slot 2.
mdadm: /dev/sdd3 is identified as a member of /dev/md1, slot 3.
mdadm: added /dev/sdb3 to /dev/md1 as 1
mdadm: added /dev/sdc3 to /dev/md1 as 2 (possibly out of date)
mdadm: added /dev/sdd3 to /dev/md1 as 3 (possibly out of date)
mdadm: added /dev/sda3 to /dev/md1 as 0
mdadm: /dev/md1 assembled from 2 drives - not enough to start the array.

The event counts are really close (a difference of only 11). What is
my next option to recover the array? Do I need to rebuild the
superblock? What options should I use?

Thank you for reading this e-mail.
Best regards,
-- 
Gennaro Oliva


* Re: Raid 5 where 2 disks out of 4 were unplugged
From: Phil Turmel @ 2021-08-26 17:00 UTC
  To: Gennaro Oliva, linux-raid

Hello Gennaro,

Good report.

On 8/26/21 5:18 AM, Gennaro Oliva wrote:
> Hello,
> I have a QNAP with Linux 3.4.6 and mdadm 3.3. I have 4 drives assembled
> in RAID 5; two of those drives were accidentally removed and are now
> out of sync. This is partial output of mdadm --examine:

[trim /]

> mdadm --verbose --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
> mdadm: looking for devices for /dev/md1
> mdadm: failed to get exclusive lock on mapfile - continue anyway...
> mdadm: /dev/sda3 is identified as a member of /dev/md1, slot 0.
> mdadm: /dev/sdb3 is identified as a member of /dev/md1, slot 1.
> mdadm: /dev/sdc3 is identified as a member of /dev/md1, slot 2.
> mdadm: /dev/sdd3 is identified as a member of /dev/md1, slot 3.
> mdadm: added /dev/sdb3 to /dev/md1 as 1
> mdadm: added /dev/sdc3 to /dev/md1 as 2 (possibly out of date)
> mdadm: added /dev/sdd3 to /dev/md1 as 3 (possibly out of date)
> mdadm: added /dev/sda3 to /dev/md1 as 0
> mdadm: /dev/md1 assembled from 2 drives - not enough to start the array.
> 

This should have worked.  I suspect your mdadm is one of the versions 
with buggy forced assembly.

Download and compile the latest mdadm and try this again.  (Newer mdadm 
works on older kernels.)
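
Something along these lines should do it (an untested sketch; that is
the usual upstream repo URL, adjust paths as needed on your box):

  # fetch and build the current mdadm from upstream
  git clone https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
  cd mdadm && make
  # retry the forced assembly with the freshly built binary
  ./mdadm --verbose --assemble --force /dev/md1 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

If the QNAP lacks a toolchain, run the new binary from a recent
rescue/live system with the disks attached instead.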

Regards,

Phil


* Re: Raid 5 where 2 disks out of 4 were unplugged
From: Anthony Youngman @ 2021-08-28 18:02 UTC
  To: Gennaro Oliva, linux-raid

On 26/08/2021 10:18, Gennaro Oliva wrote:
> Hello,
> I have a QNAP with Linux 3.4.6 and mdadm 3.3. I have 4 drives assembled
> in RAID 5; two of those drives were accidentally removed and are now
> out of sync. This is partial output of mdadm --examine:
>
> /dev/sda3:
>      Update Time : Thu Jul  8 18:01:51 2021
>         Checksum : 4bc8157c - correct
>           Events : 469678
>     Device Role : Active device 0
>     Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdb3:
>      Update Time : Thu Jul  8 18:01:51 2021
>         Checksum : 7fac997f - correct
>           Events : 469678
>     Device Role : Active device 1
>     Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdc3:
>      Update Time : Thu Jul  8 13:15:58 2021
>         Checksum : fcd5279f - correct
>           Events : 469667
>     Device Role : Active device 2
>     Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdd3:
>      Update Time : Thu Jul  8 13:15:58 2021
>         Checksum : b9bc1e2e - correct
>           Events : 469667
>     Device Role : Active device 3
>     Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
>
> The disks are all healthy. I tried to re-assemble the array with
> mdadm --verbose --assemble --force
> using various combinations of 3 drives, or all four drives, but I am
> always told there are not enough drives to start the array.
>
> This is the output when trying to use all the drives:
>
> mdadm --verbose --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
> mdadm: looking for devices for /dev/md1
> mdadm: failed to get exclusive lock on mapfile - continue anyway...
> mdadm: /dev/sda3 is identified as a member of /dev/md1, slot 0.
> mdadm: /dev/sdb3 is identified as a member of /dev/md1, slot 1.
> mdadm: /dev/sdc3 is identified as a member of /dev/md1, slot 2.
> mdadm: /dev/sdd3 is identified as a member of /dev/md1, slot 3.
> mdadm: added /dev/sdb3 to /dev/md1 as 1
> mdadm: added /dev/sdc3 to /dev/md1 as 2 (possibly out of date)
> mdadm: added /dev/sdd3 to /dev/md1 as 3 (possibly out of date)
> mdadm: added /dev/sda3 to /dev/md1 as 0
> mdadm: /dev/md1 assembled from 2 drives - not enough to start the array.
>
> The event counts are really close (a difference of only 11). What is
> my next option to recover the array? Do I need to rebuild the
> superblock? What options should I use?
>
Do NOT "rebuild the superblock", whatever you mean by that. What I think 
you need to do is force-assemble the array. You might lose a bit of data; 
the first thing you will need to do after a forced assembly is to check 
the file system ...
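
Do that check read-only first, e.g. something like this (I'm assuming
the data volume is ext4, as is typical on a QNAP):

  # -n: open read-only, report problems, repair nothing
  fsck.ext4 -n /dev/md1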

The low discrepancy in the event counts is a good sign; you won't lose
much.

What I would suggest is that you read up on the Linux RAID wiki, use 
overlays to test and make sure you won't lose anything, and then do the 
forced assembly for real (a sketch of the overlay trick follows the 
link below).

https://raid.wiki.kernel.org/index.php/Linux_Raid
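
A minimal sketch of that overlay setup (untested here, and the member
names are from your mail; the wiki has the full recipe):

  # one sparse overlay file + loop device + dm snapshot per member,
  # so any writes from the trial assembly land in the overlays
  for d in sda3 sdb3 sdc3 sdd3; do
      truncate -s 4G overlay-$d
      loop=$(losetup -f --show overlay-$d)
      size=$(blockdev --getsz /dev/$d)
      dmsetup create $d-ovl --table "0 $size snapshot /dev/$d $loop P 8"
  done
  # assemble from the snapshots instead of the real partitions
  mdadm --verbose --assemble --force /dev/md1 /dev/mapper/*-ovl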

Cheers,

Wol



* Re: Raid 5 where 2 disks out of 4 were unplugged
From: Gennaro Oliva @ 2021-09-15 14:11 UTC
  To: Anthony Youngman; +Cc: linux-raid

Hi Anthony and Phil,

On Sat, Aug 28, 2021 at 07:02:03PM +0100, Anthony Youngman wrote:
> Do NOT "rebuild the superblock", whatever you mean by that. What I think you
> need to do is force-assemble the array. You might lose a bit of data; the
> first thing you will need to do after a forced assembly is to check the file
> system ...

I attached the disks to a recent Linux system and was able to
force-assemble the RAID again. Thank you for your valuable help.
Best regards,
-- 
Gennaro Oliva
