* RAID 1 to RAID 5 failure
@ 2022-04-04 15:19 Jorge Nunes
  2022-04-04 16:42 ` Wols Lists
  2022-04-05  0:29 ` Roy Sigurd Karlsbakk
  0 siblings, 2 replies; 11+ messages in thread
From: Jorge Nunes @ 2022-04-04 15:19 UTC (permalink / raw)
  To: linux-raid

Hi everyone.
This probably isn't the right forum for this, but I haven't been able to get
truly useful help on it anywhere else:

I have a NAS running Armbian (Debian Bullseye) that can take a four-disk
RAID. For a long time I used it with only two disks, sda and sdd, in RAID 1 -
they are WD30EFRX drives. I recently bought two more (refurbished) WD30EFRX
drives, and my idea was to add them and end up with a RAID 5 array.
These are the steps I took:

Didn't do a backup :-(

Unmounted everything:
```
$ sudo umount /srv/dev-disk-by-uuid-d1430a9e-6461-481b-9765-86e18e517cfc

$ sudo umount -f /dev/md0
```
Stopped the array:
```
$ sudo mdadm --stop /dev/md0
```

Changed the array to RAID 5 with only the existing disks:
```
$ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd
```
I made a mistake here and used the whole disks instead of the
/dev/sd[ad]1 partitions. mdadm warned me that /dev/sdd had a
partition and that it would be overwritten... I pressed 'y' to continue...
:-(
It took a long time to complete, without any errors.
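(In hindsight, I should have checked what was actually on each device before
answering 'y'. Something like this - purely illustrative - would have shown
the existing partitions and the old RAID superblocks:)
```
$ lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda /dev/sdd
$ sudo mdadm --examine /dev/sda1 /dev/sdd1
```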

Then I added the two new disks /dev/sdb and /dev/sdc to the array:
```
$ sudo mdadm --add /dev/md0 /dev/sdb
$ sudo mdadm --add /dev/md0 /dev/sdc
```
Then I grew the array to use all four disks:
```
$ sudo mdadm --grow /dev/md0 --raid-disk=4
```
During this process a reshape was performed, which looked like this:
```
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[4] sdb[3] sdd[2] sda[0]
      2930134016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [==================>..]  reshape = 90.1% (2640502272/2930134016) finish=64.3min speed=75044K/sec
      bitmap: 0/22 pages [0KB], 65536KB chunk
```
```
$ sudo mdadm -D /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Fri Mar 11 16:10:02 2022
        Raid Level : raid5
        Array Size : 2930134016 (2794.39 GiB 3000.46 GB)
     Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Mar 12 20:20:14 2022
             State : clean, reshaping
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Reshape Status : 97% complete
     Delta Devices : 2, (2->4)

              Name : helios4:0  (local to host helios4)
              UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
            Events : 12037

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       2       8       48        1      active sync   /dev/sdd
       4       8       32        2      active sync   /dev/sdc
       3       8       16        3      active sync   /dev/sdb
```

When this looooooong process completed without errors, I ran e2fsck:
```
$ sudo e2fsck /dev/md0
```
And... it gave this info:
```
e2fsck 1.46.2 (28-Feb-2021)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
or
    e2fsck -b 32768 <device>
```
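(For reference, the backup superblock locations e2fsck is referring to can be
listed with a dry-run mke2fs - the -n flag only prints what would be done and
writes nothing; the 4096-byte block size is an assumption on my part:)
```
$ sudo mke2fs -n -b 4096 /dev/md0
```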
At this point I realized that I had made some mistakes during this process...
I googled the problem, and judging from this post I think the order of the
disks in the array is somehow 'reversed':
https://forum.qnap.com/viewtopic.php?t=125534

So the partition is 'gone', and when I try to assemble the array now,
I get this:
```
$ sudo mdadm --assemble --scan -v

mdadm: /dev/sdd is identified as a member of /dev/md/0, slot 1.
mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 3.
mdadm: /dev/sdc is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sda is identified as a member of /dev/md/0, slot 0.
mdadm: added /dev/sdd to /dev/md/0 as 1
mdadm: added /dev/sdc to /dev/md/0 as 2
mdadm: added /dev/sdb to /dev/md/0 as 3
mdadm: added /dev/sda to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 4 drives.

$ dmesg

[143605.261894] md/raid:md0: device sda operational as raid disk 0
[143605.261909] md/raid:md0: device sdb operational as raid disk 3
[143605.261919] md/raid:md0: device sdc operational as raid disk 2
[143605.261927] md/raid:md0: device sdd operational as raid disk 1
[143605.267400] md/raid:md0: raid level 5 active with 4 out of 4 devices, algorithm 2
[143605.792653] md0: detected capacity change from 0 to 17580804096

$ cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid5 sda[0] sdb[3] sdc[4] sdd[2]
      8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk


$ sudo mdadm -D /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Fri Mar 11 16:10:02 2022
        Raid Level : raid5
        Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
     Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Mar 12 21:24:59 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : helios4:0  (local to host helios4)
              UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
            Events : 12124

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       2       8       48        1      active sync   /dev/sdd
       4       8       32        2      active sync   /dev/sdc
       3       8       16        3      active sync   /dev/sdb
```

The array assembles and starts, but there is no filesystem superblock.
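(This is easy to confirm without writing anything - presumably neither of
these probes finds an ext4 signature on the md device any more; shown just as
an illustration:)
```
$ sudo blkid -p /dev/md0
$ sudo wipefs /dev/md0
```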

At this stage I ran photorec to try to recover my valuable data
(mainly family photos):
```
$ sudo photorec /log /d ~/k/RAID_REC/ /dev/md0
```
I recovered a lot of them, but others are corrupted: during the photorec
recovery process (sector by sector), the sector counter increases as time
passes but is then 'reset' to a lower value (my suspicion is that the disks
are scrambled in the array), and photorec recovers some files again (some
are identical).

So, my question is: is there a chance to redo the array correctly
without losing the information inside? Is it possible to recover the
'lost' partition that existed on the RAID 1, so that I can make a proper
backup? Or is the only option to get the correct disk order inside the
array, so that photorec can recover the files correctly?

I appreciate your help.
Thanks!

Best,

Jorge

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-04 15:19 RAID 1 to RAID 5 failure Jorge Nunes
@ 2022-04-04 16:42 ` Wols Lists
  2022-04-04 17:17   ` Jorge Nunes
  2022-04-05  0:29 ` Roy Sigurd Karlsbakk
  1 sibling, 1 reply; 11+ messages in thread
From: Wols Lists @ 2022-04-04 16:42 UTC (permalink / raw)
  To: Jorge Nunes, linux-raid; +Cc: Phil Turmel, NeilBrown

On 04/04/2022 16:19, Jorge Nunes wrote:
> Hi everyone.
> Probably this isn't the forum to post this, but I can't get true
> valuable help on this:

This is exactly the correct forum ...
> 
> I have a NAS which is capable of having a RAID with four disks with
> armbian debian bullseye. I used it for a long time with only two, sda
> and sdd on RAID 1 - they are WD30EFRX. Now, I bought two more WD30EFRX
> (refurbished) and my idea was to add them to have a RAID 5 array.
> These were the steps I've made:
> 
> Didn't do a backup :-(
> 
> Unmount everything:
> ```
> $ sudo umount /srv/dev-disk-by-uuid-d1430a9e-6461-481b-9765-86e18e517cfc
> 
> $ sudo umount -f /dev/md0
> ```
> Stopped the array:
> ```
> $ sudo mdadm --stop /dev/md0
> ```
> 
> Change the array to a RAID 5 with only the existing disks:
> ```
> $ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd
> ```
> I made a mistake here and used the whole disks instead of the
> /dev/sd[ad]1 partitions and MDADM warned me that /dev/sdd had a
> partition and it would be overridden... I pressed 'Y' to continue...
> :-(
> It took a long time to complete without any errors.

You made an even bigger error here - and I'm very sorry but it was 
probably fatal :-(

If sda and sdd were your original disks, you created a NEW array, with 
different geometry. This will probably have trashed your data.
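(If you want to see the geometry the new array actually wrote onto the
members, something like this will show it - the grep pattern is just
illustrative:)
```
$ sudo mdadm --examine /dev/sda | grep -E 'Raid Level|Data Offset|Chunk Size|Array Size'
$ sudo mdadm --examine /dev/sdd | grep -E 'Raid Level|Data Offset|Chunk Size|Array Size'
```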
> 
> Then I added the two new disks /dev/sdb and /dev/sdc to the array:
> ```
> $ sudo mdadm --add /dev/md0 /dev/sdb
> $ sudo mdadm --add /dev/md0 /dev/sdc
> ```
> And did a grow to use the four disks:
> ```
> $ sudo mdadm --grow /dev/md0 --raid-disk=4
> ```
And if the first mistake wasn't fatal, this probably was.

> During this process a reshape was performed like this
> ```
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
> [raid4] [raid10]
> md0 : active raid5 sdc[4] sdb[3] sdd[2] sda[0]
>        2930134016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
>        [==================>..]  reshape = 90.1% (2640502272/2930134016)
> finish=64.3min speed=75044K/sec
>        bitmap: 0/22 pages [0KB], 65536KB chunk
> ```
> ```
> $ sudo mdadm -D /dev/md0
> 
> /dev/md0:
>             Version : 1.2
>       Creation Time : Fri Mar 11 16:10:02 2022
>          Raid Level : raid5
>          Array Size : 2930134016 (2794.39 GiB 3000.46 GB)
>       Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
>        Raid Devices : 4
>       Total Devices : 4
>         Persistence : Superblock is persistent
> 
>       Intent Bitmap : Internal
> 
>         Update Time : Sat Mar 12 20:20:14 2022
>               State : clean, reshaping
>      Active Devices : 4
>     Working Devices : 4
>      Failed Devices : 0
>       Spare Devices : 0
> 
>              Layout : left-symmetric
>          Chunk Size : 512K
> 
> Consistency Policy : bitmap
> 
>      Reshape Status : 97% complete
>       Delta Devices : 2, (2->4)
> 
>                Name : helios4:0  (local to host helios4)
>                UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
>              Events : 12037
> 
>      Number   Major   Minor   RaidDevice State
>         0       8        0        0      active sync   /dev/sda
>         2       8       48        1      active sync   /dev/sdd
>         4       8       32        2      active sync   /dev/sdc
>         3       8       16        3      active sync   /dev/sdb
> ```
> 
> When this looooooong process has completed without errors, I did a e2fsck
> ```
> $ sudo e2fsck /dev/md0
> ```
> And... it gave this info:
> ```
> e2fsck 1.46.2 (28-Feb-2021)
> ext2fs_open2: Bad magic number in super-block
> e2fsck: Superblock invalid, trying backup blocks...
> e2fsck: Bad magic number in super-block while trying to open /dev/md0
> 
> The superblock could not be read or does not describe a valid ext2/ext3/ext4
> filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
> filesystem (and not swap or ufs or something else), then the superblock
> is corrupt, and you might try running e2fsck with an alternate superblock:
>      e2fsck -b 8193 <device>
> or
>      e2fsck -b 32768 <device>
> ```
> At this point I realized that I've made some mistakes during this process...
> Googled for the problem and I think the disks in the array are somehow
> order 'reversed' judging from this post:
> https://forum.qnap.com/viewtopic.php?t=125534
> 
> So, the partition is 'gone' and when I try to assemble the array now,
> I have this info:
> ```
> $ sudo mdadm --assemble --scan -v
> 
> mdadm: /dev/sdd is identified as a member of /dev/md/0, slot 1.
> mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 3.
> mdadm: /dev/sdc is identified as a member of /dev/md/0, slot 2.
> mdadm: /dev/sda is identified as a member of /dev/md/0, slot 0.
> mdadm: added /dev/sdd to /dev/md/0 as 1
> mdadm: added /dev/sdc to /dev/md/0 as 2
> mdadm: added /dev/sdb to /dev/md/0 as 3
> mdadm: added /dev/sda to /dev/md/0 as 0
> mdadm: /dev/md/0 has been started with 4 drives.
> 
> $ dmesg
> 
> [143605.261894] md/raid:md0: device sda operational as raid disk 0
> [143605.261909] md/raid:md0: device sdb operational as raid disk 3
> [143605.261919] md/raid:md0: device sdc operational as raid disk 2
> [143605.261927] md/raid:md0: device sdd operational as raid disk 1
> [143605.267400] md/raid:md0: raid level 5 active with 4 out of 4
> devices, algorithm 2
> [143605.792653] md0: detected capacity change from 0 to 17580804096
> 
> $ cat /proc/mdstat
> 
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
> [raid4] [raid10]
> md0 : active (auto-read-only) raid5 sda[0] sdb[3] sdc[4] sdd[2]
>        8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
>        bitmap: 0/22 pages [0KB], 65536KB chunk
> 
> 
> $ sudo mdadm -D /dev/md0
> 
> /dev/md0:
>             Version : 1.2
>       Creation Time : Fri Mar 11 16:10:02 2022
>          Raid Level : raid5
>          Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
>       Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
>        Raid Devices : 4
>       Total Devices : 4
>         Persistence : Superblock is persistent
> 
>       Intent Bitmap : Internal
> 
>         Update Time : Sat Mar 12 21:24:59 2022
>               State : clean
>      Active Devices : 4
>     Working Devices : 4
>      Failed Devices : 0
>       Spare Devices : 0
> 
>              Layout : left-symmetric
>          Chunk Size : 512K
> 
> Consistency Policy : bitmap
> 
>                Name : helios4:0  (local to host helios4)
>                UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
>              Events : 12124
> 
>      Number   Major   Minor   RaidDevice State
>         0       8        0        0      active sync   /dev/sda
>         2       8       48        1      active sync   /dev/sdd
>         4       8       32        2      active sync   /dev/sdc
>         3       8       16        3      active sync   /dev/sdb
> ```
> 
> The array mounts but there is no superblock.
> 
> At this stage, I did a photorec to try to recover my valuable data
> (mainly family photos):

This I am afraid is probably your best bet.
> ```
> $ sudo photorec /log /d ~/k/RAID_REC/ /dev/md0
> ```
> I just recovered a lot of them but others are corrupted because on the
> photorec recovering process (sector by sector) it increments the
> sector count as time passes but then the counter is 'reset' to a lower
> value (my suspicion that the disks are scrambled in the array) and it
> recovers some files again (some are equal).

No, they're not scrambled. The raid spreads blocks across the individual
disks. You're running photorec over the md device. Try running it over the
individual disks, sda, sdb, sdc, sdd. You might get a different set of
pictures back.
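Something along these lines, one run and one output directory per disk (the
directory names are just examples, modelled on your earlier photorec
invocation):
```
$ sudo photorec /log /d ~/k/RAID_REC_sda/ /dev/sda
$ sudo photorec /log /d ~/k/RAID_REC_sdb/ /dev/sdb
$ sudo photorec /log /d ~/k/RAID_REC_sdc/ /dev/sdc
$ sudo photorec /log /d ~/k/RAID_REC_sdd/ /dev/sdd
```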

> 
> So, my question is: Is there a chance to redo the array correctly
> without losing the information inside? Is it possible to recover the
> 'lost' partition that existed on RAID 1 to be able to do a convenient
> backup? Or the only chance is to have a correct disk alignment inside
> the array to be able to use photorec to recover the files correctly?
> 
> I appreciate your help.
> Thanks!
> 
I've cc'd the guys most likely to be able to help, but I think they'll 
give you the same answer I have, sorry.

Your only hope is probably to convert it back to the original two-disk 
raid 5; then it is *likely* that your original mirror data will be in place. 
If you then recreate the original partition, I'm *hoping* this will give 
you your original mirror back in a broken state, from which you might be 
able to recover.

But I seriously suggest DON'T DO ANYTHING that writes to the disk until 
the experts chime in. You've trashed your raid, don't make it any worse.

Wol

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-04 16:42 ` Wols Lists
@ 2022-04-04 17:17   ` Jorge Nunes
  0 siblings, 0 replies; 11+ messages in thread
From: Jorge Nunes @ 2022-04-04 17:17 UTC (permalink / raw)
  To: Wols Lists; +Cc: linux-raid, Phil Turmel, NeilBrown

Hi Wol,

Thank you for your answer. I'll wait as you suggested.

Best,
Jorge

Wols Lists <antlists@youngman.org.uk> wrote on Monday,
4/04/2022 at 17:42:
>
> On 04/04/2022 16:19, Jorge Nunes wrote:
> > Hi everyone.
> > Probably this isn't the forum to post this, but I can't get true
> > valuable help on this:
>
> This is exactly the correct forum ...
> >
> > I have a NAS which is capable of having a RAID with four disks with
> > armbian debian bullseye. I used it for a long time with only two, sda
> > and sdd on RAID 1 - they are WD30EFRX. Now, I bought two more WD30EFRX
> > (refurbished) and my idea was to add them to have a RAID 5 array.
> > These were the steps I've made:
> >
> > Didn't do a backup :-(
> >
> > Unmount everything:
> > ```
> > $ sudo umount /srv/dev-disk-by-uuid-d1430a9e-6461-481b-9765-86e18e517cfc
> >
> > $ sudo umount -f /dev/md0
> > ```
> > Stopped the array:
> > ```
> > $ sudo mdadm --stop /dev/md0
> > ```
> >
> > Change the array to a RAID 5 with only the existing disks:
> > ```
> > $ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd
> > ```
> > I made a mistake here and used the whole disks instead of the
> > /dev/sd[ad]1 partitions and MDADM warned me that /dev/sdd had a
> > partition and it would be overridden... I pressed 'Y' to continue...
> > :-(
> > It took a long time to complete without any errors.
>
> You made an even bigger error here - and I'm very sorry but it was
> probably fatal :-(
>
> If sda and sdd were your original disks, you created a NEW array, with
> different geometry. This will probably have trashed your data.
> >
> > Then I added the two new disks /dev/sdb and /dev/sdc to the array:
> > ```
> > $ sudo mdadm --add /dev/md0 /dev/sdb
> > $ sudo mdadm --add /dev/md0 /dev/sdc
> > ```
> > And did a grow to use the four disks:
> > ```
> > $ sudo mdadm --grow /dev/md0 --raid-disk=4
> > ```
> And if the first mistake wasn't fatal, this probably was.
>
> > During this process a reshape was performed like this
> > ```
> > Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
> > [raid4] [raid10]
> > md0 : active raid5 sdc[4] sdb[3] sdd[2] sda[0]
> >        2930134016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
> >        [==================>..]  reshape = 90.1% (2640502272/2930134016)
> > finish=64.3min speed=75044K/sec
> >        bitmap: 0/22 pages [0KB], 65536KB chunk
> > ```
> > ```
> > $ sudo mdadm -D /dev/md0
> >
> > /dev/md0:
> >             Version : 1.2
> >       Creation Time : Fri Mar 11 16:10:02 2022
> >          Raid Level : raid5
> >          Array Size : 2930134016 (2794.39 GiB 3000.46 GB)
> >       Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
> >        Raid Devices : 4
> >       Total Devices : 4
> >         Persistence : Superblock is persistent
> >
> >       Intent Bitmap : Internal
> >
> >         Update Time : Sat Mar 12 20:20:14 2022
> >               State : clean, reshaping
> >      Active Devices : 4
> >     Working Devices : 4
> >      Failed Devices : 0
> >       Spare Devices : 0
> >
> >              Layout : left-symmetric
> >          Chunk Size : 512K
> >
> > Consistency Policy : bitmap
> >
> >      Reshape Status : 97% complete
> >       Delta Devices : 2, (2->4)
> >
> >                Name : helios4:0  (local to host helios4)
> >                UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
> >              Events : 12037
> >
> >      Number   Major   Minor   RaidDevice State
> >         0       8        0        0      active sync   /dev/sda
> >         2       8       48        1      active sync   /dev/sdd
> >         4       8       32        2      active sync   /dev/sdc
> >         3       8       16        3      active sync   /dev/sdb
> > ```
> >
> > When this looooooong process has completed without errors, I did a e2fsck
> > ```
> > $ sudo e2fsck /dev/md0
> > ```
> > And... it gave this info:
> > ```
> > e2fsck 1.46.2 (28-Feb-2021)
> > ext2fs_open2: Bad magic number in super-block
> > e2fsck: Superblock invalid, trying backup blocks...
> > e2fsck: Bad magic number in super-block while trying to open /dev/md0
> >
> > The superblock could not be read or does not describe a valid ext2/ext3/ext4
> > filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
> > filesystem (and not swap or ufs or something else), then the superblock
> > is corrupt, and you might try running e2fsck with an alternate superblock:
> >      e2fsck -b 8193 <device>
> > or
> >      e2fsck -b 32768 <device>
> > ```
> > At this point I realized that I've made some mistakes during this process...
> > Googled for the problem and I think the disks in the array are somehow
> > order 'reversed' judging from this post:
> > https://forum.qnap.com/viewtopic.php?t=125534
> >
> > So, the partition is 'gone' and when I try to assemble the array now,
> > I have this info:
> > ```
> > $ sudo mdadm --assemble --scan -v
> >
> > mdadm: /dev/sdd is identified as a member of /dev/md/0, slot 1.
> > mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 3.
> > mdadm: /dev/sdc is identified as a member of /dev/md/0, slot 2.
> > mdadm: /dev/sda is identified as a member of /dev/md/0, slot 0.
> > mdadm: added /dev/sdd to /dev/md/0 as 1
> > mdadm: added /dev/sdc to /dev/md/0 as 2
> > mdadm: added /dev/sdb to /dev/md/0 as 3
> > mdadm: added /dev/sda to /dev/md/0 as 0
> > mdadm: /dev/md/0 has been started with 4 drives.
> >
> > $ dmesg
> >
> > [143605.261894] md/raid:md0: device sda operational as raid disk 0
> > [143605.261909] md/raid:md0: device sdb operational as raid disk 3
> > [143605.261919] md/raid:md0: device sdc operational as raid disk 2
> > [143605.261927] md/raid:md0: device sdd operational as raid disk 1
> > [143605.267400] md/raid:md0: raid level 5 active with 4 out of 4
> > devices, algorithm 2
> > [143605.792653] md0: detected capacity change from 0 to 17580804096
> >
> > $ cat /proc/mdstat
> >
> > Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
> > [raid4] [raid10]
> > md0 : active (auto-read-only) raid5 sda[0] sdb[3] sdc[4] sdd[2]
> >        8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
> >        bitmap: 0/22 pages [0KB], 65536KB chunk
> >
> >
> > $ sudo mdadm -D /dev/md0
> >
> > /dev/md0:
> >             Version : 1.2
> >       Creation Time : Fri Mar 11 16:10:02 2022
> >          Raid Level : raid5
> >          Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
> >       Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
> >        Raid Devices : 4
> >       Total Devices : 4
> >         Persistence : Superblock is persistent
> >
> >       Intent Bitmap : Internal
> >
> >         Update Time : Sat Mar 12 21:24:59 2022
> >               State : clean
> >      Active Devices : 4
> >     Working Devices : 4
> >      Failed Devices : 0
> >       Spare Devices : 0
> >
> >              Layout : left-symmetric
> >          Chunk Size : 512K
> >
> > Consistency Policy : bitmap
> >
> >                Name : helios4:0  (local to host helios4)
> >                UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
> >              Events : 12124
> >
> >      Number   Major   Minor   RaidDevice State
> >         0       8        0        0      active sync   /dev/sda
> >         2       8       48        1      active sync   /dev/sdd
> >         4       8       32        2      active sync   /dev/sdc
> >         3       8       16        3      active sync   /dev/sdb
> > ```
> >
> > The array mounts but there is no superblock.
> >
> > At this stage, I did a photorec to try to recover my valuable data
> > (mainly family photos):
>
> This I am afraid is probably your best bet.
> > ```
> > $ sudo photorec /log /d ~/k/RAID_REC/ /dev/md0
> > ```
> > I just recovered a lot of them but others are corrupted because on the
> > photorec recovering process (sector by sector) it increments the
> > sector count as time passes but then the counter is 'reset' to a lower
> > value (my suspicion that the disks are scrambled in the array) and it
> > recovers some files again (some are equal).
>
> No they're not scrambled. The raid spreads blocks across the individual
> disks. You're running photorec over the md. Try running it over the
> individual disks, sda,sdb,sdc,sdd. You might get a different set of
> pictures back.> Best,
>  >
>  > Jorge
>
> >
> > So, my question is: Is there a chance to redo the array correctly
> > without losing the information inside? Is it possible to recover the
> > 'lost' partition that existed on RAID 1 to be able to do a convenient
> > backup? Or the only chance is to have a correct disk alignment inside
> > the array to be able to use photorec to recover the files correctly?
> >
> > I appreciate your help.
> > Thanks!
> >
> I've cc'd the guys most likely to be able to help, but I think they'll
> give you the same answer I have, sorry.
>
> Your only hope is probably to convert it back to the original two-disk
> raid 5, then it is *likely* that your original mirror will be in place.
> If you then recreate the original partition, I'm *hoping* this will give
> you your original mirror back in a broken state. From which you can
> might be able to recover.
>
> But I seriously suggest DON'T DO ANYTHING that writes to the disk until
> the experts chime in. You've trashed your raid, don't make it any worse.
>
> Wol

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-04 15:19 RAID 1 to RAID 5 failure Jorge Nunes
  2022-04-04 16:42 ` Wols Lists
@ 2022-04-05  0:29 ` Roy Sigurd Karlsbakk
  2022-04-05  9:17   ` Roy Sigurd Karlsbakk
  1 sibling, 1 reply; 11+ messages in thread
From: Roy Sigurd Karlsbakk @ 2022-04-05  0:29 UTC (permalink / raw)
  To: Jorge Nunes; +Cc: Linux Raid

> Didn't do a backup :-(

First mistake… *Always* keep a backup (or three)

> 
> Unmount everything:

No need - what you should have done was just to grow the array:

Partition the new drives exactly like the old ones, then:
mdadm --add /dev/md0 /dev/sd[bc]1 # note that sd[bc] means sdb and sdc, but can be written this way on the command line
mdadm --grow /dev/md0 --level=5 --raid-devices=4

This would have grown and converted the array to raid5 without any data loss.
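(And afterwards - assuming the filesystem was ext4, as your e2fsck attempt
suggests - the filesystem would be grown to fill the new space:)
```
resize2fs /dev/md0
```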

> $ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd

As earlier mentioned, this is to create a new array, not a conversion.

> So, my question is: Is there a chance to redo the array correctly
> without losing the information inside? Is it possible to recover the
> 'lost' partition that existed on RAID 1 to be able to do a convenient
> backup? Or the only chance is to have a correct disk alignment inside
> the array to be able to use photorec to recover the files correctly?

As mentioned, it doesn't look promising, but there are a few things that can be tried.

Your data may still reside on sda1 and sdd1, but since the array was converted to RAID-5, the data will have been distributed across the two drives and is no longer the same on both. Further growing the raid would then have moved the data around to the other disks. I did a small test here on some vdisks to see if this could somehow be reversed and whether I could find the original filesystem. I could - but it was terribly corrupted, so not a single file remained.
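(For anyone who wants to experiment with this kind of mdadm surgery safely, a
loop-device sandbox like the one below is enough. This is only a sketch of the
idea, not my exact test script; file names and sizes are arbitrary, and it
assumes you run it as root with mdadm and e2fsprogs installed:)
```
# two ~1 GiB sparse files stand in for the original RAID 1 members
truncate -s 1G /tmp/d0.img /tmp/d1.img
LOOP0=$(losetup -f --show /tmp/d0.img)
LOOP1=$(losetup -f --show /tmp/d1.img)
# build a small raid1 with a filesystem on it, then replay the
# create/add/grow steps from the original post against the loop devices
mdadm --create /dev/md100 --level=1 --raid-devices=2 "$LOOP0" "$LOOP1"
mkfs.ext4 /dev/md100
```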

If this was valuable data, there might be a way to rescue it, but I fear a lot of it has already been overwritten. Others here (or elsewhere) may know more about how to fix this, though. If you find out how, please tell - it'd be interesting to learn :)

PS: I keep a personal notebook for technical stuff at https://wiki.karlsbakk.net/index.php/Roy's_notes in case you find it interesting. There's quite a bit about storage there. Simply growing a raid is apparently missing, since I thought it was too simple. I'll add it.

So I hope you didn't lose too much valuable data.

Vennlig hilsen / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av idiomer med xenotyp etymologi. I de fleste tilfeller eksisterer adekvate og relevante synonymer på norsk.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-05  0:29 ` Roy Sigurd Karlsbakk
@ 2022-04-05  9:17   ` Roy Sigurd Karlsbakk
  2022-04-05 10:50     ` Jorge Nunes
  0 siblings, 1 reply; 11+ messages in thread
From: Roy Sigurd Karlsbakk @ 2022-04-05  9:17 UTC (permalink / raw)
  To: Jorge Nunes; +Cc: Linux Raid

I re-did these tests this morning, since I was unsure whether I had made some mistake last night - I was tired. The results were about the same - complete data loss.

Out of curiosity, I also tried skipping the expand phase after creating the initial raid5 on top of the raid1. After creating it, I stopped it and recreated the old raid1 with --assume-clean. This worked well - no errors from mount or fsck.
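(That recreate step was essentially the following - the device names are
placeholders for my test vdisks, and this is exactly the kind of command that
should not be run against real disks without expert review:)
```
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sdX /dev/sdY
```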

So I guess it was the mdadm --grow --raid-devices=4 that was the final nail in the coffin.

I just hope you find a way to back up your files next time. I'm quite sure we've all been there - we thought we were smart enough or something, and then the shit hit the fan and no - we weren't.

Vennlig hilsen / Best regards

roy
-- 
Roy Sigurd Karlsbakk
(+47) 98013356
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
Hið góða skaltu í stein höggva, hið illa í snjó rita.

----- Original Message -----
> From: "Roy Sigurd Karlsbakk" <roy@karlsbakk.net>
> To: "Jorge Nunes" <jorgebnunes@gmail.com>
> Cc: "Linux Raid" <linux-raid@vger.kernel.org>
> Sent: Tuesday, 5 April, 2022 02:29:03
> Subject: Re: RAID 1 to RAID 5 failure

>> Didn't do a backup :-(
> 
> First mistake… *Always* keep a backup (or three)
> 
>> 
>> Unmount everything:
> 
> No need - what you should have done, was just to grow the array by
> 
> Partition the new drives exactly like the old ones
> mdadm --add /dev/md0 /dev/sd[cd]1 # note that sd[cd] means sdc and sdd, but can
> be written this way on the commandline
> mdadm --grow --level=5 --raid-devices=4
> 
> This would have grown and converted the array to raid5 without any data loss.
> 
>> $ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd
> 
> As earlier mentioned, this is to create a new array, not a conversion.
> 
>> So, my question is: Is there a chance to redo the array correctly
>> without losing the information inside? Is it possible to recover the
>> 'lost' partition that existed on RAID 1 to be able to do a convenient
>> backup? Or the only chance is to have a correct disk alignment inside
>> the array to be able to use photorec to recover the files correctly?
> 
> As mentioned, it doesn't look promising, but there are a few things that can be
> tried.
> 
> Your data may still reside on the sda1 and sdd1, but since it was converted to
> RAID-5, the data would have been distributed among the two drives and not being
> the same on both. Further growing the raid, would move the data around to the
> other disks. I did a small test here on some vdisks to see if this could be
> reversed somehow and see if I could find the original filesystem. I could - but
> it was terribly corrupted, so not a single file remained.
> 
> If this was valuable data, there might be a way to rescue them, but I fear a lot
> is overwritten already. Others in here (or other places) may know more about
> how to fix this, though. If you find out how, please tell. It'd be interesting
> to learn :)
> 
> PS: I have my personal notebook for technical stuff at
> https://wiki.karlsbakk.net/index.php/Roy's_notes in case you might find that
> interesting. There's quite a bit about storage there. Simply growing a raid is
> apparently forgotten, since I thought that was too simple. I'll add it.
> 
> So hope you didn't lose too much valuable data
> 
> Vennlig hilsen / Best regards
> 
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 98013356
> --
> I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er
> et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av
> idiomer med xenotyp etymologi. I de fleste tilfeller eksisterer adekvate og
> relevante synonymer på norsk.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-05  9:17   ` Roy Sigurd Karlsbakk
@ 2022-04-05 10:50     ` Jorge Nunes
  2022-04-05 11:30       ` Roy Sigurd Karlsbakk
  2022-04-06 19:57       ` Wols Lists
  0 siblings, 2 replies; 11+ messages in thread
From: Jorge Nunes @ 2022-04-05 10:50 UTC (permalink / raw)
  To: Roy Sigurd Karlsbakk; +Cc: Linux Raid

Hi roy.

Thank you for your time.

Now I'm running photorec on /dev/sda and /dev/sdd, and for (some of) the
recovered data I get better results if I do it on top of /dev/md0.
I don't care about recovering the filesystem anymore; I just want to
maximize the quality of the data recovered with photorec.

Best regards,
Jorge

Roy Sigurd Karlsbakk <roy@karlsbakk.net> wrote on Tuesday,
5/04/2022 at 10:17:
>
> I re-did these tests this morning, since I was unsure if I could have made some mistake last night - I was tired. There results were about the same - complete data loss.
>
> As for curiousity, I also tried to skip the expand phase after creating the initial raid5 on top of the raid1. After creating it, I stopped it and recreated the old raid1 with --assume-clean. This worked well - no errors from mount or fsck.
>
> So I guess it was the mdadm --grow --raid-devices=4 that was the final nail in the coffin.
>
> I just hope you find a way to backup your files next time. I'm quite sure we've all been there - thought we were smart enough or something and the shit hit the fan and no - we weren't.
>
> Vennlig hilsen
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 98013356
> http://blogg.karlsbakk.net/
> GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
> --
> Hið góða skaltu í stein höggva, hið illa í snjó rita.
>
> ----- Original Message -----
> > From: "Roy Sigurd Karlsbakk" <roy@karlsbakk.net>
> > To: "Jorge Nunes" <jorgebnunes@gmail.com>
> > Cc: "Linux Raid" <linux-raid@vger.kernel.org>
> > Sent: Tuesday, 5 April, 2022 02:29:03
> > Subject: Re: RAID 1 to RAID 5 failure
>
> >> Didn't do a backup :-(
> >
> > First mistake… *Always* keep a backup (or three)
> >
> >>
> >> Unmount everything:
> >
> > No need - what you should have done, was just to grow the array by
> >
> > Partition the new drives exactly like the old ones
> > mdadm --add /dev/md0 /dev/sd[cd]1 # note that sd[cd] means sdc and sdd, but can
> > be written this way on the commandline
> > mdadm --grow --level=5 --raid-devices=4
> >
> > This would have grown and converted the array to raid5 without any data loss.
> >
> >> $ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd
> >
> > As earlier mentioned, this is to create a new array, not a conversion.
> >
> >> So, my question is: Is there a chance to redo the array correctly
> >> without losing the information inside? Is it possible to recover the
> >> 'lost' partition that existed on RAID 1 to be able to do a convenient
> >> backup? Or the only chance is to have a correct disk alignment inside
> >> the array to be able to use photorec to recover the files correctly?
> >
> > As mentioned, it doesn't look promising, but there are a few things that can be
> > tried.
> >
> > Your data may still reside on the sda1 and sdd1, but since it was converted to
> > RAID-5, the data would have been distributed among the two drives and not being
> > the same on both. Further growing the raid, would move the data around to the
> > other disks. I did a small test here on some vdisks to see if this could be
> > reversed somehow and see if I could find the original filesystem. I could - but
> > it was terribly corrupted, so not a single file remained.
> >
> > If this was valuable data, there might be a way to rescue them, but I fear a lot
> > is overwritten already. Others in here (or other places) may know more about
> > how to fix this, though. If you find out how, please tell. It'd be interesting
> > to learn :)
> >
> > PS: I have my personal notebook for technical stuff at
> > https://wiki.karlsbakk.net/index.php/Roy's_notes in case you might find that
> > interesting. There's quite a bit about storage there. Simply growing a raid is
> > apparently forgotten, since I thought that was too simple. I'll add it.
> >
> > So hope you didn't lose too much valuable data
> >
> > Vennlig hilsen / Best regards
> >
> > roy
> > --
> > Roy Sigurd Karlsbakk
> > (+47) 98013356
> > --
> > I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er
> > et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av
> > idiomer med xenotyp etymologi. I de fleste tilfeller eksisterer adekvate og
> > relevante synonymer på norsk.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-05 10:50     ` Jorge Nunes
@ 2022-04-05 11:30       ` Roy Sigurd Karlsbakk
  2022-04-06  6:57         ` Jorge Nunes
  2022-04-06 19:57       ` Wols Lists
  1 sibling, 1 reply; 11+ messages in thread
From: Roy Sigurd Karlsbakk @ 2022-04-05 11:30 UTC (permalink / raw)
  To: Jorge Nunes; +Cc: Linux Raid

That's probably a good idea. Hope you get most of it out of there.

And find a way to back up when you're done ;)

Vennlig hilsen / Best regards

roy
-- 
Roy Sigurd Karlsbakk
(+47) 98013356
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
Hið góða skaltu í stein höggva, hið illa í snjó rita.

----- Original Message -----
> From: "Jorge Nunes" <jorgebnunes@gmail.com>
> To: "Roy Sigurd Karlsbakk" <roy@karlsbakk.net>
> Cc: "Linux Raid" <linux-raid@vger.kernel.org>
> Sent: Tuesday, 5 April, 2022 12:50:20
> Subject: Re: RAID 1 to RAID 5 failure

> Hi roy.
> 
> Thank you for your time.
> 
> Now, I'm doing a photorec on /dev/sda and /dev/sdd and I get better
> results on (some) of the data recovered if I do it on top of /dev/md0.
> I don't care anymore about recovering the filesystem, I just want to
> maximize the quality of data recovered with photorec.
> 
> Best regards,
> Jorge
> 
> Roy Sigurd Karlsbakk <roy@karlsbakk.net> escreveu no dia terça,
> 5/04/2022 à(s) 10:17:
>>
>> I re-did these tests this morning, since I was unsure if I could have made some
>> mistake last night - I was tired. There results were about the same - complete
>> data loss.
>>
>> As for curiousity, I also tried to skip the expand phase after creating the
>> initial raid5 on top of the raid1. After creating it, I stopped it and
>> recreated the old raid1 with --assume-clean. This worked well - no errors from
>> mount or fsck.
>>
>> So I guess it was the mdadm --grow --raid-devices=4 that was the final nail in
>> the coffin.
>>
>> I just hope you find a way to backup your files next time. I'm quite sure we've
>> all been there - thought we were smart enough or something and the shit hit the
>> fan and no - we weren't.
>>
>> Vennlig hilsen
>>
>> roy
>> --
>> Roy Sigurd Karlsbakk
>> (+47) 98013356
>> http://blogg.karlsbakk.net/
>> GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
>> --
>> Hið góða skaltu í stein höggva, hið illa í snjó rita.
>>
>> ----- Original Message -----
>> > From: "Roy Sigurd Karlsbakk" <roy@karlsbakk.net>
>> > To: "Jorge Nunes" <jorgebnunes@gmail.com>
>> > Cc: "Linux Raid" <linux-raid@vger.kernel.org>
>> > Sent: Tuesday, 5 April, 2022 02:29:03
>> > Subject: Re: RAID 1 to RAID 5 failure
>>
>> >> Didn't do a backup :-(
>> >
>> > First mistake… *Always* keep a backup (or three)
>> >
>> >>
>> >> Unmount everything:
>> >
>> > No need - what you should have done, was just to grow the array by
>> >
>> > Partition the new drives exactly like the old ones
>> > mdadm --add /dev/md0 /dev/sd[cd]1 # note that sd[cd] means sdc and sdd, but can
>> > be written this way on the commandline
>> > mdadm --grow --level=5 --raid-devices=4
>> >
>> > This would have grown and converted the array to raid5 without any data loss.
>> >
>> >> $ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd
>> >
>> > As earlier mentioned, this is to create a new array, not a conversion.
>> >
>> >> So, my question is: Is there a chance to redo the array correctly
>> >> without losing the information inside? Is it possible to recover the
>> >> 'lost' partition that existed on RAID 1 to be able to do a convenient
>> >> backup? Or the only chance is to have a correct disk alignment inside
>> >> the array to be able to use photorec to recover the files correctly?
>> >
>> > As mentioned, it doesn't look promising, but there are a few things that can be
>> > tried.
>> >
>> > Your data may still reside on the sda1 and sdd1, but since it was converted to
>> > RAID-5, the data would have been distributed among the two drives and not being
>> > the same on both. Further growing the raid, would move the data around to the
>> > other disks. I did a small test here on some vdisks to see if this could be
>> > reversed somehow and see if I could find the original filesystem. I could - but
>> > it was terribly corrupted, so not a single file remained.
>> >
>> > If this was valuable data, there might be a way to rescue them, but I fear a lot
>> > is overwritten already. Others in here (or other places) may know more about
>> > how to fix this, though. If you find out how, please tell. It'd be interesting
>> > to learn :)
>> >
>> > PS: I have my personal notebook for technical stuff at
>> > https://wiki.karlsbakk.net/index.php/Roy's_notes in case you might find that
>> > interesting. There's quite a bit about storage there. Simply growing a raid is
>> > apparently forgotten, since I thought that was too simple. I'll add it.
>> >
>> > So hope you didn't lose too much valuable data
>> >
>> > Vennlig hilsen / Best regards
>> >
>> > roy
>> > --
>> > Roy Sigurd Karlsbakk
>> > (+47) 98013356
>> > --
>> > I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er
>> > et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av
>> > idiomer med xenotyp etymologi. I de fleste tilfeller eksisterer adekvate og
> > > relevante synonymer på norsk.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-05 11:30       ` Roy Sigurd Karlsbakk
@ 2022-04-06  6:57         ` Jorge Nunes
  0 siblings, 0 replies; 11+ messages in thread
From: Jorge Nunes @ 2022-04-06  6:57 UTC (permalink / raw)
  To: Roy Sigurd Karlsbakk; +Cc: Linux Raid

I'll never forget to do a backup.

Thanks for your help.
Regards,
Jorge

Roy Sigurd Karlsbakk <roy@karlsbakk.net> wrote on Tuesday,
5/04/2022 at 12:30:
>
> That's probably a good idea. Hope you get most of it out of there.
>
> And find a way to backup when you're done ;)
>
> Vennlig hilsen
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 98013356
> http://blogg.karlsbakk.net/
> GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
> --
> Hið góða skaltu í stein höggva, hið illa í snjó rita.
>
> ----- Original Message -----
> > From: "Jorge Nunes" <jorgebnunes@gmail.com>
> > To: "Roy Sigurd Karlsbakk" <roy@karlsbakk.net>
> > Cc: "Linux Raid" <linux-raid@vger.kernel.org>
> > Sent: Tuesday, 5 April, 2022 12:50:20
> > Subject: Re: RAID 1 to RAID 5 failure
>
> > Hi roy.
> >
> > Thank you for your time.
> >
> > Now, I'm doing a photorec on /dev/sda and /dev/sdd and I get better
> > results on (some) of the data recovered if I do it on top of /dev/md0.
> > I don't care anymore about recovering the filesystem, I just want to
> > maximize the quality of data recovered with photorec.
> >
> > Best regards,
> > Jorge
> >
> > Roy Sigurd Karlsbakk <roy@karlsbakk.net> escreveu no dia terça,
> > 5/04/2022 à(s) 10:17:
> >>
> >> I re-did these tests this morning, since I was unsure if I could have made some
> >> mistake last night - I was tired. There results were about the same - complete
> >> data loss.
> >>
> >> As for curiousity, I also tried to skip the expand phase after creating the
> >> initial raid5 on top of the raid1. After creating it, I stopped it and
> >> recreated the old raid1 with --assume-clean. This worked well - no errors from
> >> mount or fsck.
> >>
> >> So I guess it was the mdadm --grow --raid-devices=4 that was the final nail in
> >> the coffin.
> >>
> >> I just hope you find a way to backup your files next time. I'm quite sure we've
> >> all been there - thought we were smart enough or something and the shit hit the
> >> fan and no - we weren't.
> >>
> >> Vennlig hilsen
> >>
> >> roy
> >> --
> >> Roy Sigurd Karlsbakk
> >> (+47) 98013356
> >> http://blogg.karlsbakk.net/
> >> GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
> >> --
> >> Hið góða skaltu í stein höggva, hið illa í snjó rita.
> >>
> >> ----- Original Message -----
> >> > From: "Roy Sigurd Karlsbakk" <roy@karlsbakk.net>
> >> > To: "Jorge Nunes" <jorgebnunes@gmail.com>
> >> > Cc: "Linux Raid" <linux-raid@vger.kernel.org>
> >> > Sent: Tuesday, 5 April, 2022 02:29:03
> >> > Subject: Re: RAID 1 to RAID 5 failure
> >>
> >> >> Didn't do a backup :-(
> >> >
> >> > First mistake… *Always* keep a backup (or three)
> >> >
> >> >>
> >> >> Unmount everything:
> >> >
> >> > No need - what you should have done, was just to grow the array by
> >> >
> >> > Partition the new drives exactly like the old ones
> >> > mdadm --add /dev/md0 /dev/sd[cd]1 # note that sd[cd] means sdc and sdd, but can
> >> > be written this way on the commandline
> >> > mdadm --grow --level=5 --raid-devices=4
> >> >
> >> > This would have grown and converted the array to raid5 without any data loss.
> >> >
> >> >> $ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd
> >> >
> >> > As earlier mentioned, this is to create a new array, not a conversion.
> >> >
> >> >> So, my question is: Is there a chance to redo the array correctly
> >> >> without losing the information inside? Is it possible to recover the
> >> >> 'lost' partition that existed on RAID 1 to be able to do a convenient
> >> >> backup? Or the only chance is to have a correct disk alignment inside
> >> >> the array to be able to use photorec to recover the files correctly?
> >> >
> >> > As mentioned, it doesn't look promising, but there are a few things that can be
> >> > tried.
> >> >
> >> > Your data may still reside on the sda1 and sdd1, but since it was converted to
> >> > RAID-5, the data would have been distributed among the two drives and not being
> >> > the same on both. Further growing the raid, would move the data around to the
> >> > other disks. I did a small test here on some vdisks to see if this could be
> >> > reversed somehow and see if I could find the original filesystem. I could - but
> >> > it was terribly corrupted, so not a single file remained.
> >> >
> >> > If this was valuable data, there might be a way to rescue them, but I fear a lot
> >> > is overwritten already. Others in here (or other places) may know more about
> >> > how to fix this, though. If you find out how, please tell. It'd be interesting
> >> > to learn :)
> >> >
> >> > PS: I have my personal notebook for technical stuff at
> >> > https://wiki.karlsbakk.net/index.php/Roy's_notes in case you might find that
> >> > interesting. There's quite a bit about storage there. Simply growing a raid is
> >> > apparently forgotten, since I thought that was too simple. I'll add it.
> >> >
> >> > So hope you didn't lose too much valuable data
> >> >
> >> > Vennlig hilsen / Best regards
> >> >
> >> > roy
> >> > --
> >> > Roy Sigurd Karlsbakk
> >> > (+47) 98013356
> >> > --
> >> > I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er
> >> > et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av
> >> > idiomer med xenotyp etymologi. I de fleste tilfeller eksisterer adekvate og
> > > > relevante synonymer på norsk.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-05 10:50     ` Jorge Nunes
  2022-04-05 11:30       ` Roy Sigurd Karlsbakk
@ 2022-04-06 19:57       ` Wols Lists
  2022-04-06 20:46         ` Jorge Nunes
  1 sibling, 1 reply; 11+ messages in thread
From: Wols Lists @ 2022-04-06 19:57 UTC (permalink / raw)
  To: Jorge Nunes, Roy Sigurd Karlsbakk; +Cc: Linux Raid

On 05/04/2022 11:50, Jorge Nunes wrote:
> Hi roy.
> 
> Thank you for your time.
> 
> Now, I'm doing a photorec on /dev/sda and /dev/sdd and I get better
> results on (some) of the data recovered if I do it on top of /dev/md0.
> I don't care anymore about recovering the filesystem, I just want to
> maximize the quality of data recovered with photorec.

Once you've recovered everything you can, if no-one else has chimed in, 
do try shrinking it back to a 2-disk raid-5. It SHOULD restore your 
original filesystem. You've then just got to find out where it starts - 
what filesystem was it?
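(Very roughly, and only once people who know better have confirmed the
numbers, such a shrink would look something like this - the array-size value
is the old two-disk size from your earlier mdadm -D output, and the reshape
will want a backup file:)
```
$ sudo mdadm --grow /dev/md0 --array-size=2930134016
$ sudo mdadm --grow /dev/md0 --raid-devices=2 --backup-file=/root/md0-shrink.backup
```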

If it's an ext4 there's probably a signature which will tell you where 
it starts. Then somebody should be able to tell you how to mount it and 
back it up properly ...
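(One way to hunt for that signature without writing anything is testdisk,
which ships alongside the photorec you've already been using - it can scan
the device for lost partitions and filesystem starts:)
```
$ sudo testdisk /log /dev/md0
```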

I'm sure there will be clues to other file systems, ask on your distro 
list for more information - the more people who see a request for help, 
the more likely you are to get some.

Cheers,
Wol

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-06 19:57       ` Wols Lists
@ 2022-04-06 20:46         ` Jorge Nunes
  2022-04-06 21:02           ` Roy Sigurd Karlsbakk
  0 siblings, 1 reply; 11+ messages in thread
From: Jorge Nunes @ 2022-04-06 20:46 UTC (permalink / raw)
  To: Wols Lists; +Cc: Roy Sigurd Karlsbakk, Linux Raid

Hi again!

Roy: Thank you for your input. This recovery of the misaligned data
takes a lot of time, but I'm sticking with it until the end of the array.

Wol: I'll try that then, but someone will have to guide me through the
shrink and through getting the initial array layout back.

Thank you both!
Best,
Jorge

Wols Lists <antlists@youngman.org.uk> wrote on Wednesday,
6/04/2022 at 20:57:
>
> On 05/04/2022 11:50, Jorge Nunes wrote:
> > Hi roy.
> >
> > Thank you for your time.
> >
> > Now, I'm doing a photorec on /dev/sda and /dev/sdd and I get better
> > results on (some) of the data recovered if I do it on top of /dev/md0.
> > I don't care anymore about recovering the filesystem, I just want to
> > maximize the quality of data recovered with photorec.
>
> Once you've recovered everything you can, if no-one else has chimed in,
> do try shrinking it back to a 2-disk raid-5. It SHOULD restore your
> original filesystem. You've then just got to find out where it starts -
> what filesystem was it?
>
> If it's an ext4 there's probably a signature which will tell you where
> it starts. Then somebody should be able to tell you how to mount it and
> back it up properly ...
>
> I'm sure there will be clues to other file systems, ask on your distro
> list for more information - the more people who see a request for help,
> the more likely you are to get some.
>
> Cheers,
> Wol

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: RAID 1 to RAID 5 failure
  2022-04-06 20:46         ` Jorge Nunes
@ 2022-04-06 21:02           ` Roy Sigurd Karlsbakk
  0 siblings, 0 replies; 11+ messages in thread
From: Roy Sigurd Karlsbakk @ 2022-04-06 21:02 UTC (permalink / raw)
  To: Jorge Nunes; +Cc: Wols Lists, Linux Raid

[-- Attachment #1: Type: text/plain, Size: 2066 bytes --]

I made a log of my testing - perhaps this'll help

Vennlig hilsen / Best regards

roy
-- 
Roy Sigurd Karlsbakk
(+47) 98013356
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
Hið góða skaltu í stein höggva, hið illa í snjó rita.

----- Original Message -----
> From: "Jorge Nunes" <jorgebnunes@gmail.com>
> To: "Wols Lists" <antlists@youngman.org.uk>
> Cc: "Roy Sigurd Karlsbakk" <roy@karlsbakk.net>, "Linux Raid" <linux-raid@vger.kernel.org>
> Sent: Wednesday, 6 April, 2022 22:46:10
> Subject: Re: RAID 1 to RAID 5 failure

> Hi again!
> 
> Roy: Thank you for your input. This recovery of the misaligned data
> takes a lot of time but I'm keeping this task till the end of the
> array.
> 
> Wol: Then I'll try this but someone has to guide me to do this shrink
> and try to get the initial array alignment.
> 
> Thank you both!
> Best,
> Jorge
> 
> Wols Lists <antlists@youngman.org.uk> escreveu no dia quarta,
> 6/04/2022 à(s) 20:57:
>>
>> On 05/04/2022 11:50, Jorge Nunes wrote:
>> > Hi roy.
>> >
>> > Thank you for your time.
>> >
>> > Now, I'm doing a photorec on /dev/sda and /dev/sdd and I get better
>> > results on (some) of the data recovered if I do it on top of /dev/md0.
>> > I don't care anymore about recovering the filesystem, I just want to
>> > maximize the quality of data recovered with photorec.
>>
>> Once you've recovered everything you can, if no-one else has chimed in,
>> do try shrinking it back to a 2-disk raid-5. It SHOULD restore your
>> original filesystem. You've then just got to find out where it starts -
>> what filesystem was it?
>>
>> If it's an ext4 there's probably a signature which will tell you where
>> it starts. Then somebody should be able to tell you how to mount it and
>> back it up properly ...
>>
>> I'm sure there will be clues to other file systems, ask on your distro
>> list for more information - the more people who see a request for help,
>> the more likely you are to get some.
>>
>> Cheers,
> > Wol

[-- Attachment #2: major-fsckup.md --]
[-- Type: application/x-genesis-rom, Size: 5789 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2022-04-06 21:38 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-04 15:19 RAID 1 to RAID 5 failure Jorge Nunes
2022-04-04 16:42 ` Wols Lists
2022-04-04 17:17   ` Jorge Nunes
2022-04-05  0:29 ` Roy Sigurd Karlsbakk
2022-04-05  9:17   ` Roy Sigurd Karlsbakk
2022-04-05 10:50     ` Jorge Nunes
2022-04-05 11:30       ` Roy Sigurd Karlsbakk
2022-04-06  6:57         ` Jorge Nunes
2022-04-06 19:57       ` Wols Lists
2022-04-06 20:46         ` Jorge Nunes
2022-04-06 21:02           ` Roy Sigurd Karlsbakk
