* Recovering RAID-1
@ 2021-06-13 22:51 H
  2021-06-14  8:17 ` antlists
  0 siblings, 1 reply; 6+ messages in thread
From: H @ 2021-06-13 22:51 UTC (permalink / raw)
  To: Linux RAID Mailing List

I am running CentOS 7 and would like to see if I can "recover" a lost(?) RAID-1 setup on two identical SSDs. I would greatly appreciate some assistance.

Part of the history is that, many months ago, I used Intel fake RAID on the motherboard, but I /think/ I may also have used software RAID... The motherboard was replaced and, because there were some issues, I eventually abandoned the fake RAID and did not have mdadm running at that time. Thus, I have been operating off the two identical disks in non-RAID mode but would now like to see if I can get back to having RAID-1 running using mdadm only. There have been a number of OS updates, new software installations and other work done on the computer since then, but it is running fine.

Because I have no notes and really do not remember what I had done before the above motherboard replacement, I have to tread very carefully. Here is my current understanding:

- mdadm is installed on the system

- the two relevant disks are currently /dev/sdb and /dev/sdc (I also have two other disks in the system that are, for this purpose, irrelevant)

- gparted tells me that both disks are 238.47 GiB in size with:

-- /boot/efi of 260 MiB

-- /boot 1.00 GiB and formatted xfs

-- (LUKS) encrypted partition of 237.22 GiB

-- unallocated 4.34 MiB

- gparted further shows a key symbol next to /dev/sdc1, /dev/sdc2 and /dev/sdb3 but not next to /dev/sdb1, /dev/sdb2 and /dev/sdc3. Googling suggests that partitions with keys are in use and the ones with /no/ keys are /not/ in use?

- am I therefore correct that the system booted from /dev/sdc1 and /dev/sdc2 and is using /dev/sdb3 for everything /but/ /boot and /boot/efi?

- cat /proc/mdstat does not show any RAID information

- my /uneducated/ guess is that there /might/ be some RAID information in the last 4.34 MiB, but I am not sure how to check for it?

- my next /uneducated/ guess is that, if so, md metadata version 0.90 was/could have been used, since that is the format that seems to store its superblock at the end of the disk

I would very much appreciate it if anyone could suggest how to check the last items. Once this has been verified, the next step would be to get mdadm RAID-1 going again.
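
For reference, these are the (hopefully read-only) checks I was planning to start with - please tell me if any of them could be destructive:

cat /proc/mdstat
mdadm --examine /dev/sdb /dev/sdb1 /dev/sdb2 /dev/sdb3
mdadm --examine /dev/sdc /dev/sdc1 /dev/sdc2 /dev/sdc3
wipefs -n /dev/sdb /dev/sdc    # only lists signatures, does not erase anything
blkid /dev/sdb* /dev/sdc*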

If any of the above is incorrect in whole or partially, please advise.

Thanks!



* Re: Recovering RAID-1
  2021-06-13 22:51 Recovering RAID-1 H
@ 2021-06-14  8:17 ` antlists
  2021-06-14 17:35   ` H
  0 siblings, 1 reply; 6+ messages in thread
From: antlists @ 2021-06-14  8:17 UTC (permalink / raw)
  To: H, Linux RAID Mailing List

On 13/06/2021 23:51, H wrote:
> I would very much appreciate it if anyone could suggest how to check the last items. Once this has been verified, the next step would be to get mdadm RAID-1 going again.

An obvious first step is to run lsdrv. 
https://raid.wiki.kernel.org/index.php/Asking_for_help

That will hopefully find anything there.

But before you do anything BACKUP BACKUP BACKUP. It's only 250GB from 
what I can see - getting your hands on a 500GB or 1TB drive shouldn't be 
hard, and a quick stream of the partition shouldn't take long (although 
a "cp -a" might be safer, given that LUKS is involved ...).

Cheers,
Wol


* Re: Recovering RAID-1
  2021-06-14  8:17 ` antlists
@ 2021-06-14 17:35   ` H
  2021-06-16 18:02     ` Piergiorgio Sartor
  0 siblings, 1 reply; 6+ messages in thread
From: H @ 2021-06-14 17:35 UTC (permalink / raw)
  To: Linux RAID Mailing List

On 06/14/2021 04:17 AM, antlists wrote:
> On 13/06/2021 23:51, H wrote:
>> I would very much appreciate it if anyone could suggest how to check the last items. Once this has been verified, the next step would be to get mdadm RAID-1 going again.
>
> An obvious first step is to run lsdrv. https://raid.wiki.kernel.org/index.php/Asking_for_help
>
> That will hopefully find anything there.
>
> But before you do anything BACKUP BACKUP BACKUP. It's only 250GB from what I can see - getting your hands on a 500GB or 1TB drive shouldn't be hard, and a quick stream of the partition shouldn't take long (although a "cp -a" might be safer, given that LUKS is involved ...).
>
> Cheers,
> Wol

Thank you for the link, here is the output from the various packages listed on that page:

uname -a

Linux tsp520c 3.10.0-1160.2.2.el7.x86_64 #1 SMP Tue Oct 20 16:53:08 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

mdadm --version

mdadm - v4.1 - 2018-10-01

smartctl --xall /dev...

I skipped this since the output is lengthy and not sure which parts we might need.

mdadm --examine /dev/sdb (and /dev/sdc as well as individual partitions)

[root@tsp520c ~]# mdadm --examine /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] :    500118191 sectors at            1 (type ee)
[root@tsp520c ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
   MBR Magic : aa55
Partition[0] :   1701978223 sectors at   1948285285 (type 6e)
Partition[3] :          441 sectors at     28049408 (type 00)
[root@tsp520c ~]# mdadm --examine /dev/sdb2
mdadm: No md superblock detected on /dev/sdb2.
[root@tsp520c ~]# mdadm --examine /dev/sdb3
mdadm: No md superblock detected on /dev/sdb3.
[root@tsp520c ~]#

[root@tsp520c ~]# mdadm --examine /dev/sdc
/dev/sdc:
   MBR Magic : aa55
Partition[0] :    500118191 sectors at            1 (type ee)
[root@tsp520c ~]# mdadm --examine /dev/sdc1
/dev/sdc1:
   MBR Magic : aa55
Partition[0] :   1701978223 sectors at   1948285285 (type 6e)
Partition[3] :          441 sectors at     28049408 (type 00)
[root@tsp520c ~]# mdadm --examine /dev/sdc2
mdadm: No md superblock detected on /dev/sdc2.
[root@tsp520c ~]# mdadm --examine /dev/sdc3
mdadm: No md superblock detected on /dev/sdc3.

cat /proc/mdstat

Personalities :
unused devices: <none>

mdadm --detail /dev/mdx

There are no /dev/md devices

lsdrv

**Warning** The following utility(ies) failed to execute:
  sginfo
Some information may be missing.

USB [uas] Bus 002 Device 002: ID 0bc2:231a Seagate RSS LLC Expansion Portable {NAADA87P}
└scsi 0:0:0:0 Seagate  Expansion      
 └sda 3.64t [8:0] crypto_LUKS {f573965d-f469-4fc2-abf6-8155f7f422c4}
  └dm-4 3.64t [253:4] ext4 {3a94f5a0-058a-4002-9067-27ed211e99f0}
   └Mounted as /dev/mapper/luks-f573965d-f469-4fc2-abf6-8155f7f422c4 @ /run/media/hakan/3a94f5a0-058a-4002-9067-27ed211e99f0
PCI [ahci] 00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
├scsi 1:0:0:0 ATA      SAMSUNG MZ7LH256 {S4VSNE0MA03154}
│└sdb 238.47g [8:16] Partitioned (gpt)
│ ├sdb1 260.00m [8:17] vfat 'SYSTEM' {A850-134B}
│ ├sdb2 1.00g [8:18] xfs {2d8a56bf-f1e3-4f02-9ae7-3a20c987586d}
│ └sdb3 237.22g [8:19] crypto_LUKS {8fb015aa-50d8-49b5-9001-964e3247fc87}
│  └dm-0 237.21g [253:0] PV LVM2_member <237.21g used, 4.00m free {K082KU-HZAr-i6Np-9TwL-av7Z-Nytm-4I4jHe}
│   └VG centos_tsp520c 237.21g 4.00m free {Y4mpA3-tMd8-L5Pg-xYQF-lcQY-7wgk-Ox2iSi}
│    ├dm-3 179.52g [253:3] LV home xfs {1d7fabc3-c6f5-4e43-b609-ea86d33012c1}
│    │└Mounted as /dev/mapper/centos_tsp520c-home @ /home
│    ├dm-1 50.00g [253:1] LV root xfs {f4f1de82-b53d-4d6d-81f0-621103dddec5}
│    │└Mounted as /dev/mapper/centos_tsp520c-root @ /
│    └dm-2 7.69g [253:2] LV swap swap {7fbb4125-6394-4fe8-83a1-8ff0e079ae98}
├scsi 2:0:0:0 ATA      SAMSUNG MZ7LH256 {S4VSNE0MA03145}
│└sdc 238.47g [8:32] Partitioned (gpt)
│ ├sdc1 260.00m [8:33] vfat 'SYSTEM' {A850-134B}
│ │└Mounted as /dev/sdc1 @ /boot/efi
│ ├sdc2 1.00g [8:34] xfs {2d8a56bf-f1e3-4f02-9ae7-3a20c987586d}
│ │└Mounted as /dev/sdc2 @ /boot
│ └sdc3 237.22g [8:35] crypto_LUKS {8fb015aa-50d8-49b5-9001-964e3247fc87}
└scsi 7:0:0:0 ATA      Samsung SSD 860  {S597NE0MA20991N}
 └sdd 1.82t [8:48] Partitioned (gpt)
  ├sdd1 1.82t [8:49] zfs_member 'zfspool' {3888980096123243448}
  └sdd9 8.00m [8:57] Empty/Unknown
Other Block Devices
└loop0 0.00k [7:0] Empty/Unknown

Note that there are two other disks in the system which are not relevant (sda and sdd). The two identical SSDs, SAMSUNG MZ7LH256, are the ones that should be configured RAID-1 (sdb and sdc).

Thank you.



* Re: Recovering RAID-1
  2021-06-14 17:35   ` H
@ 2021-06-16 18:02     ` Piergiorgio Sartor
  2021-06-17 17:37       ` H
  0 siblings, 1 reply; 6+ messages in thread
From: Piergiorgio Sartor @ 2021-06-16 18:02 UTC (permalink / raw)
  To: H; +Cc: Linux RAID Mailing List

On Mon, Jun 14, 2021 at 01:35:06PM -0400, H wrote:
> On 06/14/2021 04:17 AM, antlists wrote:
> > On 13/06/2021 23:51, H wrote:
> >> I would very much appreciate it if anyone could suggest how to check the last items. Once this has been verified, the next step would be to get mdadm RAID-1 going again.
> >
> > An obvious first step is to run lsdrv. https://raid.wiki.kernel.org/index.php/Asking_for_help
> >
> > That will hopefully find anything there.
> >
> > But before you do anything BACKUP BACKUP BACKUP. It's only 250GB from what I can see - getting your hands on a 500GB or 1TB drive shouldn't be hard, and a quick stream of the partition shouldn't take long (although a "cp -a" might be safer, given that LUKS is involved ...).
> >
> > Cheers,
> > Wol
> 
> Thank you for the link, here is the output from the various packages listed on that page:
> 
> uname -a
> 
> Linux tsp520c 3.10.0-1160.2.2.el7.x86_64 #1 SMP Tue Oct 20 16:53:08 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
> 
> mdadm --version
> 
> mdadm - v4.1 - 2018-10-01
> 
> smartctl --xall /dev...
> 
> I skipped this since the output is lengthy and not sure which parts we might need.
> 
> mdadm --examine /dev/sdb (and /dev/sdc as well as individual partitions)
> 
> [root@tsp520c ~]# mdadm --examine /dev/sdb
> /dev/sdb:
>    MBR Magic : aa55
> Partition[0] :    500118191 sectors at            1 (type ee)
> [root@tsp520c ~]# mdadm --examine /dev/sdb1
> /dev/sdb1:
>    MBR Magic : aa55
> Partition[0] :   1701978223 sectors at   1948285285 (type 6e)
> Partition[3] :          441 sectors at     28049408 (type 00)
> [root@tsp520c ~]# mdadm --examine /dev/sdb2
> mdadm: No md superblock detected on /dev/sdb2.
> [root@tsp520c ~]# mdadm --examine /dev/sdb3
> mdadm: No md superblock detected on /dev/sdb3.
> [root@tsp520c ~]#
> 
> [root@tsp520c ~]# mdadm --examine /dev/sdc
> /dev/sdc:
>    MBR Magic : aa55
> Partition[0] :    500118191 sectors at            1 (type ee)
> [root@tsp520c ~]# mdadm --examine /dev/sdc1
> /dev/sdc1:
>    MBR Magic : aa55
> Partition[0] :   1701978223 sectors at   1948285285 (type 6e)
> Partition[3] :          441 sectors at     28049408 (type 00)
> [root@tsp520c ~]# mdadm --examine /dev/sdc2
> mdadm: No md superblock detected on /dev/sdc2.
> [root@tsp520c ~]# mdadm --examine /dev/sdc3
> mdadm: No md superblock detected on /dev/sdc3.

It does not seem there is any Linux RAID around.

Maybe it was the "fake RAID" or whatever, which
was handled by the motherboard.
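
If you want to double check for leftover Intel (IMSM) metadata,
something like this should be safe, both commands are read-only:

mdadm --examine /dev/sdb /dev/sdc   # an IMSM container, if any survived, would show up here
mdadm --detail-platform             # shows whether the controller exposes Intel firmware RAID at all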

If the data is accessible, just copy everything
somewhere else and reconfigure the storage.

bye,

pg

> 
> cat /proc/mdstat
> 
> Personalities :
> unused devices: <none>
> 
> mdadm --detail /dev/mdx
> 
> There are no /dev/md devices
> 
> lsdrv
> 
> **Warning** The following utility(ies) failed to execute:
>   sginfo
> Some information may be missing.
> 
> USB [uas] Bus 002 Device 002: ID 0bc2:231a Seagate RSS LLC Expansion Portable {NAADA87P}
> └scsi 0:0:0:0 Seagate  Expansion      
>  └sda 3.64t [8:0] crypto_LUKS {f573965d-f469-4fc2-abf6-8155f7f422c4}
>   └dm-4 3.64t [253:4] ext4 {3a94f5a0-058a-4002-9067-27ed211e99f0}
>    └Mounted as /dev/mapper/luks-f573965d-f469-4fc2-abf6-8155f7f422c4 @ /run/media/hakan/3a94f5a0-058a-4002-9067-27ed211e99f0
> PCI [ahci] 00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
> ├scsi 1:0:0:0 ATA      SAMSUNG MZ7LH256 {S4VSNE0MA03154}
> │└sdb 238.47g [8:16] Partitioned (gpt)
> │ ├sdb1 260.00m [8:17] vfat 'SYSTEM' {A850-134B}
> │ ├sdb2 1.00g [8:18] xfs {2d8a56bf-f1e3-4f02-9ae7-3a20c987586d}
> │ └sdb3 237.22g [8:19] crypto_LUKS {8fb015aa-50d8-49b5-9001-964e3247fc87}
> │  └dm-0 237.21g [253:0] PV LVM2_member <237.21g used, 4.00m free {K082KU-HZAr-i6Np-9TwL-av7Z-Nytm-4I4jHe}
> │   └VG centos_tsp520c 237.21g 4.00m free {Y4mpA3-tMd8-L5Pg-xYQF-lcQY-7wgk-Ox2iSi}
> │    ├dm-3 179.52g [253:3] LV home xfs {1d7fabc3-c6f5-4e43-b609-ea86d33012c1}
> │    │└Mounted as /dev/mapper/centos_tsp520c-home @ /home
> │    ├dm-1 50.00g [253:1] LV root xfs {f4f1de82-b53d-4d6d-81f0-621103dddec5}
> │    │└Mounted as /dev/mapper/centos_tsp520c-root @ /
> │    └dm-2 7.69g [253:2] LV swap swap {7fbb4125-6394-4fe8-83a1-8ff0e079ae98}
> ├scsi 2:0:0:0 ATA      SAMSUNG MZ7LH256 {S4VSNE0MA03145}
> │└sdc 238.47g [8:32] Partitioned (gpt)
> │ ├sdc1 260.00m [8:33] vfat 'SYSTEM' {A850-134B}
> │ │└Mounted as /dev/sdc1 @ /boot/efi
> │ ├sdc2 1.00g [8:34] xfs {2d8a56bf-f1e3-4f02-9ae7-3a20c987586d}
> │ │└Mounted as /dev/sdc2 @ /boot
> │ └sdc3 237.22g [8:35] crypto_LUKS {8fb015aa-50d8-49b5-9001-964e3247fc87}
> └scsi 7:0:0:0 ATA      Samsung SSD 860  {S597NE0MA20991N}
>  └sdd 1.82t [8:48] Partitioned (gpt)
>   ├sdd1 1.82t [8:49] zfs_member 'zfspool' {3888980096123243448}
>   └sdd9 8.00m [8:57] Empty/Unknown
> Other Block Devices
> └loop0 0.00k [7:0] Empty/Unknown
> 
> Note that there are two other disks in the system which are not relevant (sda and sdd). The two identical SSDs, SAMSUNG MZ7LH256, are the ones that should be configured RAID-1 (sdb and sdc).
> 
> Thank you.
> 
> 

-- 

piergiorgio


* Re: Recovering RAID-1
  2021-06-16 18:02     ` Piergiorgio Sartor
@ 2021-06-17 17:37       ` H
  2021-06-17 18:22         ` Wols Lists
  0 siblings, 1 reply; 6+ messages in thread
From: H @ 2021-06-17 17:37 UTC (permalink / raw)
  Cc: Linux RAID Mailing List

On 06/16/2021 02:02 PM, Piergiorgio Sartor wrote:
> On Mon, Jun 14, 2021 at 01:35:06PM -0400, H wrote:
>> On 06/14/2021 04:17 AM, antlists wrote:
>>> On 13/06/2021 23:51, H wrote:
>>>> I would very much appreciate it if anyone could suggest how to check the last items. Once this has been verified, the next step would be to get mdadm RAID-1 going again.
>>> An obvious first step is to run lsdrv. https://raid.wiki.kernel.org/index.php/Asking_for_help
>>>
>>> That will hopefully find anything there.
>>>
>>> But before you do anything BACKUP BACKUP BACKUP. It's only 250GB from what I can see - getting your hands on a 500GB or 1TB drive shouldn't be hard, and a quick stream of the partition shouldn't take long (although a "cp -a" might be safer, given that LUKS is involved ...).
>>>
>>> Cheers,
>>> Wol
>> Thank you for the link, here is the output from the various packages listed on that page:
>>
>> uname -a
>>
>> Linux tsp520c 3.10.0-1160.2.2.el7.x86_64 #1 SMP Tue Oct 20 16:53:08 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>>
>> mdadm --version
>>
>> mdadm - v4.1 - 2018-10-01
>>
>> smartctl --xall /dev...
>>
>> I skipped this since the output is lengthy and not sure which parts we might need.
>>
>> mdadm --examine /dev/sdb (and /dev/sdc as well as individual partitions)
>>
>> [root@tsp520c ~]# mdadm --examine /dev/sdb
>> /dev/sdb:
>>    MBR Magic : aa55
>> Partition[0] :    500118191 sectors at            1 (type ee)
>> [root@tsp520c ~]# mdadm --examine /dev/sdb1
>> /dev/sdb1:
>>    MBR Magic : aa55
>> Partition[0] :   1701978223 sectors at   1948285285 (type 6e)
>> Partition[3] :          441 sectors at     28049408 (type 00)
>> [root@tsp520c ~]# mdadm --examine /dev/sdb2
>> mdadm: No md superblock detected on /dev/sdb2.
>> [root@tsp520c ~]# mdadm --examine /dev/sdb3
>> mdadm: No md superblock detected on /dev/sdb3.
>> [root@tsp520c ~]#
>>
>> [root@tsp520c ~]# mdadm --examine /dev/sdc
>> /dev/sdc:
>>    MBR Magic : aa55
>> Partition[0] :    500118191 sectors at            1 (type ee)
>> [root@tsp520c ~]# mdadm --examine /dev/sdc1
>> /dev/sdc1:
>>    MBR Magic : aa55
>> Partition[0] :   1701978223 sectors at   1948285285 (type 6e)
>> Partition[3] :          441 sectors at     28049408 (type 00)
>> [root@tsp520c ~]# mdadm --examine /dev/sdc2
>> mdadm: No md superblock detected on /dev/sdc2.
>> [root@tsp520c ~]# mdadm --examine /dev/sdc3
>> mdadm: No md superblock detected on /dev/sdc3.
> It does not seem there is any Linux RAID around.
>
> Maybe it was the "fake RAID" or whatever, which
> was handled by the motherboard.
>
> If the data is accessible, just copy everything
> somewhere else and reconfigure the storage.
>
> bye,
>
> pg
>
>> cat /proc/mdstat
>>
>> Personalities :
>> unused devices: <none>
>>
>> mdadm --detail /dev/mdx
>>
>> There are no /dev/md devices
>>
>> lsdrv
>>
>> **Warning** The following utility(ies) failed to execute:
>>   sginfo
>> Some information may be missing.
>>
>> USB [uas] Bus 002 Device 002: ID 0bc2:231a Seagate RSS LLC Expansion Portable {NAADA87P}
>> └scsi 0:0:0:0 Seagate  Expansion      
>>  └sda 3.64t [8:0] crypto_LUKS {f573965d-f469-4fc2-abf6-8155f7f422c4}
>>   └dm-4 3.64t [253:4] ext4 {3a94f5a0-058a-4002-9067-27ed211e99f0}
>>    └Mounted as /dev/mapper/luks-f573965d-f469-4fc2-abf6-8155f7f422c4 @ /run/media/hakan/3a94f5a0-058a-4002-9067-27ed211e99f0
>> PCI [ahci] 00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
>> ├scsi 1:0:0:0 ATA      SAMSUNG MZ7LH256 {S4VSNE0MA03154}
>> │└sdb 238.47g [8:16] Partitioned (gpt)
>> │ ├sdb1 260.00m [8:17] vfat 'SYSTEM' {A850-134B}
>> │ ├sdb2 1.00g [8:18] xfs {2d8a56bf-f1e3-4f02-9ae7-3a20c987586d}
>> │ └sdb3 237.22g [8:19] crypto_LUKS {8fb015aa-50d8-49b5-9001-964e3247fc87}
>> │  └dm-0 237.21g [253:0] PV LVM2_member <237.21g used, 4.00m free {K082KU-HZAr-i6Np-9TwL-av7Z-Nytm-4I4jHe}
>> │   └VG centos_tsp520c 237.21g 4.00m free {Y4mpA3-tMd8-L5Pg-xYQF-lcQY-7wgk-Ox2iSi}
>> │    ├dm-3 179.52g [253:3] LV home xfs {1d7fabc3-c6f5-4e43-b609-ea86d33012c1}
>> │    │└Mounted as /dev/mapper/centos_tsp520c-home @ /home
>> │    ├dm-1 50.00g [253:1] LV root xfs {f4f1de82-b53d-4d6d-81f0-621103dddec5}
>> │    │└Mounted as /dev/mapper/centos_tsp520c-root @ /
>> │    └dm-2 7.69g [253:2] LV swap swap {7fbb4125-6394-4fe8-83a1-8ff0e079ae98}
>> ├scsi 2:0:0:0 ATA      SAMSUNG MZ7LH256 {S4VSNE0MA03145}
>> │└sdc 238.47g [8:32] Partitioned (gpt)
>> │ ├sdc1 260.00m [8:33] vfat 'SYSTEM' {A850-134B}
>> │ │└Mounted as /dev/sdc1 @ /boot/efi
>> │ ├sdc2 1.00g [8:34] xfs {2d8a56bf-f1e3-4f02-9ae7-3a20c987586d}
>> │ │└Mounted as /dev/sdc2 @ /boot
>> │ └sdc3 237.22g [8:35] crypto_LUKS {8fb015aa-50d8-49b5-9001-964e3247fc87}
>> └scsi 7:0:0:0 ATA      Samsung SSD 860  {S597NE0MA20991N}
>>  └sdd 1.82t [8:48] Partitioned (gpt)
>>   ├sdd1 1.82t [8:49] zfs_member 'zfspool' {3888980096123243448}
>>   └sdd9 8.00m [8:57] Empty/Unknown
>> Other Block Devices
>> └loop0 0.00k [7:0] Empty/Unknown
>>
>> Note that there are two other disks in the system which are not relevant (sda and sdd). The two identical SSDs, SAMSUNG MZ7LH256, are the ones that should be configured RAID-1 (sdb and sdc).
>>
>> Thank you.
>>
>>
I see. I do recollect that at one time I had /dev/md127 and /dev/md128 show up in gparted. Could they have been created by the Intel fake RAID?

If there are no signs of a remaining mdadm RAID installation and I have to install fresh, I do have a couple of questions. Naturally I need to make backups of the partitions before doing anything but:

- Since the system seems to be booting from sdc1 and sdc2 while using sdb3 for the data partition, I would think that any restoration should be from backups of sdc1, sdc2 and sdb3, correct?

- Is there any way to do a dry-run of a fresh mdadm installation to see if it might install on the disks as currently partitioned and not disturb sdc1, sdc2 and sdb3? I do have this minimal partition of ca 4 MiB at the end of both disks which, if I understand correctly, might be used by an mdadm 0.90 scheme?



* Re: Recovering RAID-1
  2021-06-17 17:37       ` H
@ 2021-06-17 18:22         ` Wols Lists
  0 siblings, 0 replies; 6+ messages in thread
From: Wols Lists @ 2021-06-17 18:22 UTC (permalink / raw)
  To: H; +Cc: Linux RAID Mailing List

On 17/06/21 18:37, H wrote:
> I see. I do recollect that at one time I had /dev/md127 and /dev/md128 show up in gparted. Could they have been created by the Intel fake RAID?
> 
> If there are no signs of a remaining mdadm RAID installation and I have to install fresh, I do have a couple of questions. Naturally I need to make backups of the partitions before doing anything but:
> 
> - Since the system seems to be booting from sdc1 and sdc2 while using sdb3 for the data partition, I would think that any restoration should be from backups of sdc1, sdc2 and sdb3, correct?

Yes. What you could do is create your new array(s) using sdb1, sdb2, and
sdc3, using the "missing" option to create single-disk mirrors. You then
dd the contents of sdc1, sdc2 and sdb3 across, make sure you can/are
booting cleanly from the mirrors, and then add sdc1, sdc2 and sdb3 to
the mirrors.
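
As an untested sketch only - device names are from your lsdrv output,
and the metadata versions are my assumption, see the caveats below -
that could look something like:

# degraded single-disk mirrors on the currently unused partitions
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdb1 missing  # future /boot/efi
mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=1.2 /dev/sdb2 missing  # future /boot
mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=1.2 /dev/sdc3 missing  # future LUKS volume

# copy the live data across (note the md devices are slightly smaller
# than the raw partitions because of the metadata, so check sizes first)
dd if=/dev/sdc1 of=/dev/md1 bs=4M
dd if=/dev/sdc2 of=/dev/md2 bs=4M
dd if=/dev/sdb3 of=/dev/md3 bs=4M

# once the system boots cleanly from the degraded mirrors, complete them
mdadm --manage /dev/md1 --add /dev/sdc1
mdadm --manage /dev/md2 --add /dev/sdc2
mdadm --manage /dev/md3 --add /dev/sdb3

(1.0 metadata on the EFI partition so the firmware still sees a plain
FAT filesystem; the bootloader, fstab and initramfs all need pointing
at the md devices as well.)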

I'd probably happily do this on a test system for the hell of it, but on
a live system I'd make sure I had backups and be rather careful...
> 
> - Is there any way to do a dry-run of a fresh mdadm installation to see if it might install on the disks as currently partitioned and not disturb sdc1, sdc2 and sdb3? I do have this minimal partition of ca 4 MiB at the end of both disks which, if I understand correctly, might be used by an mdadm 0.90 scheme?

As above, you could dry-run on the apparently unused partitions, but
BACKUP BACKUP BACKUP!

And no, that little partition at the end will not be where the md
metadata is stored. You're correct, v0.9 (and v1.0) store their metadata
at the end, but at the end of the raid partition, not in a separate
partition.
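
If you want to see for yourself, a rough read-only way to peek at the
spot where a 0.90 superblock would sit (offsets per the 0.90 layout as
I remember it - a 4K superblock in the last 64K-aligned block of the
member partition):

SZ=$(blockdev --getsize64 /dev/sdb3)    # partition size in bytes
dd if=/dev/sdb3 bs=65536 skip=$(( SZ / 65536 - 1 )) count=1 2>/dev/null | hexdump -C | head
# a 0.90 superblock starts with the magic 0xa92b4efc
# (stored little-endian on x86, so the bytes read fc 4e 2b a9)

Not that you should need it - "mdadm --examine" already looks there, so
the "No md superblock detected" you got is the authoritative answer.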

And you're better off using the recommended 1.2 metadata, because 0.9 is
deprecated, and 1.0 and 1.1 are considered vulnerable to being stomped
on by other utilities that don't know about raid. Even 1.2 is
vulnerable; there have been a couple of cases recently where it seems
that other software (the BIOS, even?) "helpfully" writes a GPT on an
apparently blank disk - RIGHT WHERE MDADM PUT ITS SUPERBLOCK. And these
utilities don't ask permission! Windows of course is notorious for this,
although it seems it no longer does it unprompted - it just assumes that
if the user invokes the disk management software then the user must know
what they're doing ... do they ever?
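
Once you do have arrays again, it's worth recording the layout somewhere
safe - a sketch, assuming gdisk/sgdisk is installed:

mdadm --detail --scan >> /etc/mdadm.conf        # so the arrays get assembled by UUID at boot
sgdisk --backup=/root/gpt-sdb.bak /dev/sdb      # keep a copy of each disk's partition table
sgdisk --backup=/root/gpt-sdc.bak /dev/sdc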

Cheers,
Wol

