* Booting after Debian upgrade: /dev/md5 does not exist
@ 2014-07-22 8:09 Ron Leach
2014-07-22 12:29 ` Phil Turmel
0 siblings, 1 reply; 9+ messages in thread
From: Ron Leach @ 2014-07-22 8:09 UTC (permalink / raw)
To: linux-raid
List, good morning,
After updating a 2 x 2TB RAID1 server from Debian Lenny to Debian
Squeeze today (first stage of 2-stage process to upgrade to current
Debian stable, Wheezy), boot sequence stops with a warning that
/dev/md5 does not exist. (7 partitions; /dev/md5 is mounted on /home;
md0 to md4 exist and mount ok, and so does md6; mdstat shows them
synchronised.)
There's some background. Some time ago, we had a drive failure and
tried a repair by inserting a new disc and 'dd'-ing the existing disc
in order to replicate the partition structures. That was the wrong
thing to do: mdadm became very active and tried to repair itself
during the dd operation, so I quickly cancelled the dd. I removed that
disc, ran in crippled mode while copying all the data off the system,
and took the machine
offline. (Data is safe, and running on a new, separate, RAID 1
server.) In the meantime, mdadm had recovered itself from the dd-ing
problems but in so doing had named the original /dev/md5 partition
/dev/md126 (there was a thread about similar md numbers a few months
ago). It seemed happy, even though mdadm.conf still referred to
/dev/md5, while fstab referred to /dev/md126; I never understood how,
or why, but it ran. I finally repaired the RAID1 by inserting another
new disc, using gdisk this time to replicate the partitions, and
repairing each md(x) one at a time, adding sdb(n) as appropriate. The
RAID1 was now operating, prior to the Debian upgrade.
After applying the upgrade to Squeeze, the new mdadm says that /dev/md5 does
not exist. I changed the fstab entry to refer to /dev/md5, to match
/etc/mdadm.conf, but booting still stops. I can continue the boot,
but without /home, nevertheless I can do maintenance and package
installations etc.
Is there another data file somewhere I need to repair so that mdadm
sees /dev/md5 and starts that array?
Here's mdadm.conf
D5s2:/# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root@systemdesk
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=eb3b45e8:e1d73b1a:63042e90:fced6612
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=93a0b403:18aa4e20:f77b0142:25a55090
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99104b71:9dd6cf88:e1a05948:57032dd7
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=5dbd5605:1d61cbaa:ac5c64ee:5356e8a9
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=725cfde4:114fef9a:4ed1ccad:18d72d44
ARRAY /dev/md5 level=raid1 num-devices=2 UUID=5bad4c7c:780696f4:fbaacbb9:204d67b9
ARRAY /dev/md6 level=raid1 num-devices=2 UUID=94171c8e:c47d18a8:c073121c:f9f222fe
# This file was auto-generated on Fri, 29 Jan 2010 16:06:38 +0000
# by mkconf $Id$
D5s2:/#
Another thought is that perhaps the uuid entry for /dev/md5 isn't
correct, especially since the file is dated 2010, years before the
disc problems. I'm fairly sure that nothing has removed
/dev/md5 or its underlying /dev/sda7 and /dev/sdb7, and it will
contain all /home 's data, so I didn't want to initialise the array.
I wondered whether there was some way I could define /dev/md5 using
'sda/sdb' notation, if, perhaps, the uuid info is incorrect.
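There is such a way: mdadm can assemble an array from explicitly named
member partitions, ignoring the UUID in mdadm.conf. A sketch (the
`build_cmd` helper is hypothetical, just to show how the command is
formed; only run the real command as root, and only if you are sure
sda7/sdb7 are the right members):

```shell
# Assemble an array by naming its member partitions directly, bypassing
# a stale UUID in mdadm.conf (needs root):
#   mdadm --assemble /dev/md5 /dev/sda7 /dev/sdb7
# Illustrative helper that builds the command line from an md number and
# a partition number (hypothetical, not part of mdadm):
build_cmd() {
  printf 'mdadm --assemble /dev/md%s /dev/sda%s /dev/sdb%s\n' "$1" "$2" "$2"
}
```

For example, `build_cmd 5 7` prints the assemble command for md5 from
partition 7 of each disc.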
I'm assuming the filesystem isn't an issue; this machine is using XFS
on all partitions.
Open to any suggestions, and I agree that locking dd away somewhere
out of reach of the dangerously under-informed would be a good idea,
for a start.
regards, Ron
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 8:09 Booting after Debian upgrade: /dev/md5 does not exist Ron Leach
@ 2014-07-22 12:29 ` Phil Turmel
2014-07-22 13:21 ` Ron Leach
0 siblings, 1 reply; 9+ messages in thread
From: Phil Turmel @ 2014-07-22 12:29 UTC (permalink / raw)
To: Ron Leach, linux-raid
Good morning Ron,
On 07/22/2014 04:09 AM, Ron Leach wrote:
> List, good morning,
>
> After updating a 2 x 2TB RAID1 server from Debian Lenny to Debian
> Squeeze today (first stage of 2-stage process to upgrade to current
> Debian stable, Wheezy), boot sequence stops with a warning that /dev/md5
> does not exist. (7 partitions; /dev/md5 is mounted on /home; md0 to md4
> exist and mount ok, and so does md6; mdstat shows them synchronised.)
[trim /]
> Is there another data file somewhere I need to repair so that mdadm sees
> /dev/md5 and starts that array?
Yes, there's a copy of mdadm.conf in your initramfs that governs what is
assembled in the early boot phase. Strictly speaking, only the arrays
needed to get to your root filesystem *must* be assembled then, but all
the distros I've tried assemble everything then. The "mkinitrd" or
"update-initramfs" utility will copy your mdadm.conf into the initramfs.
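On Debian the refresh is roughly the following (a sketch; `list_arrays`
is a hypothetical helper, not part of mdadm, shown only as a sanity
check on the conf file before rebuilding):

```shell
# After editing /etc/mdadm/mdadm.conf, regenerate the initramfs so the
# early-boot copy matches (run as root):
#   update-initramfs -u -k all
# Sanity check: print the md device named in each ARRAY line of a conf file.
list_arrays() {
  awk '$1 == "ARRAY" { print $2 }' "$1"
}
```

Running `list_arrays /etc/mdadm/mdadm.conf` should list /dev/md0
through /dev/md6 before you rebuild.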
> Here's mdadm.conf
>
> D5s2:/# cat /etc/mdadm/mdadm.conf
> # mdadm.conf
> #
> # Please refer to mdadm.conf(5) for information about this file.
> #
>
> # by default, scan all partitions (/proc/partitions) for MD superblocks.
> # alternatively, specify devices to scan, using wildcards if desired.
> DEVICE partitions
>
> # auto-create devices with Debian standard permissions
> CREATE owner=root group=disk mode=0660 auto=yes
>
> # automatically tag new arrays as belonging to the local system
> HOMEHOST <system>
>
> # instruct the monitoring daemon where to send mail alerts
> MAILADDR root@systemdesk
>
> # definitions of existing MD arrays
> ARRAY /dev/md0 level=raid1 num-devices=2 UUID=eb3b45e8:e1d73b1a:63042e90:fced6612
> ARRAY /dev/md1 level=raid1 num-devices=2 UUID=93a0b403:18aa4e20:f77b0142:25a55090
> ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99104b71:9dd6cf88:e1a05948:57032dd7
> ARRAY /dev/md3 level=raid1 num-devices=2 UUID=5dbd5605:1d61cbaa:ac5c64ee:5356e8a9
> ARRAY /dev/md4 level=raid1 num-devices=2 UUID=725cfde4:114fef9a:4ed1ccad:18d72d44
> ARRAY /dev/md5 level=raid1 num-devices=2 UUID=5bad4c7c:780696f4:fbaacbb9:204d67b9
> ARRAY /dev/md6 level=raid1 num-devices=2 UUID=94171c8e:c47d18a8:c073121c:f9f222fe
If you want your boot process to be as robust as possible, omit the
'level=' and 'num-devices=' selectors in the ARRAY clauses and identify
your filesystems in fstab with LABEL= or UUID= taken from the output of
"blkid". (Not the array UUIDs.)
You can start by using "mdadm -Es >>/etc/mdadm/mdadm.conf", deleting the
unnecessary parts, and adjusting array numbers to suit your preferences.
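That cleanup can be sketched as a filter over the scan output (the
`clean_scan` name and the /tmp path are illustrative; review the result
by hand before replacing the real /etc/mdadm/mdadm.conf):

```shell
# Take "mdadm -Es" output, drop the level=/num-devices= selectors, and
# renumber md126 back to md5.
clean_scan() {
  sed -e 's/ level=[^ ]*//' \
      -e 's/ num-devices=[^ ]*//' \
      -e 's|/dev/md126|/dev/md5|'
}
# Intended use (needs root):
#   mdadm -Es | clean_scan > /tmp/mdadm.conf.new
```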
Phil
* Re: Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 12:29 ` Phil Turmel
@ 2014-07-22 13:21 ` Ron Leach
2014-07-22 15:08 ` Phil Turmel
0 siblings, 1 reply; 9+ messages in thread
From: Ron Leach @ 2014-07-22 13:21 UTC (permalink / raw)
Cc: linux-raid
On 22/07/2014 13:29, Phil Turmel wrote:
>
> there's a copy of mdadm.conf in your initramfs that governs what is
> assembled in the early boot phase. Strictly speaking, only the arrays
> needed to get to your root filesystem *must* be assembled then, but all
> the distros I've tried assemble everything then. The "mkinitrd" or
> "update-initramfs" utility will copy your mdadm.conf into the initramfs.
Noted.
[...]
> If you want your boot process to be as robust as possible, omit the
> 'level=' and 'num-devices=' selectors in the ARRAY clauses and identify
> your filesystems in fstab with LABEL= or UUID= taken from the output of
> "blkid". (Not the array UUIDs.)
>
I don't think I quite follow this. I worry that I'll make the
incorrect change and then not be able to boot or then repair my mistake.
Here's the output of blkid:
D5s2:/# blkid
/dev/sda1: UUID="eb3b45e8-e1d7-3b1a-6304-2e90fced6612" TYPE="linux_raid_member"
/dev/sda2: UUID="93a0b403-18aa-4e20-f77b-014225a55090" TYPE="linux_raid_member"
/dev/sda3: UUID="99104b71-9dd6-cf88-e1a0-594857032dd7" TYPE="linux_raid_member"
/dev/sda4: UUID="5dbd5605-1d61-cbaa-ac5c-64ee5356e8a9" TYPE="linux_raid_member"
/dev/sda5: TYPE="swap"
/dev/sda6: UUID="725cfde4-114f-ef9a-4ed1-ccad18d72d44" TYPE="linux_raid_member"
/dev/sda7: UUID="5bad4c7c-7806-96f4-e201-a2f57bba85d7" TYPE="linux_raid_member"
/dev/sda8: UUID="94171c8e-c47d-18a8-c073-121cf9f222fe" TYPE="linux_raid_member"
/dev/sdb1: UUID="eb3b45e8-e1d7-3b1a-6304-2e90fced6612" TYPE="linux_raid_member"
/dev/sdb2: UUID="93a0b403-18aa-4e20-f77b-014225a55090" TYPE="linux_raid_member"
/dev/sdb3: UUID="99104b71-9dd6-cf88-e1a0-594857032dd7" TYPE="linux_raid_member"
/dev/sdb4: UUID="5dbd5605-1d61-cbaa-ac5c-64ee5356e8a9" TYPE="linux_raid_member"
/dev/sdb5: TYPE="swap"
/dev/sdb6: UUID="725cfde4-114f-ef9a-4ed1-ccad18d72d44" TYPE="linux_raid_member"
/dev/sdb7: UUID="5bad4c7c-7806-96f4-e201-a2f57bba85d7" TYPE="linux_raid_member"
/dev/sdb8: UUID="94171c8e-c47d-18a8-c073-121cf9f222fe" TYPE="linux_raid_member"
/dev/md0: LABEL="boot" UUID="67c165a8-020a-4931-98d4-21b3dcb5d53c" TYPE="ext2"
/dev/md1: LABEL="slash" UUID="6fa78f26-4ca9-4e41-909d-ac4c8877f317" TYPE="xfs"
/dev/md2: LABEL="usr" UUID="9ba54810-c299-424c-b312-e13325e00e4f" TYPE="xfs"
/dev/md3: LABEL="var" UUID="7d4918f3-eb9e-493a-a106-b9c21eff412c" TYPE="xfs"
/dev/md4: LABEL="tmp" UUID="cf09135c-cc46-424f-9f0b-a737cfacf27b" TYPE="xfs"
/dev/md6: LABEL="Data97" UUID="a2e22925-f763-4b70-9559-d959b1eb9329" TYPE="xfs"
D5s2:/#
My first query: /dev/md5 is missing from this list; should it be, at
this stage?
You mentioned not to use the array uuids, but aren't these uuids the
only uuids equating to the md device? The other ids listed here by
blkid are the individual partitions on the underlying drives.
> You can start by using "mdadm -Es >>/etc/mdadm/mdadm.conf", deleting the
> unnecessary parts, and adjusting array numbers to suit your preferences.
>
D5s2:/# mdadm -Es>>/etc/mdadm/mdadm.conf2
D5s2:/# cat /etc/mdadm/mdadm.conf2
ARRAY /dev/md0 UUID=eb3b45e8:e1d73b1a:63042e90:fced6612
ARRAY /dev/md1 UUID=93a0b403:18aa4e20:f77b0142:25a55090
ARRAY /dev/md2 UUID=99104b71:9dd6cf88:e1a05948:57032dd7
ARRAY /dev/md3 UUID=5dbd5605:1d61cbaa:ac5c64ee:5356e8a9
ARRAY /dev/md4 UUID=725cfde4:114fef9a:4ed1ccad:18d72d44
ARRAY /dev/md126 UUID=5bad4c7c:780696f4:e201a2f5:7bba85d7
ARRAY /dev/md6 UUID=94171c8e:c47d18a8:c073121c:f9f222fe
D5s2:/#
There's /dev/md126, again. This isn't the proper 'conf' file yet; is
it safe to change /dev/md126 to /dev/md5, when overwriting the proper
'conf' file?
Ah, it's dawning. Did you mean that these uuids, from the -Es
command, and which are labelled ARRAY, are 'not' the uuids to use in
fstab, but the other uuids from blkid, and labelled UUID, are 'ok' to
use in fstab? I think I've got it.
So I've got to
(i) change fstab to have blkid's listed UUIDs,
No, there's a problem. I don't have a UUID for /dev/md5, for my /home
mount in fstab.
(ii) change mdadm.conf to have this new set of ARRAY statements (from
-Es command) instead of the existing set in mdadm.conf, with md126
replaced by md5, and
(iii) get this mdadm.conf copied into initramfs.
I'm not confident about fstab, because of the missing /dev/md5
identifier. So I haven't made any changes, yet.
Phil, thanks for the reply, and advice; sorry to still seem so cautious,
regards, Ron
* Re: Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 13:21 ` Ron Leach
@ 2014-07-22 15:08 ` Phil Turmel
2014-07-22 15:57 ` Ron Leach
0 siblings, 1 reply; 9+ messages in thread
From: Phil Turmel @ 2014-07-22 15:08 UTC (permalink / raw)
To: Ron Leach; +Cc: linux-raid
Hi Ron,
On 07/22/2014 09:21 AM, Ron Leach wrote:
[trim /]
> Ah, it's dawning. Did you mean that these uuids, from the -Es command,
> and which are labelled ARRAY, are 'not' the uuids to use in fstab, but
> the other uuids from blkid, and labelled UUID, are 'ok' to use in
> fstab? I think I've got it.
>
> So I've got to
>
> (i) change fstab to have blkid's listed UUIDs,
> No, there's a problem. I don't have a UUID for /dev/md5, for my /home
> mount in fstab.
>
> (ii) change mdadm.conf to have this new set of ARRAY statements (from
> -Es command) instead of the existing set in mdadm.conf, with md126
> replaced by md5, and
>
> (iii) get this mdadm.conf copied into initramfs.
Yes, you've got it.
> I'm not confident about fstab, because of the missing /dev/md5
> identifier. So I haven't made any changes, yet.
>
> Phil, thanks for the reply, and advice; sorry to still seem so cautious,
The changes to mdadm.conf and the changes to fstab can be done
independently. If you do (ii) and (iii), and the fstab has the
corresponding /dev/mdX for the correct filesystems, you'll be able to
boot successfully.
I'm not sure why /dev/md126 didn't show up in blkid's report. You might
still have a problem there. Show the content of /proc/mdstat, please.
It might also help to document your system layout with "lsdrv" [1].
Phil
[1] https://github.com/pturmel/lsdrv
* Re: Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 15:08 ` Phil Turmel
@ 2014-07-22 15:57 ` Ron Leach
2014-07-22 16:30 ` SOLVED " Ron Leach
2014-07-22 16:57 ` Chris Murphy
0 siblings, 2 replies; 9+ messages in thread
From: Ron Leach @ 2014-07-22 15:57 UTC (permalink / raw)
To: linux-raid
On 22/07/2014 16:08, Phil Turmel wrote:
> I'm not sure why /dev/md126 didn't show up in blkid's report. You might
> still have a problem there. Show the content of /proc/mdstat, please.
>
D5s2:/# cat /proc/mdstat
Personalities : [raid1]
md6 : active raid1 sda8[0] sdb8[1]
1894420672 blocks [2/2] [UU]
md4 : active raid1 sda6[0] sdb6[1]
976448 blocks [2/2] [UU]
md3 : active raid1 sda4[0] sdb4[1]
4882688 blocks [2/2] [UU]
md2 : active raid1 sda3[0] sdb3[1]
9765504 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
2929600 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
499904 blocks [2/2] [UU]
unused devices: <none>
D5s2:/#
No mention of sda7, sdb7, /dev/md126 or /dev/md5
> It might also help to document your system layout with "lsdrv" [1].
>
Thanks for that link. I think lsdrv doesn't see sda7 or sdb7, either:
D5s2:/# ./lsdrv
**Warning** The following utility(ies) failed to execute:
smartctl
pvs
lvs
Some information may be missing.
Controller platform [None]
└platform floppy.0
└fd0 4.00k [2:0] Empty/Unknown
USB [usb-storage] Bus 001 Device 002: ID 13fd:0842 Initio Corporation {SATASLIM0000104729f}
└scsi 0:0:0:0 TSSTcorp CDDVDW SE-S084C
└sr0 640.83m [11:0] Empty/Unknown
PCI [ata_piix] 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01)
├scsi 1:0:1:0 ATA ST32000542AS {xxxxxxxx}
│└sda 1.82t [8:0] Empty/Unknown
│ ├sda1 488.28m [8:1] MD raid1 (0/2) (w/ sdb1) in_sync {eb3b45e8:e1d73b1a:63042e90:fced6612}
│ │└md0 488.19m [9:0] MD v0.90 raid1 (2) clean {eb3b45e8:e1d73b1a:63042e90:fced6612}
│ │ │ ext2 'boot' {67c165a8-020a-4931-98d4-21b3dcb5d53c}
│ │ └Mounted as /dev/md0 @ /boot
│ ├sda2 2.79g [8:2] MD raid1 (0/2) (w/ sdb2) in_sync {93a0b403:18aa4e20:f77b0142:25a55090}
│ │└md1 2.79g [9:1] MD v0.90 raid1 (2) clean {93a0b403:18aa4e20:f77b0142:25a55090}
│ │ │ xfs 'slash' {6fa78f26-4ca9-4e41-909d-ac4c8877f317}
│ │ └Mounted as /dev/root @ /
│ ├sda3 9.31g [8:3] MD raid1 (0/2) (w/ sdb3) in_sync {99104b71:9dd6cf88:e1a05948:57032dd7}
│ │└md2 9.31g [9:2] MD v0.90 raid1 (2) clean {99104b71:9dd6cf88:e1a05948:57032dd7}
│ │ │ xfs 'usr' {9ba54810-c299-424c-b312-e13325e00e4f}
│ │ └Mounted as /dev/md2 @ /usr
│ ├sda4 4.66g [8:4] MD raid1 (0/2) (w/ sdb4) in_sync {5dbd5605:1d61cbaa:ac5c64ee:5356e8a9}
│ │└md3 4.66g [9:3] MD v0.90 raid1 (2) clean {5dbd5605:1d61cbaa:ac5c64ee:5356e8a9}
│ │ │ xfs 'var' {7d4918f3-eb9e-493a-a106-b9c21eff412c}
│ │ └Mounted as /dev/md3 @ /var
│ ├sda5 953.67m [8:5] swap
│ ├sda6 953.67m [8:6] MD raid1 (0/2) (w/ sdb6) in_sync {725cfde4:114fef9a:4ed1ccad:18d72d44}
│ │└md4 953.56m [9:4] MD v0.90 raid1 (2) clean {725cfde4:114fef9a:4ed1ccad:18d72d44}
│ │ │ xfs 'tmp' {cf09135c-cc46-424f-9f0b-a737cfacf27b}
│ │ └Mounted as /dev/md4 @ /tmp
│ ├sda7 37.25g [8:7] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
│ └sda8 1.76t [8:8] MD raid1 (0/2) (w/ sdb8) in_sync {94171c8e:c47d18a8:c073121c:f9f222fe}
│ └md6 1.76t [9:6] MD v0.90 raid1 (2) clean {94171c8e:c47d18a8:c073121c:f9f222fe}
│ │ xfs 'Data97' {a2e22925-f763-4b70-9559-d959b1eb9329}
│ └Mounted as /dev/md6 @ /Data97
└scsi 2:0:0:0 ATA ST2000DL003-9VT1 {xxxxxxxx}
└sdb 1.82t [8:16] Empty/Unknown
├sdb1 488.28m [8:17] MD raid1 (1/2) (w/ sda1) in_sync {eb3b45e8:e1d73b1a:63042e90:fced6612}
│└md0 488.19m [9:0] MD v0.90 raid1 (2) clean {eb3b45e8:e1d73b1a:63042e90:fced6612}
│ ext2 'boot' {67c165a8-020a-4931-98d4-21b3dcb5d53c}
├sdb2 2.79g [8:18] MD raid1 (1/2) (w/ sda2) in_sync {93a0b403:18aa4e20:f77b0142:25a55090}
│└md1 2.79g [9:1] MD v0.90 raid1 (2) clean {93a0b403:18aa4e20:f77b0142:25a55090}
│ xfs 'slash' {6fa78f26-4ca9-4e41-909d-ac4c8877f317}
├sdb3 9.31g [8:19] MD raid1 (1/2) (w/ sda3) in_sync {99104b71:9dd6cf88:e1a05948:57032dd7}
│└md2 9.31g [9:2] MD v0.90 raid1 (2) clean {99104b71:9dd6cf88:e1a05948:57032dd7}
│ xfs 'usr' {9ba54810-c299-424c-b312-e13325e00e4f}
├sdb4 4.66g [8:20] MD raid1 (1/2) (w/ sda4) in_sync {5dbd5605:1d61cbaa:ac5c64ee:5356e8a9}
│└md3 4.66g [9:3] MD v0.90 raid1 (2) clean {5dbd5605:1d61cbaa:ac5c64ee:5356e8a9}
│ xfs 'var' {7d4918f3-eb9e-493a-a106-b9c21eff412c}
├sdb5 953.67m [8:21] swap
├sdb6 953.67m [8:22] MD raid1 (1/2) (w/ sda6) in_sync {725cfde4:114fef9a:4ed1ccad:18d72d44}
│└md4 953.56m [9:4] MD v0.90 raid1 (2) clean {725cfde4:114fef9a:4ed1ccad:18d72d44}
│ xfs 'tmp' {cf09135c-cc46-424f-9f0b-a737cfacf27b}
├sdb7 37.25g [8:23] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
└sdb8 1.76t [8:24] MD raid1 (1/2) (w/ sda8) in_sync {94171c8e:c47d18a8:c073121c:f9f222fe}
└md6 1.76t [9:6] MD v0.90 raid1 (2) clean {94171c8e:c47d18a8:c073121c:f9f222fe}
xfs 'Data97' {a2e22925-f763-4b70-9559-d959b1eb9329}
Other Block Devices
├loop0 0.00k [7:0] Empty/Unknown
├loop1 0.00k [7:1] Empty/Unknown
├loop2 0.00k [7:2] Empty/Unknown
├loop3 0.00k [7:3] Empty/Unknown
├loop4 0.00k [7:4] Empty/Unknown
├loop5 0.00k [7:5] Empty/Unknown
├loop6 0.00k [7:6] Empty/Unknown
├loop7 0.00k [7:7] Empty/Unknown
├ram0 8.00m [1:0] Empty/Unknown
├ram1 8.00m [1:1] Empty/Unknown
├ram2 8.00m [1:2] Empty/Unknown
├ram3 8.00m [1:3] Empty/Unknown
├ram4 8.00m [1:4] Empty/Unknown
├ram5 8.00m [1:5] Empty/Unknown
├ram6 8.00m [1:6] Empty/Unknown
├ram7 8.00m [1:7] Empty/Unknown
├ram8 8.00m [1:8] Empty/Unknown
├ram9 8.00m [1:9] Empty/Unknown
├ram10 8.00m [1:10] Empty/Unknown
├ram11 8.00m [1:11] Empty/Unknown
├ram12 8.00m [1:12] Empty/Unknown
├ram13 8.00m [1:13] Empty/Unknown
├ram14 8.00m [1:14] Empty/Unknown
└ram15 8.00m [1:15] Empty/Unknown
D5s2:/#
No sign of sda7/sdb7, the partitions that underlie /dev/md126. I ran
gdisk on sda (sdb is identical - I had used gdisk to replicate the
partition structure when I did that RAID 1 repair a few days ago):
D5s2:/# gdisk
GPT fdisk (gdisk) version 0.8.1
Type device filename, or press <Enter> to exit: /dev/sda
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): ?
b back up GPT data to a file
c change a partition's name
d delete a partition
i show detailed information on a partition
l list known partition types
n add a new partition
o create a new empty GUID partition table (GPT)
p print the partition table
q quit without saving changes
r recovery and transformation options (experts only)
s sort partitions
t change a partition's type code
v verify disk
w write table to disk and exit
x extra functionality (experts only)
? print this menu
Command (? for help): i
Partition number (1-8): 7
Partition GUID code: A19D880F-05FC-4D3B-A006-743F0F84911E (Linux RAID)
Partition unique GUID: 34925916-6AB9-4461-8F2D-32DA766C2116
First sector: 40062540 (at 19.1 GiB)
Last sector: 118187540 (at 56.4 GiB)
Partition size: 78125001 sectors (37.3 GiB)
Attribute flags: 0000000000000000
Partition name: ''
Command (? for help):
Note: that partition name is two separate apostrophes, i.e. blank, as
are the names of most of the partitions. So sda7 exists (and it has
/home on it, if we could but
see it).
Phil, thanks very much for the time you've spent on this. I'm going
to carefully make the changes to mdadm.conf, fstab, and initramfs now,
and reboot; it seems as though it might boot again OK.
regards, Ron
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: SOLVED Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 15:57 ` Ron Leach
@ 2014-07-22 16:30 ` Ron Leach
2014-07-22 16:57 ` Chris Murphy
1 sibling, 0 replies; 9+ messages in thread
From: Ron Leach @ 2014-07-22 16:30 UTC (permalink / raw)
To: linux-raid
Phil, following all your advice, the server's mounts are coming up,
RAID1 is synchronised, and /home is usable (and used); the machine is
working.
While updating the initramfs, the update routine logged:
mdadm: /dev/md5 - no such device
(or something similar, I've lost the screen since a reboot)
I reverted to referring to the array as /dev/md126, in both
mdadm.conf and in fstab, and the initramfs update complained about
that as well. So I left it at that, and rebooted.
Reboot was perfect. Now all the mdX arrays are up, with one labelled
/dev/md126, but that's OK: it's named the same in the .conf file as in
fstab. The fstab had a number of redundant entries from periods when
there were other discs also in the machine, so I cleaned those up as
well.
On, now, with the rest of the Debian upgrade. Again, very many thanks
for your help and patience.
regards, Ron
* Re: Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 15:57 ` Ron Leach
2014-07-22 16:30 ` SOLVED " Ron Leach
@ 2014-07-22 16:57 ` Chris Murphy
2014-07-22 17:39 ` Phil Turmel
1 sibling, 1 reply; 9+ messages in thread
From: Chris Murphy @ 2014-07-22 16:57 UTC (permalink / raw)
To: Ron Leach; +Cc: linux-raid
On Jul 22, 2014, at 9:57 AM, Ron Leach <ronleach@tesco.net> wrote:
>
> D5s2:/# ./lsdrv
[snip]
> │ ├sda7 37.25g [8:7] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
[snip]
> ├sdb7 37.25g [8:23] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
They are in the lsdrv listing, but the RAID is not activated. The problem is a RAID UUID mismatch between mdadm.conf and libblkid (I'm assuming the tree lsdrv generates ultimately comes from libblkid; I could be wrong).
5bad4c7c:780696f4:fbaacbb9:204d67b9 ## mdadm.conf
5bad4c7c:780696f4:e201a2f5:7bba85d7 ## libblkid
Therefore it's not being assembled.
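The mismatch is easy to miss because mdadm prints the array UUID with
colons while blkid prints it with dashes, and the grouping differs. One
way to compare them is to strip the separators first (`same_uuid` is a
hypothetical helper, not a standard tool):

```shell
# Compare two array-UUID renderings, ignoring ':' vs '-' and grouping.
same_uuid() {
  a=$(printf '%s' "$1" | tr -d ':-')
  b=$(printf '%s' "$2" | tr -d ':-')
  [ "$a" = "$b" ]
}
```

Applied to the two strings above, the hex itself differs in the second
half, so this is a genuinely stale entry in mdadm.conf, not just a
formatting difference.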
Chris Murphy
* Re: Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 16:57 ` Chris Murphy
@ 2014-07-22 17:39 ` Phil Turmel
2014-07-22 18:12 ` Chris Murphy
0 siblings, 1 reply; 9+ messages in thread
From: Phil Turmel @ 2014-07-22 17:39 UTC (permalink / raw)
To: Chris Murphy, Ron Leach; +Cc: linux-raid
Hi Chris,
On 07/22/2014 12:57 PM, Chris Murphy wrote:
>
> On Jul 22, 2014, at 9:57 AM, Ron Leach <ronleach@tesco.net> wrote:
>>
>> D5s2:/# ./lsdrv
> [snip]
>> │ ├sda7 37.25g [8:7] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
> [snip]
>> ├sdb7 37.25g [8:23] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
>
>
> They are in the lsdrv listing, but the raid is not activated. The problem is a RAID UUID mismatch between mdadm.conf and libblkid (I'm assuming the tree lsdrv is generating ultimately comes from libblkid, I could be wrong.)
lsdrv calls out to udev's "vol_id" utility if present, otherwise calls
out to "blkid" in "probe" mode. So yes, libblkid.
> 5bad4c7c:780696f4:fbaacbb9:204d67b9 ## mdadm.conf
> 5bad4c7c:780696f4:e201a2f5:7bba85d7 ## libblkid
>
> Therefore it's not being assembled.
Good catch. UUIDs make my eyes cross.
In this case, since the initramfs is auto-assembling everything, it's
getting a high minor number instead of the desired minor number.
Ron, while md126 is assembled, you should get a report from blkid for
that filesystem. Then your fstab can pick it up by UUID instead of
device name.
Phil
* Re: Booting after Debian upgrade: /dev/md5 does not exist
2014-07-22 17:39 ` Phil Turmel
@ 2014-07-22 18:12 ` Chris Murphy
0 siblings, 0 replies; 9+ messages in thread
From: Chris Murphy @ 2014-07-22 18:12 UTC (permalink / raw)
To: linux-raid@vger.kernel.org List
On Jul 22, 2014, at 11:39 AM, Phil Turmel <philip@turmel.org> wrote:
> Hi Chris,
>
> On 07/22/2014 12:57 PM, Chris Murphy wrote:
>>
>> On Jul 22, 2014, at 9:57 AM, Ron Leach <ronleach@tesco.net> wrote:
>>>
>>> D5s2:/# ./lsdrv
>> [snip]
>>> │ ├sda7 37.25g [8:7] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
>> [snip]
>>> ├sdb7 37.25g [8:23] MD raid1 (2) inactive {5bad4c7c:780696f4:e201a2f5:7bba85d7}
>>
>>
>> They are in the lsdrv listing, but the raid is not activated. The problem is a RAID UUID mismatch between mdadm.conf and libblkid (I'm assuming the tree lsdrv is generating ultimately comes from libblkid, I could be wrong.)
>
> lsdrv calls out to udev's "vol_id" utility if present, otherwise calls
> out to "blkid" in "probe" mode. So yes, libblkid.
>
>> 5bad4c7c:780696f4:fbaacbb9:204d67b9 ## mdadm.conf
>> 5bad4c7c:780696f4:e201a2f5:7bba85d7 ## libblkid
>>
>> Therefore it's not being assembled.
>
> Good catch. UUIDs make my eyes cross.
>
> In this case, since the initramfs is auto-assembling everything, its
> getting a high minor number instead of the desired minor number.
>
> Ron, while md126 is assembled, you should get a report from blkid for
> that filesystem. Then your fstab can pick it up by UUID instead of
> device name.
Right. Once the raid is active, libblkid will become aware of the filesystem/volume UUID, and it's that UUID to put in fstab. So two different UUIDs: raid goes in mdadm.conf, and volume goes in fstab.
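Using md6 from this thread as an illustration (both UUIDs taken from
the mdadm -Es and blkid output quoted earlier), the two entries look
like this:

```
# /etc/mdadm/mdadm.conf: array identified by its RAID superblock UUID
ARRAY /dev/md6 UUID=94171c8e:c47d18a8:c073121c:f9f222fe

# /etc/fstab: filesystem identified by its volume UUID (from blkid)
UUID=a2e22925-f763-4b70-9559-d959b1eb9329  /Data97  xfs  defaults  0  2
```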
Chris Murphy