* GRUB warning after replacing disk drive in RAID1
@ 2017-02-27 23:37 Peter Sangas
2017-02-28 9:23 ` Reindl Harald
0 siblings, 1 reply; 19+ messages in thread
From: Peter Sangas @ 2017-02-27 23:37 UTC (permalink / raw)
To: linux-raid
Hi all.
I have a RAID1 with 3 disks: sda, sdb, sdc. After replacing sdc and re-syncing
it into the array, I issued the following command to install GRUB, but I get
this warning:
grub-install /dev/sdc
Installing for i386-pc platform.
grub-install: warning: Couldn't find physical volume `(null)'. Some modules
may be missing from core image..
grub-install: warning: Couldn't find physical volume `(null)'. Some modules
may be missing from core image..
Installation finished. No error reported.
Does anyone know why I get this warning and how to avoid it?
uname -a
Linux green 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux
mdadm -V
mdadm - v3.3 - 3rd September 2013
Thanks,
Pete
* Re: GRUB warning after replacing disk drive in RAID1
2017-02-27 23:37 GRUB warning after replacing disk drive in RAID1 Peter Sangas
@ 2017-02-28 9:23 ` Reindl Harald
2017-02-28 21:01 ` Peter Sangas
0 siblings, 1 reply; 19+ messages in thread
From: Reindl Harald @ 2017-02-28 9:23 UTC (permalink / raw)
To: linux-raid
On 28.02.2017 at 00:37, Peter Sangas wrote:
> I have a RAID1 with 3 disks sda,sdb,sdc. After replacing sdc and re-syncing
> it to the array I issued the following command to load grub but I get this
> warning:
>
> grub-install /dev/sdc
>
> Installing for i386-pc platform.
> grub-install: warning: Couldn't find physical volume `(null)'. Some modules
> may be missing from core image..
> grub-install: warning: Couldn't find physical volume `(null)'. Some modules
> may be missing from core image..
> Installation finished. No error reported.
>
> Does anyone know why I get this warning and how to avoid it?
it's harmless and disappears after the resync has finished
* RE: GRUB warning after replacing disk drive in RAID1
2017-02-28 9:23 ` Reindl Harald
@ 2017-02-28 21:01 ` Peter Sangas
2017-02-28 22:34 ` Reindl Harald
0 siblings, 1 reply; 19+ messages in thread
From: Peter Sangas @ 2017-02-28 21:01 UTC (permalink / raw)
To: 'Reindl Harald', linux-raid
But I issue the grub command AFTER the re-sync is completed.
-----Original Message-----
From: Reindl Harald [mailto:h.reindl@thelounge.net]
Sent: Tuesday, February 28, 2017 1:23 AM
To: linux-raid@vger.kernel.org
Subject: Re: GRUB warning after replacing disk drive in RAID1

On 28.02.2017 at 00:37, Peter Sangas wrote:
> [...]
> Does anyone know why I get this warning and how to avoid it?

it's harmless and disappears after the resync has finished
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in the
body of a message to majordomo@vger.kernel.org More majordomo info at
http://vger.kernel.org/majordomo-info.html
* Re: GRUB warning after replacing disk drive in RAID1
2017-02-28 21:01 ` Peter Sangas
@ 2017-02-28 22:34 ` Reindl Harald
2017-02-28 23:15 ` Peter Sangas
0 siblings, 1 reply; 19+ messages in thread
From: Reindl Harald @ 2017-02-28 22:34 UTC (permalink / raw)
To: linux-raid
On 28.02.2017 at 22:01, Peter Sangas wrote:
> But I issue the grub command AFTER the re-sync is completed
the output of "cat /proc/mdstat" and details of your environment are missing!
* cat /proc/mdstat
* df -hT
* lsscsi
* lsblk
no pictures or interpretations, just copy & paste from the terminal
(input as well as output). Please help others to help you.
* RE: GRUB warning after replacing disk drive in RAID1
2017-02-28 22:34 ` Reindl Harald
@ 2017-02-28 23:15 ` Peter Sangas
2017-03-01 0:12 ` Reindl Harald
2017-03-01 18:29 ` Phil Turmel
0 siblings, 2 replies; 19+ messages in thread
From: Peter Sangas @ 2017-02-28 23:15 UTC (permalink / raw)
To: 'Reindl Harald', linux-raid
Thanks for your help. See below for the output.
-----Original Message-----
From: Reindl Harald [mailto:h.reindl@thelounge.net]
Sent: Tuesday, February 28, 2017 2:34 PM
To: linux-raid@vger.kernel.org
Subject: Re: GRUB warning after replacing disk drive in RAID1
On 28.02.2017 at 22:01, Peter Sangas wrote:
> But I issue the grub command AFTER the re-sync is completed

>>> output of "cat /proc/mdstat" and your environment missing!
>>> * cat /proc/mdstat
>>> * df -hT
>>> * lsscsi
>>> * lsblk
>>> no pictures and interpretations, just copy&paste from the terminal
>>> (input as well as output)
>>> please help others to help you
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
[raid10]
md3 : active raid1 sdc5[3] sdb5[1] sda5[0]
97589248 blocks super 1.2 [3/3] [UUU]
md1 : active raid1 sdc2[3] sdb2[1] sda2[0]
126887936 blocks super 1.2 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md5 : active raid1 sdc7[3] sdb7[1] sda7[0]
244169728 blocks super 1.2 [3/3] [UUU]
bitmap: 0/2 pages [0KB], 65536KB chunk
md2 : active raid1 sdc3[3] sdb3[1] sda3[0]
195181568 blocks super 1.2 [3/3] [UUU]
bitmap: 1/2 pages [4KB], 65536KB chunk
md4 : active raid1 sdc6[3] sdb6[1] sda6[0]
97589248 blocks super 1.2 [3/3] [UUU]
md0 : active raid1 sdc1[3] sdb1[1] sda1[0]
19514368 blocks super 1.2 [3/3] [UUU]
unused devices: <none>
uname -a
Linux green 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux
df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 63G 0 63G 0% /dev
tmpfs tmpfs 13G 746M 12G 6% /run
/dev/md2 ext4 184G 31G 144G 18% /
tmpfs tmpfs 63G 0 63G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/md0 ext4 19G 289M 17G 2% /boot
/dev/md3 ext4 92G 40G 48G 46% /cl
/dev/md5 ext4 230G 31G 187G 15% /sd
/dev/md4 ext4 92G 20G 68G 22% /pc
tan:/clbck nfs4 596G 169G 398G 30% /clbck
tan:/sdbck nfs4 596G 169G 398G 30% /sdbck
tmpfs tmpfs 13G 4.0K 13G 1% /run/user/275
/dev/sde1 ext3 2.7T 676G 1.9T 26% /archive
tmpfs tmpfs 13G 4.0K 13G 1% /run/user/286
/dev/sdd1 ext3 1.8T 1.6T 182G 90% /backupdisk
tmpfs tmpfs 13G 12K 13G 1% /run/user/277
tmpfs tmpfs 13G 0 13G 0% /run/user/283
tmpfs tmpfs 13G 4.0K 13G 1% /run/user/280
tmpfs tmpfs 13G 4.0K 13G 1% /run/user/285
tmpfs tmpfs 13G 0 13G 0% /run/user/299
tmpfs tmpfs 13G 0 13G 0% /run/user/1100
tmpfs tmpfs 13G 0 13G 0% /run/user/1685
lsscsi
[2:0:0:0] disk ATA WDC WD30EZRS-00J 0A80 /dev/sde
[3:0:0:0] disk ATA WDC WD2000FYYZ-0 1K03 /dev/sdd
[4:0:0:0] disk ATA INTEL SSDSC2BX80 0140 /dev/sda
[5:0:0:0] disk ATA INTEL SSDSC2BX80 0140 /dev/sdb
[6:0:0:0] disk ATA INTEL SSDSC2BX80 0140 /dev/sdc
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 745.2G 0 disk
+-sda1 8:1 0 18.6G 0 part
│ L-md0 9:0 0 18.6G 0 raid1 /boot
+-sda2 8:2 0 121.1G 0 part
│ L-md1 9:1 0 121G 0 raid1 [SWAP]
+-sda3 8:3 0 186.3G 0 part
│ L-md2 9:2 0 186.1G 0 raid1 /
+-sda4 8:4 0 1K 0 part
+-sda5 8:5 0 93.1G 0 part
│ L-md3 9:3 0 93.1G 0 raid1 /cl
+-sda6 8:6 0 93.1G 0 part
│ L-md4 9:4 0 93.1G 0 raid1 /pc
L-sda7 8:7 0 233G 0 part
L-md5 9:5 0 232.9G 0 raid1 /sd
sdb 8:16 0 745.2G 0 disk
+-sdb1 8:17 0 18.6G 0 part
│ L-md0 9:0 0 18.6G 0 raid1 /boot
+-sdb2 8:18 0 121.1G 0 part
│ L-md1 9:1 0 121G 0 raid1 [SWAP]
+-sdb3 8:19 0 186.3G 0 part
│ L-md2 9:2 0 186.1G 0 raid1 /
+-sdb4 8:20 0 1K 0 part
+-sdb5 8:21 0 93.1G 0 part
│ L-md3 9:3 0 93.1G 0 raid1 /cl
+-sdb6 8:22 0 93.1G 0 part
│ L-md4 9:4 0 93.1G 0 raid1 /pc
L-sdb7 8:23 0 233G 0 part
L-md5 9:5 0 232.9G 0 raid1 /sd
sdc 8:32 0 745.2G 0 disk
+-sdc1 8:33 0 18.6G 0 part
│ L-md0 9:0 0 18.6G 0 raid1 /boot
+-sdc2 8:34 0 121.1G 0 part
│ L-md1 9:1 0 121G 0 raid1 [SWAP]
+-sdc3 8:35 0 186.3G 0 part
│ L-md2 9:2 0 186.1G 0 raid1 /
+-sdc4 8:36 0 1K 0 part
+-sdc5 8:37 0 93.1G 0 part
│ L-md3 9:3 0 93.1G 0 raid1 /cl
+-sdc6 8:38 0 93.1G 0 part
│ L-md4 9:4 0 93.1G 0 raid1 /pc
L-sdc7 8:39 0 233G 0 part
L-md5 9:5 0 232.9G 0 raid1 /sd
sdd 8:48 0 1.8T 0 disk
L-sdd1 8:49 0 1.8T 0 part /backupdisk
sde 8:64 0 2.7T 0 disk
L-sde1 8:65 0 2.7T 0 part /archive
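Whether an array is actually back in sync is visible in the `[m/n] [UUU]` flags above; as a minimal sketch, the check can be scripted (the mdstat text below is a hypothetical sample — on a live system pipe in /proc/mdstat itself):

```shell
# Print any md array whose status flags show a missing or unsynced member.
mdstat='md3 : active raid1 sdc5[3] sdb5[1] sda5[0]
      97589248 blocks super 1.2 [3/3] [UUU]
md0 : active raid1 sdc1[3] sdb1[1]
      19514368 blocks super 1.2 [3/2] [UU_]'
degraded=$(printf '%s\n' "$mdstat" |
  awk '/^md/ {name=$1} /\[[0-9]+\/[0-9]+\] \[[U_]+\]/ && /_/ {print name}')
echo "${degraded:-all arrays fully synced}"
```

An "_" in the flags means that member is missing or still rebuilding, so re-running grub-install before the output is clean is what provokes the warning in Reindl's experience.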
* Re: GRUB warning after replacing disk drive in RAID1
2017-02-28 23:15 ` Peter Sangas
@ 2017-03-01 0:12 ` Reindl Harald
2017-03-01 23:36 ` Peter Sangas
2017-03-01 18:29 ` Phil Turmel
1 sibling, 1 reply; 19+ messages in thread
From: Reindl Harald @ 2017-03-01 0:12 UTC (permalink / raw)
To: linux-raid
On 01.03.2017 at 00:15, Peter Sangas wrote:
> Thanks for your help. See below for output
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]

not sure if it means anything, but on my machines I only see the RAID
levels actually in use on that line - looks like I am out of ideas for
now. I saw the "grub-install: warning: Couldn't find physical volume
`(null)'" messages (while it said at the same time that it was
successful) when I removed 2 of my 4 drives from the RAID10 using the
script below, and after both were back in sync "grub2-install
/dev/sd[a-d]" was completely silent again
___________________________________________________
#!/bin/bash
GOOD_DISK="/dev/sda"
BAD_DISK="/dev/sdc"
# clone MBR
dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1
# force OS to read partition tables
partprobe $BAD_DISK
# start RAID recovery
mdadm /dev/md0 --add ${BAD_DISK}1
mdadm /dev/md1 --add ${BAD_DISK}2
mdadm /dev/md2 --add ${BAD_DISK}3
# print RAID status on screen
sleep 5
cat /proc/mdstat
# install bootloader on replacement disk
grub2-install "$BAD_DISK"
___________________________________________________
* Re: GRUB warning after replacing disk drive in RAID1
2017-02-28 23:15 ` Peter Sangas
2017-03-01 0:12 ` Reindl Harald
@ 2017-03-01 18:29 ` Phil Turmel
2017-03-01 22:13 ` Reindl Harald
` (2 more replies)
1 sibling, 3 replies; 19+ messages in thread
From: Phil Turmel @ 2017-03-01 18:29 UTC (permalink / raw)
To: Peter Sangas, 'Reindl Harald', linux-raid
Hi Peter, Reindl,
{ Convention on kernel.org is to reply-to-all, trim unneeded quoted
material, and bottom post or interleave. Please do so. }
On 02/28/2017 06:15 PM, Peter Sangas wrote:
> cat /proc/mdstat
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
> [raid10]
> md0 : active raid1 sdc1[3] sdb1[1] sda1[0]
> 19514368 blocks super 1.2 [3/3] [UUU]
Grub1 needs its boot partitions to use v0.90 or v1.0 superblocks. Grub2
needs the md module in its core to boot from v1.1 or v1.2 superblocks.
Anyways, because the content of a v1.2 array does not
start at the beginning of the member devices, stupid grub doesn't
connect sd[abc]1 with your /boot mount and therefore delivers 'null'.
And then doesn't know how to link the core together.
Since this worked before, I would guess your grub was updated and its md
support was left out. Hopefully someone with more grub experience can
chip in here -- I don't use any bootloader on my servers any more.
Phil
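Phil's point about superblock placement can be sketched as a small lookup; the layout summary in the comments is an assumption drawn from the md metadata formats, not something stated in this thread:

```shell
# md metadata placement per version (assumed summary of the md formats):
# 0.90 and 1.0 put the superblock at the END of the member device, so the
# array data starts at sector 0 and a metadata-unaware bootloader still
# sees a plain filesystem; 1.1 and 1.2 put it at/near the START, shifting
# the data away from the beginning of the device.
data_at_start() {
  case "$1" in
    0.90|1.0) echo yes ;;
    1.1|1.2)  echo no ;;
    *)        echo unknown ;;
  esac
}
data_at_start 1.0   # yes - grub1 can treat the member as a bare filesystem
data_at_start 1.2   # no  - grub2 needs its mdraid modules in the core image
```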
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-01 18:29 ` Phil Turmel
@ 2017-03-01 22:13 ` Reindl Harald
2017-03-02 2:42 ` Phil Turmel
2017-03-01 23:51 ` Peter Sangas
2017-03-02 13:17 ` Wols Lists
2 siblings, 1 reply; 19+ messages in thread
From: Reindl Harald @ 2017-03-01 22:13 UTC (permalink / raw)
To: linux-raid
On 01.03.2017 at 19:29, Phil Turmel wrote:
> Hi Peter, Reindl,
>
> { Convention on kernel.org is to reply-to-all, trim unneeded quoted
> material, and bottom post or interleave. Please do so. }
why should someone reply to the list and to everybody else subscribed to
the list, triggering multiple copies?
> On 02/28/2017 06:15 PM, Peter Sangas wrote:
>
>> cat /proc/mdstat
>> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
>> [raid10]
>
>> md0 : active raid1 sdc1[3] sdb1[1] sda1[0]
>> 19514368 blocks super 1.2 [3/3] [UUU]
>
> Grub1 needs its boot partitions to use v0.90 or v1.0 superblocks. Grub2
> needs the md module in its core to boot from v1.1 or v1.2 superblocks.
> Anyways, because the content of a v1.2 array does not
> start at the beginning of the member devices, stupid grub doesn't
> connect sd[abc]1 with your /boot mount and therefore delivers 'null'.
> And then doesn't know how to link the core together.
>
> Since this worked before, I would guess your grub was updated and its md
> support was left out. Hopefully someone with more grub experience can
> chip in here -- I don't use any bootloader on my servers any more
I am not the OP *but* I can assure you that I get the same warnings when
the /boot RAID1 is not synced; after that, grub2-install works just fine
without that warning, so you can be sure grub has no problem and is
missing nothing - otherwise the command wouldn't run without warnings
after all arrays are in sync again
[root@srv-rhsoft:~]$ cat /proc/mdstat
Personalities : [raid1] [raid10]
md2 : active raid10 sdc3[4] sdd3[7] sda3[6] sdb3[5]
3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sdc1[4] sdd1[7] sdb1[5] sda1[6]
511988 blocks super 1.0 [4/4] [UUUU]
md1 : active raid10 sdc2[4] sdd2[7] sdb2[5] sda2[6]
30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
* RE: GRUB warning after replacing disk drive in RAID1
2017-03-01 0:12 ` Reindl Harald
@ 2017-03-01 23:36 ` Peter Sangas
2017-03-02 9:54 ` Reindl Harald
0 siblings, 1 reply; 19+ messages in thread
From: Peter Sangas @ 2017-03-01 23:36 UTC (permalink / raw)
To: 'Reindl Harald', linux-raid
Hi Reindl,
My comments are interleaved. Thanks:
___________________________________________________
>#!/bin/bash
>GOOD_DISK="/dev/sda"
>BAD_DISK="/dev/sdc"
># clone MBR
>dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1
Here I run the command sfdisk -d $GOOD_DISK | sfdisk -f $BAD_DISK.
I think dd and sfdisk are doing the same thing, which is cloning the
partitions and copying the MBR?
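For what it's worth, the two approaches are not byte-identical: dd copies the whole first sector (446 bytes of boot code, the 64-byte partition table, and the 2-byte signature), while sfdisk -d | sfdisk replays only the partition entries and leaves the boot code alone (grub-install rewrites that anyway). A sketch of the dd behaviour that is safe to run, using temp files as stand-in "disks":

```shell
# Temp files stand in for disks here; substitute real devices only with care.
good=$(mktemp); bad=$(mktemp)
dd if=/dev/urandom of="$good" bs=1024 count=1 2>/dev/null   # fake "good disk"
dd if="$good" of="$bad" bs=512 count=1 2>/dev/null          # dd-style MBR clone
head -c 512 "$good" | cmp -s - "$bad" && echo "first 512 bytes identical"
```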
># force OS to read partition tables
>partprobe $BAD_DISK
Why run partprobe if the partitions have not changed?
># install bootloader on replacement disk
>grub2-install "$BAD_DISK"
Here don't you mean grub-install not grub2-install?
___________________________________________________
* RE: GRUB warning after replacing disk drive in RAID1
2017-03-01 18:29 ` Phil Turmel
2017-03-01 22:13 ` Reindl Harald
@ 2017-03-01 23:51 ` Peter Sangas
2017-03-02 0:05 ` Phil Turmel
2017-03-02 13:17 ` Wols Lists
2 siblings, 1 reply; 19+ messages in thread
From: Peter Sangas @ 2017-03-01 23:51 UTC (permalink / raw)
To: 'Phil Turmel', 'Reindl Harald', linux-raid
>{ Convention on kernel.org is to reply-to-all, trim unneeded quoted material, and bottom post or interleave. Please do so. }
Yes. thank you.
> Grub1 needs its boot partitions to use v0.90 or v1.0 superblocks. Grub2
> needs the md module in its core to boot from v1.1 or v1.2 superblocks.
> Anyways, because the content of a v1.2 array does not start at the
> beginning of the member devices, stupid grub doesn't connect sd[abc]1
> with your /boot mount and therefore delivers 'null'. And then doesn't
> know how to link the core together.
Can't say I understand all that, but does this mean the server can't boot from the replacement drive?
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-01 23:51 ` Peter Sangas
@ 2017-03-02 0:05 ` Phil Turmel
2017-03-02 23:00 ` Peter Sangas
0 siblings, 1 reply; 19+ messages in thread
From: Phil Turmel @ 2017-03-02 0:05 UTC (permalink / raw)
To: Peter Sangas, 'Reindl Harald', linux-raid
On 03/01/2017 06:51 PM, Peter Sangas wrote:
>
>> { Convention on kernel.org is to reply-to-all, trim unneeded quoted material, and bottom post or interleave. Please do so. }
>
> Yes. thank you.
>
>> Grub1 needs its boot partitions to use v0.90 or v1.0 superblocks. Grub2
>> needs the md module in its core to boot from v1.1 or v1.2 superblocks.
>> Anyways, because the content of a v1.2 array does not start at the
>> beginning of the member devices, stupid grub doesn't connect sd[abc]1
>> with your /boot mount and therefore delivers 'null'. And then doesn't
>> know how to link the core together.
>
> can't say I understand all that but does this mean the server can't boot from the replacement drive?
Correct. But it shouldn't be able to boot from the others either, with
grub1. Something has changed in your grub install since you set up the
original drives. Only grub2 would be able to boot from your v1.2 md0.
{ Sorry. I don't know how to fix grub2's md module. }
Phil
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-01 22:13 ` Reindl Harald
@ 2017-03-02 2:42 ` Phil Turmel
2017-03-02 13:15 ` Wols Lists
0 siblings, 1 reply; 19+ messages in thread
From: Phil Turmel @ 2017-03-02 2:42 UTC (permalink / raw)
To: Reindl Harald, linux-raid
On 03/01/2017 05:13 PM, Reindl Harald wrote:
> On 01.03.2017 at 19:29, Phil Turmel wrote:
>> Hi Peter, Reindl,
>>
>> { Convention on kernel.org is to reply-to-all, trim unneeded quoted
>> material, and bottom post or interleave. Please do so. }
>
> why should someone reply to the list and to everybody else subscribed to
> the list, triggering multiple copies?
Because kernel.org doesn't require subscriptions to post, and does
expect participants to include non-subscribers. Since one can't
normally tell who is subscribed and who is not, reply-to-all is the rule
here. Other lists have other policies.
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-01 23:36 ` Peter Sangas
@ 2017-03-02 9:54 ` Reindl Harald
0 siblings, 0 replies; 19+ messages in thread
From: Reindl Harald @ 2017-03-02 9:54 UTC (permalink / raw)
To: Peter Sangas, linux-raid
On 02.03.2017 at 00:36, Peter Sangas wrote:
> Hi Reindl,
>
> My comments are interleaved. thanks:
> ___________________________________________________
>
>> #!/bin/bash
>
>> GOOD_DISK="/dev/sda"
>> BAD_DISK="/dev/sdc"
>
>> # clone MBR
>> dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1
>
> Here I run the command sfdisk -d /dev/$GOOD_DISK | sfdisk -f /dev/$BAD_DISK.
>
> I think dd and sfdisk are doing the same thing which is cloning the
> partitions and copy the MBR ?
yes
>> # force OS to read partition tables
>> partprobe $BAD_DISK
>
> Why run partprobe if the partitions have not changed?
because they *have* changed when you replace a disk with a completely
empty one: you booted with an empty partition table and then cloned the
MBR and partition table with 3 partitions - not all hardware supports
hotswap properly, and even when it does, partprobe doesn't hurt
>> # install bootloader on replacement disk grub2-install "$BAD_DISK"
>
> Here don't you mean grub-install not grub2-install?
no, I mean what I say, since that script has replaced multiple disks on
multiple machines - but why does it matter which name a binary has on
whatever distribution?
[harry@srv-rhsoft:~]$ rpm -q --filesbypkg grub2 | grep install
[harry@srv-rhsoft:~]$ rpm -q --filesbypkg grub2-tools | grep install
grub2-tools /usr/sbin/grub2-install
grub2-tools /usr/share/man/man8/grub2-install.8.gz
[harry@srv-rhsoft:~]$ cat /etc/redhat-release
Generic release 24 (Generic)
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-02 2:42 ` Phil Turmel
@ 2017-03-02 13:15 ` Wols Lists
0 siblings, 0 replies; 19+ messages in thread
From: Wols Lists @ 2017-03-02 13:15 UTC (permalink / raw)
To: Phil Turmel, Reindl Harald, linux-raid
On 02/03/17 02:42, Phil Turmel wrote:
> On 03/01/2017 05:13 PM, Reindl Harald wrote:
>> On 01.03.2017 at 19:29, Phil Turmel wrote:
>>> Hi Peter, Reindl,
>>>
>>> { Convention on kernel.org is to reply-to-all, trim unneeded quoted
>>> material, and bottom post or interleave. Please do so. }
>>
>> why should someone reply to the list and to everybody else subscribed
>> to the list, triggering multiple copies?
>
> Because kernel.org doesn't require subscriptions to post, and does
> expect participants to include non-subscribers. Since one can't
> normally tell who is subscribed and who is not, reply-to-all is the rule
> here. Other lists have other policies.
>
fwiw, some mailing list software detects this, and if you're cc'd on a
mail the list won't send you another copy.
And some client software also de-dupes for you without asking. I run TB
so I need an add-on - especially as something in my mail setup
duplicates emails madly every now and then ... :-)
Cheers,
Wol
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-01 18:29 ` Phil Turmel
2017-03-01 22:13 ` Reindl Harald
2017-03-01 23:51 ` Peter Sangas
@ 2017-03-02 13:17 ` Wols Lists
2017-03-06 22:13 ` Peter Sangas
2 siblings, 1 reply; 19+ messages in thread
From: Wols Lists @ 2017-03-02 13:17 UTC (permalink / raw)
To: Phil Turmel, Peter Sangas, 'Reindl Harald', linux-raid
On 01/03/17 18:29, Phil Turmel wrote:
> Since this worked before, I would guess your grub was updated and its md
> support was left out. Hopefully someone with more grub experience can
> chip in here -- I don't use any bootloader on my servers any more.
Look at the raid wiki
https://raid.wiki.kernel.org/index.php/Converting_an_existing_system
it mentions grub.
Also, look up grub2 and raid on the gentoo wiki - I wrote a lot of that,
and arch also apparently has very good documentation.
Cheers,
Wol
* RE: GRUB warning after replacing disk drive in RAID1
2017-03-02 0:05 ` Phil Turmel
@ 2017-03-02 23:00 ` Peter Sangas
0 siblings, 0 replies; 19+ messages in thread
From: Peter Sangas @ 2017-03-02 23:00 UTC (permalink / raw)
To: 'Phil Turmel', 'Reindl Harald', linux-raid
> Correct. But it shouldn't be able to boot from the others either, with grub1.
> Something has changed in your grub install since you set up the original drives. Only
> grub2 would be able to boot from your v1.2 md0.
>
> { Sorry. I don't know how to fix grub2's md module. }
>
Just so I'm crystal clear: the server needs to have grub2 installed, and doesn't the following confirm grub2 is installed?
grub-install -V
grub-install (GRUB) 2.02~beta2-36ubuntu3.2
and since I'm running Ubuntu 16.04.1 LTS
from https://help.ubuntu.com/community/Grub2
"GRUB 2 is the default boot loader and manager for Ubuntu since version 9.10"
"GRUB 2 is version 1.98 or later"
What am I missing?
* RE: GRUB warning after replacing disk drive in RAID1
2017-03-02 13:17 ` Wols Lists
@ 2017-03-06 22:13 ` Peter Sangas
2017-03-07 12:54 ` Wols Lists
0 siblings, 1 reply; 19+ messages in thread
From: Peter Sangas @ 2017-03-06 22:13 UTC (permalink / raw)
To: 'Wols Lists', 'Phil Turmel',
'Reindl Harald',
linux-raid
> On March 02, 2017 5:18 AM, Wols Lists wrote :
> Look at the raid wiki
>
> https://raid.wiki.kernel.org/index.php/Converting_an_existing_system
>
> it mentions grub.
Wol, I'm trying to understand what you and others are suggesting is causing this grub warning.
According to the raid wiki you referenced, the last line of grub.cfg should look like this to support booting from RAID:
initrd /boot/initramfs-genkernel-x86_64-4.4.6-gentoo
In addition, the wiki says "you will need to configure your distro to use an initramfs".
Here are the last few lines of my grub.cfg:
echo 'Loading Linux 4.4.0-36-generic ...'
linux /vmlinuz-4.4.0-36-generic root=UUID=cddffa50-9713-4205-aab6-86745735958b ro recovery nomodeset
echo 'Loading initial ramdisk ...'
initrd /initrd.img-4.4.0-36-generic
Should I be concerned that my grub.cfg uses initrd.img and not initramfs? My grub.cfg was last modified 11/2016 and I've rebooted successfully into this RAID1 since.
Finally, on Friday 3/3 I swapped one of the disks (sdc) in this RAID1 with a brand new identical disk. After creating the partitions and syncing, I issued
grub-install /dev/sdc and there were no warnings. If my grub were broken, why would there be no warning now, when a week ago the exact same disk swap and sequence of commands produced one? Nothing on the system was changed during that time.
Thank you.
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-06 22:13 ` Peter Sangas
@ 2017-03-07 12:54 ` Wols Lists
2017-03-07 13:00 ` Reindl Harald
0 siblings, 1 reply; 19+ messages in thread
From: Wols Lists @ 2017-03-07 12:54 UTC (permalink / raw)
To: Peter Sangas, 'Phil Turmel', 'Reindl Harald', linux-raid
On 06/03/17 22:13, Peter Sangas wrote:
> In addition, the wiki says " you will need to configure your distro to use an initramfs.
>
> here are the last few lines of my grub.cnf:
>
> echo 'Loading Linux 4.4.0-36-generic ...'
> linux /vmlinuz-4.4.0-36-generic root=UUID=cddffa50-9713-4205-aab6-86745735958b ro recovery nomodeset
> echo 'Loading initial ramdisk ...'
> initrd /initrd.img-4.4.0-36-generic
>
> should I be concerned my grub.cnf uses initrd.img and not initramfs? My grub.cnf was last modified 11/2016 and I've rebooted successfully into a RAID1 since.
I think your problem is right there !!!
Look at the wiki, but I can NOT see the magic command "domdadm", without
which mdadm doesn't get loaded and the array doesn't get assembled. No
array, no root, no system ...
(Oh - initrd, initramfs, they're probably the same thing. :-)
Cheers,
Wol
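For reference, the genkernel-style grub.cfg entry the raid wiki describes carries "domdadm" on the kernel line (paths hypothetical; on Ubuntu, initramfs-tools' mdadm hook assembles arrays without any such parameter):

```
linux  /vmlinuz-4.4.6-gentoo root=/dev/md2 domdadm
initrd /boot/initramfs-genkernel-x86_64-4.4.6-gentoo
```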
* Re: GRUB warning after replacing disk drive in RAID1
2017-03-07 12:54 ` Wols Lists
@ 2017-03-07 13:00 ` Reindl Harald
0 siblings, 0 replies; 19+ messages in thread
From: Reindl Harald @ 2017-03-07 13:00 UTC (permalink / raw)
To: Wols Lists, Peter Sangas, 'Phil Turmel', linux-raid
On 07.03.2017 at 13:54, Wols Lists wrote:
> On 06/03/17 22:13, Peter Sangas wrote:
>> In addition, the wiki says " you will need to configure your distro to use an initramfs.
>>
>> here are the last few lines of my grub.cnf:
>>
>> echo 'Loading Linux 4.4.0-36-generic ...'
>> linux /vmlinuz-4.4.0-36-generic root=UUID=cddffa50-9713-4205-aab6-86745735958b ro recovery nomodeset
>> echo 'Loading initial ramdisk ...'
>> initrd /initrd.img-4.4.0-36-generic
>>
>> should I be concerned my grub.cnf uses initrd.img and not initramfs? My grub.cnf was last modified 11/2016 and I've rebooted successfully into a RAID1 since.
>
> I think your problem is right there !!!
>
> Look at the wiki, but I can NOT see the magic command "domdadm", without
> which mdadm doesn't get loaded and the array doesn't get assembled. No
> array, no root, no system ...
not really, because the subject is "GRUB warning after replacing disk
drive in RAID1" - so mdadm is running and the array is assembled
> (Oh - initrd, initramfs, they're probably the same thing. :-)
yes