* Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
@ 2023-01-15  3:12 H
  2023-01-15  8:41 ` Wols Lists
                   ` (2 more replies)
  0 siblings, 3 replies; 49+ messages in thread
From: H @ 2023-01-15  3:12 UTC (permalink / raw)
  To: Linux RAID Mailing List

I need to transfer an existing CentOS 7 non-RAID setup using one single SSD to a mdadm RAID1 using two much larger SSDs. All three disks exist in the same computer at the same time. Both the existing SSD and the new ones use LVM and LUKS and the objective is to preserve the OS, all other installed software and data etc. Thus no new installation should be needed.

Since all disks, partitions, LUKS etc. have their unique UUIDs, I have not figured out how to do this and could use help and advice.

In preparation for the above, I have:

- Used rsync with the flags -aAXHv to copy all files on the existing SSD to an external harddisk for backup.

- Partitioned the new SSDs as desired, including LVM and LUKS. My configuration uses one RAID1 for /boot, another RAID1 partition for /boot/efi, and a third one for the rest which also uses LVM and LUKS. I actually used a CentOS 7 DVD image (minimal installation) to accomplish this, which also completed the minimal installation of the OS on the new disks. It boots as expected and the RAID partitions seem to work as expected.

Since I want to actually move my existing installation from the existing SSD, I am not sure whether I should just use rsync to copy everything from the old SSD to the new, larger ones. However, I expect that to also transfer all OS files using the old, now incorrect, UUIDs to the new disks, after which nothing will work, thus I have not yet done that. I could erase the minimal installation of the OS on the new disks before rsyncing but have not yet done so.

I fully expect to have to do some manual editing of files but am not quite sure which files I would need to edit after such a copy. I have some knowledge of Linux but could use some help and advice. For instance, I expect that /etc/fstab and /etc/crypttab would need to be edited to reflect the UUIDs of the new disks, partitions and LUKS, but which other files? GRUB2 would also need to be updated, I would think.

The only good thing is that since both the old disk and the new disks are in the same computer, no other hardware will change.

Is there another, better (read: simpler) way of accomplishing this transfer?

Finally, since I do have a backup of the old SSD and there is nothing of value on the new mdadm RAID1 disks, except the partition information, I do have, if necessary, the luxury of multiple tries. What I cannot do, however, is to make any modifications to the existing old SSD since I cannot afford not being able to go back to the old SSD if necessary.

Thanks.




* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  3:12 Transferring an existing system from non-RAID disks to RAID1 disks in the same computer H
@ 2023-01-15  8:41 ` Wols Lists
  2023-01-15  9:02   ` Reindl Harald
  2023-01-15 11:31 ` Pascal Hambourg
  2023-01-15 17:25 ` H
  2 siblings, 1 reply; 49+ messages in thread
From: Wols Lists @ 2023-01-15  8:41 UTC (permalink / raw)
  To: H, Linux RAID Mailing List

On 15/01/2023 03:12, H wrote:
> I need to transfer an existing CentOS 7 non-RAID setup using one single SSD to a mdadm RAID1 using two much larger SSDs. All three disks exist in the same computer at the same time. Both the existing SSD and the new ones use LVM and LUKS and the objective is to preserve the OS, all other installed software and data etc. Thus no new installation should be needed.
> 
> Since all disks, partitions, LUKS etc. have their unique UUIDs, I have not figured out how to do this and could use help and advice.

Most of this should just happen automagically. The main thing you need 
to know is the UUIDs of your filesystem partitions. However, I don't 
know LUKS, so I can't tell you what will happen there.
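
Something like this should list them (a sketch; the device in the 
second command is just an example):

   # show all block devices with filesystem type and UUID
   lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT
   # or query a single device
   blkid /dev/sda1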

I don't remember any special setup for my LVM/RAID. If, when you boot 
the old system, everything is visible to it, there should be no problem 
switching over ...
> 
> In preparation for the above, I have:
> 
> - Used rsync with the flags -aAXHv to copy all files on the existing SSD to an external harddisk for backup.
> 
> - Partitioned the new SSDs as desired, including LVM and LUKS. My configuration uses one RAID1 for /boot, another RAID1 partition for /boot/efi, and a third one for the rest which also uses LVM and LUKS. I actually used a CentOS 7 DVD image (minimal installation) to accomplish this, which also completed the minimal installation of the OS on the new disks. It boots as expected and the RAID partitions seem to work as expected.

Are your /boot and /boot/efi using superblock 1.0? My system is 
bios/grub, so not the same, but I use plain partitions here because 
otherwise you're likely to get in a circular dependency - you need efi 
to boot, but the system can't access efi until it's booted ... oops!
> 
> Since I want to actually move my existing installation from the existing SSD, I am not sure whether I should just use rsync to copy everything from the old SSD to the new, larger ones. However, I expect that to also transfer all OS files using the old, now incorrect, UUIDs to the new disks, after which nothing will work, thus I have not yet done that. I could erase the minimal installation of the OS on the new disks before rsyncing but have not yet done so.
> 
Most filesystems are capable of growing. WORKING FROM A RESCUE DISK I 
would use dd to copy the filesystems (root in particular), then grow them.
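
A rough sketch of that, assuming ext4 and purely illustrative device 
names (run from the rescue system, with both filesystems unmounted):

   # block-for-block copy of the old root fs onto the (larger) new device
   dd if=/dev/mapper/old-root of=/dev/mapper/new-root bs=4M
   # check it, then grow it to fill the new device
   e2fsck -f /dev/mapper/new-root
   resize2fs /dev/mapper/new-root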

> I fully expect to have to do some manual editing of files but am not quite sure which files I would need to edit after such a copy. I have some knowledge of Linux but could use some help and advice. For instance, I expect that /etc/fstab and /etc/crypttab would need to be edited to reflect the UUIDs of the new disks, partitions and LUKS, but which other files? GRUB2 would also need to be updated, I would think.
> 
> The only good thing is that since both the old disk and the new disks are in the same computer, no other hardware will change.
> 
> Is there another, better (read: simpler) way of accomplishing this transfer?
> 
> Finally, since I do have a backup of the old SSD and there is nothing of value on the new mdadm RAID1 disks, except the partition information, I do have, if necessary, the luxury of multiple tries. What I cannot do, however, is to make any modifications to the existing old SSD since I cannot afford not being able to go back to the old SSD if necessary.
> 
Okay, baby steps. Try to boot the OLD system from the NEW efi. I suspect 
you'll fail. Fix that.

Copy the old root filesystem to the new. Fix the new efi/grub to boot 
into the new root.

Now you can simply copy all the old filesystems like /home etc across. 
Provided they are not mounted, you don't need a rescue disk, you can do 
it with the live system.


My preferred way, though, would be to (a) fix the efi problem, (b) copy 
all the file systems across, (c) remove the old SSD, and (d) with a 
rescue disk handy, just try to boot into the new system, fixing it 
problem by problem. Very similar to my baby steps above, just using a 
live CD to recover the system rather than worrying about having two live 
systems which could interfere with each other without me realising 
what's going on. That's my worry about your approach - if you're not 
clear about EXACTLY what's happening, it's easy to make a mistake, and 
then you're wondering what the hell's going on, because your mental 
model is out of touch with reality. Been there, done that! :-)

Cheers,
Wol



* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  8:41 ` Wols Lists
@ 2023-01-15  9:02   ` Reindl Harald
  2023-01-15  9:20     ` Wols Lists
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-15  9:02 UTC (permalink / raw)
  To: Wols Lists, H, Linux RAID Mailing List



On 15.01.23 at 09:41, Wols Lists wrote:
> Are your /boot and /boot/efi using superblock 1.0? My system is 
> bios/grub, so not the same, but I use plain partitions here because 
> otherwise you're likely to get in a circular dependency - you need efi 
> to boot, but the system can't access efi until it's booted ... oops!
the UEFI doesn't care where the ESP is mounted later;
from the viewpoint of the UEFI all paths are /-prefixed

that's only relevant for the OS at the time of kernel-install / updates, 
and the ESP is vfat, which doesn't support RAID anyway


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  9:02   ` Reindl Harald
@ 2023-01-15  9:20     ` Wols Lists
  2023-01-15  9:39       ` Reindl Harald
  2023-01-20  2:48       ` Phil Turmel
  0 siblings, 2 replies; 49+ messages in thread
From: Wols Lists @ 2023-01-15  9:20 UTC (permalink / raw)
  To: Reindl Harald, H, Linux RAID Mailing List

On 15/01/2023 09:02, Reindl Harald wrote:
> 
Reindl and I wind each other up, so watch out for a flame war :-)
> 
> On 15.01.23 at 09:41, Wols Lists wrote:
>> Are your /boot and /boot/efi using superblock 1.0? My system is 
>> bios/grub, so not the same, but I use plain partitions here because 
>> otherwise you're likely to get in a circular dependency - you need efi 
>> to boot, but the system can't access efi until it's booted ... oops!
> the UEFI doesn't care where the ESP is mounted later;
> from the viewpoint of the UEFI all paths are /-prefixed
> 
> that's only relevant for the OS at the time of kernel-install / updates, 
> and the ESP is vfat, which doesn't support RAID anyway

But ext4 doesn't support raid either. Btrfs, ZFS and XFS don't support 
md-raid. That's the whole point of having a layered stack, rather than a 
"one size fits all" filesystem.

IF YOU CAN GUARANTEE that /boot/efi is only ever modified inside linux, 
then raid it. Why not? Personally, I'm not sure that guarantee holds.

If you do raid it, then you MUST use the 1.0 superblock, otherwise it 
will be inaccessible outside of linux. Seeing as the system needs it 
before linux boots, that's your classic catch-22.
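
For reference, creating such an array looks roughly like this (device 
names are examples only):

   # metadata 1.0 puts the md superblock at the END of the members, so
   # the firmware just sees what looks like a plain FAT filesystem
   mdadm --create /dev/md0 --level=1 --raid-devices=2 \
         --metadata=1.0 /dev/sda1 /dev/sdb1
   mkfs.vfat /dev/md0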

Basically the rule is, if you want to access raid-ed linux partitions 
outside of linux, you must be able to guarantee they aren't modified 
outside of linux. And you have to use superblock 1.0. If you can't 
guarantee both of those, don't go there ...

Cheers,
Wol


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  9:20     ` Wols Lists
@ 2023-01-15  9:39       ` Reindl Harald
  2023-01-15 10:45         ` Wols Lists
  2023-01-20  2:48       ` Phil Turmel
  1 sibling, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-15  9:39 UTC (permalink / raw)
  To: Wols Lists, H, Linux RAID Mailing List



On 15.01.23 at 10:20, Wols Lists wrote:
> On 15/01/2023 09:02, Reindl Harald wrote:
>>
> Reindl and me wind each other up, so watch out for a flame war :-)

yes, because you, as usual, don't get the point, and when you say "my 
system is bios/grub" consider refraining from talking about things you 
haven't seen working in real life

that usually ends in unbacked theory helping nobody

the point is that "you need efi to boot, but the system can't access efi 
until it's booted" is nonsense because the whole point of the ESP is 
that the UEFI is directly starting UEFI binaries on the ESP

your UEFI bootloader or even the plain kernel are just that: UEFI binaries

>> On 15.01.23 at 09:41, Wols Lists wrote:
>>> Are your /boot and /boot/efi using superblock 1.0? My system is 
>>> bios/grub, so not the same, but I use plain partitions here because 
>>> otherwise you're likely to get in a circular dependency - you need 
>>> efi to boot, but the system can't access efi until it's booted ... oops!
>> the UEFI doesn't care where the ESP is mounted later;
>> from the viewpoint of the UEFI all paths are /-prefixed
>>
>> that's only relevant for the OS at the time of kernel-install / 
>> updates, and the ESP is vfat, which doesn't support RAID anyway
> 
> But ext4 doesn't support raid either. 

irrelevant - I am talking about THE ESP NOT SUPPORTING RAID - 
filesystems don't need to support RAID; with your argumentation 
the tail is wagging the dog

> Btrfs, ZFS and XFS don't support 
> md-raid. That's the whole point of having a layered stack, rather than a 
> "one size fits all" filesystem.
> 
> IF YOU CAN GUARANTEE that /boot/efi is only ever modified inside linux

you can't

> then raid it. Why not? Personally, I'm not sure that guarantee holds.

and that's the problem

> Basically the rule is, if you want to access raid-ed linux partitions 
> outside of linux, you must be able to guarantee they aren't modified 
> outside of linux. And you have to use superblock 1.0. If you can't 
> guarantee both of those, don't go there

and hence don't go there

[root@srv-rhsoft:~]$ cat /etc/fstab | grep efi
UUID=87FD-D5DF  /efi      vfat  defaults,discard,noatime,noexec,nosuid,nodev,noauto,umask=0022,gid=0,uid=0,x-systemd.automount,x-systemd.idle-timeout=60  0 1
UUID=8875-F946  /efi-bkp  vfat  defaults,discard,noatime,noexec,nosuid,nodev,noauto,umask=0022,gid=0,uid=0,x-systemd.automount,x-systemd.idle-timeout=60  0 1



[root@srv-rhsoft:~]$ df
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4   29G  8.9G   20G  31% /
/dev/md1       ext4  3.6T  2.1T  1.6T  57% /mnt/data
/dev/nvme0n1p1 f2fs  239G  9.7G  229G   5% /mnt/nvme
/dev/sda1      vfat  400M   69M  332M  18% /efi
/dev/sdb1      vfat  400M   69M  332M  18% /efi-bkp



[root@srv-rhsoft:~]$ ls /efi
total 68M
drwxr-xr-x 6 root root 8.0K 2022-11-16 18:18 EFI
drwxr-xr-x 2 root root 8.0K 2023-01-12 18:35 grub2
drwxr-xr-x 3 root root 8.0K 2023-01-14 21:13 loader
drwxr-xr-x 2 root root 8.0K 2022-10-20 14:34 sgdisk
-rwxr-xr-x 1 root root  16M 2023-01-11 11:53 initramfs-6.0.18-200.fc36.x86_64.img
-rwxr-xr-x 1 root root  16M 2023-01-12 23:10 initramfs-6.1.5-100.fc36.x86_64.img
-rwxr-xr-x 1 root root 246K 2023-01-07 18:28 config-6.0.18-200.fc36.x86_64
-rwxr-xr-x 1 root root 248K 2023-01-12 17:30 config-6.1.5-100.fc36.x86_64
-rwxr-xr-x 1 root root 6.9M 2023-01-07 18:28 System.map-6.0.18-200.fc36.x86_64
-rwxr-xr-x 1 root root 5.8M 2023-01-12 17:30 System.map-6.1.5-100.fc36.x86_64
-rwxr-xr-x 1 root root  13M 2023-01-07 18:28 vmlinuz-6.0.18-200.fc36.x86_64
-rwxr-xr-x 1 root root  13M 2023-01-12 17:30 vmlinuz-6.1.5-100.fc36.x86_64



[root@srv-rhsoft:~]$ ls /efi-bkp/
total 68M
drwxr-xr-x 6 root root 8.0K 2022-11-16 18:18 EFI
drwxr-xr-x 2 root root 8.0K 2023-01-12 18:35 grub2
drwxr-xr-x 3 root root 8.0K 2023-01-13 15:28 loader
drwxr-xr-x 2 root root 8.0K 2022-10-20 14:34 sgdisk
-rwxr-xr-x 1 root root  16M 2023-01-11 11:53 initramfs-6.0.18-200.fc36.x86_64.img
-rwxr-xr-x 1 root root  16M 2023-01-12 23:10 initramfs-6.1.5-100.fc36.x86_64.img
-rwxr-xr-x 1 root root 246K 2023-01-07 18:28 config-6.0.18-200.fc36.x86_64
-rwxr-xr-x 1 root root 248K 2023-01-12 17:30 config-6.1.5-100.fc36.x86_64
-rwxr-xr-x 1 root root 6.9M 2023-01-07 18:28 System.map-6.0.18-200.fc36.x86_64
-rwxr-xr-x 1 root root 5.8M 2023-01-12 17:30 System.map-6.1.5-100.fc36.x86_64
-rwxr-xr-x 1 root root  13M 2023-01-07 18:28 vmlinuz-6.0.18-200.fc36.x86_64
-rwxr-xr-x 1 root root  13M 2023-01-12 17:30 vmlinuz-6.1.5-100.fc36.x86_64



[root@srv-rhsoft:~]$ cat /scripts/backup-efi.sh
#!/usr/bin/bash
# trigger the automounts
ls /efi/ > /dev/null
ls /efi-bkp/ > /dev/null
# make sure "/efi" is mounted
EFI_MOUNTED="$(mount | grep '/efi type' 2> '/dev/null' | grep 'systemd.automount' | wc -l)"
if [ "$EFI_MOUNTED" == "0" ]; then
  echo "BACKUP-EFI: /efi not mounted"
  logger --tag="BACKUP-EFI" "/efi not mounted"
  exit 0
fi
# make sure "/efi-bkp" is mounted
EFI_BKP_MOUNTED="$(mount | grep '/efi-bkp type' 2> '/dev/null' | grep 'systemd.automount' | wc -l)"
if [ "$EFI_BKP_MOUNTED" == "0" ]; then
  echo "BACKUP-EFI: /efi-bkp not mounted"
  logger --tag="BACKUP-EFI" "/efi-bkp not mounted"
  exit 0
fi
# back up the boot environment to the second drive
echo "BACKUP-EFI: rsync --recursive --delete-after --times /efi/ /efi-bkp/"
logger --tag="BACKUP-EFI" "rsync --recursive --delete-after --times /efi/ /efi-bkp/"
if rsync --recursive --delete-after --times /efi/ /efi-bkp/; then
  echo "BACKUP-EFI: success"
  logger --tag="BACKUP-EFI" "success"
  ls -l -h -X --time-style=long-iso /efi-bkp/
else
  echo "BACKUP-EFI: FAILED"
  logger --tag="BACKUP-EFI" "FAILED"
fi
# make sure this script never exits with an error
exit 0


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  9:39       ` Reindl Harald
@ 2023-01-15 10:45         ` Wols Lists
  0 siblings, 0 replies; 49+ messages in thread
From: Wols Lists @ 2023-01-15 10:45 UTC (permalink / raw)
  To: Reindl Harald, H, Linux RAID Mailing List

On 15/01/2023 09:39, Reindl Harald wrote:
> the point is that "you need efi to boot, but the system can't access efi 
> until it's booted" is nonsense because the whole point of the ESP is 
> that the UEFI is directly starting UEFI binaries on the ESP

And here the fact that English appears not to be your native language is 
biting you in the bum.

Correct - the UEFI is starting your ESP binaries, which reside in 
/boot/efi, AND IF /BOOT/EFI IS A NORMAL RAID, the UEFI can't read it.

Cue a circular dependency - the UEFI can't read /boot/efi because it's 
not bog-standard FAT.

Cheers,
Wol


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  3:12 Transferring an existing system from non-RAID disks to RAID1 disks in the same computer H
  2023-01-15  8:41 ` Wols Lists
@ 2023-01-15 11:31 ` Pascal Hambourg
  2023-01-20  2:51   ` Phil Turmel
  2023-01-15 17:25 ` H
  2 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-15 11:31 UTC (permalink / raw)
  To: H, Linux RAID Mailing List

On 15/01/2023 at 04:12, H wrote:
> I need to transfer an existing CentOS 7 non-RAID setup using one single SSD to a mdadm RAID1 using two much larger SSDs. All three disks exist in the same computer at the same time. Both the existing SSD and the new ones use LVM and LUKS and the objective is to preserve the OS, all other installed software and data etc. Thus no new installation should be needed.
(...)
> What I cannot do, however, is to make any modifications to the existing old SSD since I cannot afford not being able to go back to the old SSD if necessary.

Too bad. Without that constraint, I would use pvmove to transparently 
move LVM data from the old drive to the new ones.

Create EFI partitions on both new drives. Using RAID1 with metadata 1.0 
(superblock at the end) with EFI partitions is a hack (corruption may 
happen if the UEFI firmware or anything outside Linux RAID writes to any 
of them).
Create the RAID arrays for LVM and /boot.
Create and open the new LUKS device in the RAID array with cryptsetup.
Add the related line to /etc/crypttab.
Add the new encrypted device to the existing LVM volume group.
Move away data from the old encrypted device to the new one with pvmove.
Remove the old encrypted volume from the volume group with vgreduce.
Delete the related line from /etc/crypttab.
Copy data from the old /boot partition to the new /boot RAID array with 
rsync, cp or similar.
Mount the new /boot RAID array and the EFI partitions on /boot, 
/boot/efi and /boot/efi2 instead of the old partitions.
Update /etc/fstab accordingly. Add the "nofail" option to the /boot/efi* 
lines.
Update the initramfs if it includes information from crypttab or fstab. 
Make sure it includes RAID support.
Run update-grub or grub-mkconfig to update grub.cfg.
Reinstall GRUB on each EFI partition with grub-install.
If the CentOS system configuration manager or GRUB package supports multiple 
EFI partitions, you can use that instead of reinstalling GRUB by hand.
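
Roughly, as commands, with purely illustrative names (/dev/md2 as the 
big array, volume group "centos", mapper names luks-old/luks-new, the 
new /boot array mounted at /mnt/newboot):

   cryptsetup luksFormat /dev/md2
   cryptsetup open /dev/md2 luks-new      # then add its /etc/crypttab line
   pvcreate /dev/mapper/luks-new
   vgextend centos /dev/mapper/luks-new
   pvmove /dev/mapper/luks-old            # moves all extents off the old PV
   vgreduce centos /dev/mapper/luks-old   # then drop its /etc/crypttab line
   rsync -aAXH /boot/ /mnt/newboot/       # copy /boot to the new array
   # after updating fstab/crypttab; on CentOS 7 the commands are dracut
   # and grub2-mkconfig rather than update-initramfs/update-grub:
   dracut -f
   grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg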


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  3:12 Transferring an existing system from non-RAID disks to RAID1 disks in the same computer H
  2023-01-15  8:41 ` Wols Lists
  2023-01-15 11:31 ` Pascal Hambourg
@ 2023-01-15 17:25 ` H
  2023-01-22  5:05   ` H
  2 siblings, 1 reply; 49+ messages in thread
From: H @ 2023-01-15 17:25 UTC (permalink / raw)
  To: Linux RAID Mailing List

On 01/14/2023 10:12 PM, H wrote:
> I need to transfer an existing CentOS 7 non-RAID setup using one single SSD to a mdadm RAID1 using two much larger SSDs. All three disks exist in the same computer at the same time. Both the existing SSD and the new ones use LVM and LUKS and the objective is to preserve the OS, all other installed software and data etc. Thus no new installation should be needed.
>
> Since all disks, partitions, LUKS etc. have their unique UUIDs, I have not figured out how to do this and could use help and advice.
>
> In preparation for the above, I have:
>
> - Used rsync with the flags -aAXHv to copy all files on the existing SSD to an external harddisk for backup.
>
> - Partitioned the new SSDs as desired, including LVM and LUKS. My configuration uses one RAID1 for /boot, another RAID1 partition for /boot/efi, and a third one for the rest which also uses LVM and LUKS. I actually used a CentOS 7 DVD image (minimal installation) to accomplish this, which also completed the minimal installation of the OS on the new disks. It boots as expected and the RAID partitions seem to work as expected.
>
> Since I want to actually move my existing installation from the existing SSD, I am not sure whether I should just use rsync to copy everything from the old SSD to the new, larger ones. However, I expect that to also transfer all OS files using the old, now incorrect, UUIDs to the new disks, after which nothing will work, thus I have not yet done that. I could erase the minimal installation of the OS on the new disks before rsyncing but have not yet done so.
>
> I fully expect to have to do some manual editing of files but am not quite sure which files I would need to edit after such a copy. I have some knowledge of Linux but could use some help and advice. For instance, I expect that /etc/fstab and /etc/crypttab would need to be edited to reflect the UUIDs of the new disks, partitions and LUKS, but which other files? GRUB2 would also need to be updated, I would think.
>
> The only good thing is that since both the old disk and the new disks are in the same computer, no other hardware will change.
>
> Is there another, better (read: simpler) way of accomplishing this transfer?
>
> Finally, since I do have a backup of the old SSD and there is nothing of value on the new mdadm RAID1 disks, except the partition information, I do have, if necessary, the luxury of multiple tries. What I cannot do, however, is to make any modifications to the existing old SSD since I cannot afford not being able to go back to the old SSD if necessary.
>
> Thanks.
>
>
Upon further thinking, I am wondering if the process below would work. As stated above, I have two working disk setups in the same computer, and depending on the order of disks in the BIOS I can boot either of the two setups.

My existing setup uses one disk and no RAID (obviously), with LUKS and LVM for everything but /boot and /boot/efi, a total of three partitions. The OS is CentOS 7 and I have made a complete backup to an external harddisk using rsync ("BACKUP1").

The new one uses two disks, RAID1, and LUKS and LVM for everything but /boot and /boot/efi, a total of four partitions (swap has its own partition - not sure why I made it that way). A minimal installation of CentOS 7 was made to this setup and is working. In other words, the UUIDs of disks, partitions and LUKS are already configured and working.

So, I am now thinking the following might work:

- Make a rsync backup of the new disks to the external harddisk ("BACKUP2").

- Delete all files from the new disks except from /boot and /boot/efi.

- Copy all files from all partitions except /boot and /boot/efi from BACKUP1 to the new disks. In other words, everything except /boot and /boot/efi will now be overwritten.

- I would expect this system not to boot since both /etc/fstab and /etc/crypttab on the new disks contain the UUIDs from the old system.

- Copy just /etc/fstab and /etc/crypttab from BACKUP2 to the new disks. This should update the new disks with the UUIDs previously created when doing the minimal installation of CentOS 7. (A rough command sketch follows below.)
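
As commands, roughly (mount points and backup paths are placeholders):

   # new root mounted at /mnt/new; BACKUP1/BACKUP2 as described above
   rsync -aAXHv --delete --exclude=/boot --exclude=/boot/efi \
         /mnt/backup1/ /mnt/new/
   # restore the new system's own UUID references afterwards
   cp /mnt/backup2/etc/fstab    /mnt/new/etc/fstab
   cp /mnt/backup2/etc/crypttab /mnt/new/etc/crypttab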

What do you think?




* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15  9:20     ` Wols Lists
  2023-01-15  9:39       ` Reindl Harald
@ 2023-01-20  2:48       ` Phil Turmel
  1 sibling, 0 replies; 49+ messages in thread
From: Phil Turmel @ 2023-01-20  2:48 UTC (permalink / raw)
  To: Wols Lists, Reindl Harald, H, Linux RAID Mailing List

On 1/15/23 04:20, Wols Lists wrote:

> IF YOU CAN GUARANTEE that /boot/efi is only ever modified inside linux, 
> then raid it. Why not? Personally, I'm not sure that guarantee holds.

Your EFI BIOS can write to these partitions, too, under operator control.

> If you do raid it, then you MUST use the 1.0 superblock, otherwise it 
> will be inaccessible outside of linux. Seeing as the system needs it 
> before linux boots, that's your classic catch-22.

I recommend *not* using any raid on your EFI partitions.  Make them all 
separate, so your BIOS can tell that they are distinct filesystems (raid 
would duplicate the FS UUID/label/CRC).

When separate, you can use efibootmgr to make the extra one(s) fallback 
boot devices.
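
Something along these lines (disk, partition and loader path are 
examples; adjust to wherever grub-install put its image):

   efibootmgr --create --disk /dev/sdb --part 1 \
              --label "CentOS (fallback)" \
              --loader '\EFI\centos\grubx64.efi'
   # then put it after the primary entry in the boot order, e.g.
   efibootmgr --bootorder 0000,0001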

Use a hook in initramfs-tools to make /boot/efi[2..N] match /boot/efi 
whenever kernels get installed/modified.
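
A minimal sketch of such a hook (assuming the extra ESP is mounted at 
/boot/efi2 and that initramfs-tools' post-update.d hook directory is 
available on your distro):

   #!/bin/sh
   # /etc/initramfs/post-update.d/zz-sync-esp
   # called by update-initramfs with: $1 = kernel version, $2 = initramfs path
   rsync -a --delete /boot/efi/ /boot/efi2/
   exit 0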

Phil


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15 11:31 ` Pascal Hambourg
@ 2023-01-20  2:51   ` Phil Turmel
  2023-01-20 19:27     ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Phil Turmel @ 2023-01-20  2:51 UTC (permalink / raw)
  To: Pascal Hambourg, H, Linux RAID Mailing List

On 1/15/23 06:31, Pascal Hambourg wrote:
> On 15/01/2023 at 04:12, H wrote:
>> I need to transfer an existing CentOS 7 non-RAID setup using one 
>> single SSD to a mdadm RAID1 using two much larger SSDs. All three 
>> disks exist in the same computer at the same time. Both the existing 
>> SSD and the new ones use LVM and LUKS and the objective is to preserve 
>> the OS, all other installed software and data etc. Thus no new 
>> installation should be needed.
> (...)
>> What I cannot do, however, is to make any modifications to the 
>> existing old SSD since I cannot afford not being able to go back to 
>> the old SSD if necessary.
> 
> Too bad. Without that constraint, I would use pvmove to transparently 
> move LVM data from the old drive to the new ones.
> 
> Create EFI partitions on both new drives. Using RAID1 with metadata 1.0 
> (superblock at the end) with EFI partitions is a hack (corruption may 
> happen if the UEFI firmware or anything outside Linux RAID writes to any 
> of them).
> Create the RAID arrays for LVM and /boot.
> Create and open the new LUKS device in the RAID array with cryptsetup.
> Add the related line to /etc/crypttab.
> Add the new encrypted device to the existing LVM volume group.
> Move away data from the old encrypted device to the new one with pvmove.
> Remove the old encrypted volume from the volume group with vgreduce.
> Delete the related line from /etc/crypttab.
> Copy data from the old /boot partition to the new /boot RAID array with 
> rsync, cp or similar.
> Mount the new /boot RAID array and the EFI partitions on /boot, 
> /boot/efi and /boot/efi2 instead of the old partitions.
> Update /etc/fstab accordingly. Add the "nofail" option to the /boot/efi* 
> lines.
> Update the initramfs if it includes information from crypttab or fstab. 
> Make sure it includes RAID support.
> Run update-grub or grub-mkconfig to update grub.cfg.
> Reinstall GRUB on each EFI partition with grub-install.
> If the CentOS system configuration manager or GRUB package supports multiple 
> EFI partitions, you can use that instead of reinstalling GRUB by hand.

FWIW, except for the raid of the EFI partitions, this is what I would 
do.  With the sweet benefit of no downtime.  Even the root fs can be 
moved this way, while running.

{ I have, in fact, done similar upgrades to running systems. }

Phil


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-20  2:51   ` Phil Turmel
@ 2023-01-20 19:27     ` Pascal Hambourg
  2023-01-20 20:26       ` Wol
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-20 19:27 UTC (permalink / raw)
  To: Phil Turmel, H, Linux RAID Mailing List

On 20/01/2023 at 03:51, Phil Turmel wrote:
> On 1/15/23 06:31, Pascal Hambourg wrote:
>>
>> Create EFI partitions on both new drives. Using RAID1 with metadata 
>> 1.0 (superblock at the end) with EFI partitions is a hack (corruption 
>> may happen if the UEFI firmware or anything outside Linux RAID writes 
>> to any of them).
>> Create the RAID arrays for LVM and /boot.
(...)
> FWIW, except for the raid of the EFI partitions, this is what I would 
> do.

I did not suggest using RAID on the EFI partitions, I warned against it.

> I recommend *not* using any raid on your EFI partitions.  Make them all 
> separate, so your BIOS can tell that they are distinct filesystems (raid 
> would duplicate the FS UUID/label/CRC).

AFAICS on GPT, EFI boot entries contain only the partition UUID, not the 
filesystem UUID, so duplicate filesystem UUIDs should do no harm.

> Use a hook in initramfs-tools to make /boot/efi[2..N] match /boot/efi 
> whenever kernels get installed/modified.

Why in initramfs-tools? The initramfs has nothing to do with either the 
bootloader installation or the EFI partition, so there is no need to 
resync EFI partitions on an initramfs update (unless GRUB menu entries 
or kernel and initramfs images are in the EFI partition, which is not a 
great idea IMO). IMO the right place would be a hook called after the 
system configuration manager or the GRUB package runs grub-install, if 
such a hook exists.


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-20 19:27     ` Pascal Hambourg
@ 2023-01-20 20:26       ` Wol
  2023-01-20 21:01         ` Pascal Hambourg
  2023-01-21 21:20         ` Phil Turmel
  0 siblings, 2 replies; 49+ messages in thread
From: Wol @ 2023-01-20 20:26 UTC (permalink / raw)
  To: Pascal Hambourg, Phil Turmel, H, Linux RAID Mailing List

On 20/01/2023 19:27, Pascal Hambourg wrote:
> Why in initramfs-tools? The initramfs has nothing to do with either the 
> bootloader installation or the EFI partition, so there is no need to 
> resync EFI partitions on an initramfs update (unless GRUB menu entries 
> or kernel and initramfs images are in the EFI partition, which is not a 
> great idea IMO). IMO the right place would be a hook called after the 
> system configuration manager or the GRUB package runs grub-install, if 
> such a hook exists.

I think you've just put your finger on it. Multiple EFI partitions is 
outside the remit of linux, and having had two os's arguing over which 
was the right EFI, I really don't think the system manager - be it yast, 
yum, apt, whatever - is capable of even trying. With a simple 
configuration you don't have mirrored EFI, some systems have one EFI per 
OS, others have one EFI for several OSes, ...

At the end of the day, it's down to the user, and if you can shove a 
quick rsync in the initramfs or boot sequence to sync EFIs, then that's 
probably the best place. Then it doesn't get missed ...

Cheers,
Wol


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-20 20:26       ` Wol
@ 2023-01-20 21:01         ` Pascal Hambourg
  2023-01-21  8:49           ` Wols Lists
  2023-01-21 12:17           ` Reindl Harald
  2023-01-21 21:20         ` Phil Turmel
  1 sibling, 2 replies; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-20 21:01 UTC (permalink / raw)
  To: Wol, Phil Turmel, H, Linux RAID Mailing List

On 20/01/2023 at 21:26, Wol wrote:
> 
> I think you've just put your finger on it. Multiple EFI partitions is 
> outside the remit of linux

I do not subscribe to this point of view. Distributions used to handle 
multiple boot sectors; why could they not handle multiple EFI partitions 
as well?

> I really don't think the system manager - be it yast, 
> yum, apt, whatever - is capable of even trying.

yum and apt are package managers, not system managers. FWIW, Ubuntu's 
grub-efi packages can deal with multiple EFI partitions in the same way 
grub-pc can deal with multiple boot sectors.

> At the end of the day, it's down to the user, and if you can shove a 
> quick rsync in the initramfs or boot sequence to sync EFIs, then that's 
> probably the best place. Then it doesn't get missed ...

No, these are not adequate places. Too early or too late. The right 
place is when anything is written to the EFI partition.


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-20 21:01         ` Pascal Hambourg
@ 2023-01-21  8:49           ` Wols Lists
  2023-01-21 13:32             ` Pascal Hambourg
  2023-01-21 12:17           ` Reindl Harald
  1 sibling, 1 reply; 49+ messages in thread
From: Wols Lists @ 2023-01-21  8:49 UTC (permalink / raw)
  To: Pascal Hambourg, Phil Turmel, H, Linux RAID Mailing List

On 20/01/2023 21:01, Pascal Hambourg wrote:
> On 20/01/2023 at 21:26, Wol wrote:
>>
>> I think you've just put your finger on it. Multiple EFI partitions is 
>> outside the remit of linux
> 
> I do not subscribe to this point of view. Distributions used to handle 
> multiple boot sectors, why could they not handle multiple EFI partitions 
> as well ?

Because that means that distros need to know all about EVERY OTHER 
OPERATING SYSTEM?
> 
>> I really don't think the system manager - be it yast, yum, apt, 
>> whatever - is capable of even trying.
> 
> yum and apt are package managers, not system managers. FWIW, Ubuntu's 
> grub-efi packages can deal with multiple EFI partitions in the same way 
> grub-pc can deal with multiple boot sectors.

???

I don't know whose fault it was, probably Microsoft's, but I gave up 
trying to dual-boot a laptop ...
> 
>> At the end of the day, it's down to the user, and if you can shove a 
>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>> that's probably the best place. Then it doesn't get missed ...
> 
> No, these are not adequate places. Too early or too late. The right 
> place is when anything is written to the EFI partition.

I would agree with you. But that requires EVERY OS on the computer to 
co-operate. I think you are being - shall we say - optimistic?

Fact: Other systems outside of linux meddle with the EFI. Conclusion: 
modifying linux to sync EFI *at the point of modification* is going to 
fail. Badly.

Cheers,
Wol


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-20 21:01         ` Pascal Hambourg
  2023-01-21  8:49           ` Wols Lists
@ 2023-01-21 12:17           ` Reindl Harald
  2023-01-21 14:15             ` Pascal Hambourg
  1 sibling, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 12:17 UTC (permalink / raw)
  To: Pascal Hambourg, Wol, Phil Turmel, H, Linux RAID Mailing List



On 20.01.23 at 22:01, Pascal Hambourg wrote:
> On 20/01/2023 at 21:26, Wol wrote:
>>
>> I think you've just put your finger on it. Multiple EFI partitions is 
>> outside the remit of linux
> 
> I do not subscribe to this point of view. Distributions used to handle 
> multiple boot sectors, why could they not handle multiple EFI partitions 
> as well ?
> 
>> I really don't think the system manager - be it yast, yum, apt, 
>> whatever - is capable of even trying.
> 
> yum and apt are package managers, not system managers. FWIW, Ubuntu's 
> grub-efi packages can deal with multiple EFI partitions in the same way 
> grub-pc can deal with multiple boot sectors.
> 
>> At the end of the day, it's down to the user, and if you can shove a 
>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>> that's probably the best place. Then it doesn't get missed ...
> 
> No, these are not adequate places. Too early or too late. The right 
> place is when anything is written to the EFI partition

these ARE adequate places and a "too late" simply can't exist - after 
the package transition /efi/ should be rsynced to /efi-bkp/

sadly for now that's not supported and you have to call your 
"backup-efi.sh" by hand

https://bugzilla.redhat.com/show_bug.cgi?id=2133294


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21  8:49           ` Wols Lists
@ 2023-01-21 13:32             ` Pascal Hambourg
  2023-01-21 14:04               ` Wols Lists
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 13:32 UTC (permalink / raw)
  To: Wols Lists, Phil Turmel, H, Linux RAID Mailing List



On 21/01/2023 at 09:49, Wols Lists wrote:
> On 20/01/2023 21:01, Pascal Hambourg wrote:
>> On 20/01/2023 at 21:26, Wol wrote:
>>>
>>> I think you've just put your finger on it. Multiple EFI partitions is 
>>> outside the remit of linux
>>
>> I do not subscribe to this point of view. Distributions used to handle 
>> multiple boot sectors, why could they not handle multiple EFI 
>> partitions as well ?
> 
> Because that means that distros need to know all about EVERY OTHER 
> OPERATING SYSTEM?

Why would the distributions need to know all about every other OS, and 
what do you mean by "all"?

> I don't know whose fault it was, probably Microsoft's, but I gave up 
> trying to dual-boot a laptop ...

I blame UEFI firmware vendors first, for improperly implementing the 
UEFI boot specification. Then I blame operating system vendors for not 
natively handling the case of multiple redundant EFI partitions.

Back on topic, if you mean Windows+Linux dual boot, it seems unlikely to 
me that this can be achieved with Linux software RAID, because Windows 
does not support it and Windows software RAID usually works on whole drives.
If you mean Linux dual-boot, you do not need multiple boot loaders, one 
single boot loader can boot all Linux systems.

>>> At the end of the day, it's down to the user, and if you can shove a 
>>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>>> that's probably the best place. Then it doesn't get missed ...
>>
>> No, these are not adequate places. Too early or too late. The right 
>> place is when anything is written to the EFI partition.
> 
> I would agree with you. But that requires EVERY OS on the computer to 
> co-operate. I think you are being - shall we say - optimistic?
> 
> Fact: Other systems outside of linux meddle with the EFI. Conclusion: 
> modifying linux to sync EFI *at the point of modification* is going to 
> fail. Badly.

I suspect we are misunderstanding each other about the meaning of "sync 
EFI partitions", and it may be my fault for not being clear enough. 
First, I do not mean to sync the whole EFI partition contents but only 
the part which belongs to the running operating system. This way, 
neither knowledge nor cooperation of other operating systems is required.
Second, I do not mean "sync" in the sense of doing what rsync does but 
in the sense of writing the same stuff to all EFI partitions in the 
first place, instead of writing stuff to one "main" EFI partition only 
and syncing the other EFI partitions at a later time.


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 13:32             ` Pascal Hambourg
@ 2023-01-21 14:04               ` Wols Lists
  2023-01-21 14:33                 ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Wols Lists @ 2023-01-21 14:04 UTC (permalink / raw)
  To: Pascal Hambourg, H, Linux RAID Mailing List

On 21/01/2023 13:32, Pascal Hambourg wrote:
> Back on topic, if you mean Windows+Linux dual boot, it seems unlikely to 
> me that this can be achieved with Linux software RAID, because Windows 
> does not support it and Windows software RAID usually works on whole 
> drives.
> If you mean Linux dual-boot, you do not need multiple boot loaders, one 
> single boot loader can boot all Linux systems.

Given that this all started with *MIRRORING* EFI partitions, I think 
you've lost the thread ...

I'm fully in agreement that - if we want to keep our EFI partitions in 
sync - then doing so when the partition is updated is the best TIME (not 
place) to do it. (Which is why mirroring makes sense.)

It's just that - as soon as you bring multiple OSes (of any sort) into 
it - this ceases to be a practical solution.

Even something as simple as using a single boot loader with multiple 
linux distros is fraught with problems. I did exactly that, and the 
automated grub updater left *ALL* OSes unbootable. I was forced to boot 
with an emergency CD and edit grub.cfg by hand to get even one linux 
back, so I could then try to fix the rest.

THERE'S TOO MANY WAYS TO SKIN THIS CAT and trying to automate it will in 
almost all cases lead to tears :-(

Cheers,
Wol


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 12:17           ` Reindl Harald
@ 2023-01-21 14:15             ` Pascal Hambourg
  2023-01-21 14:31               ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 14:15 UTC (permalink / raw)
  To: Reindl Harald, Wol, Phil Turmel, H, Linux RAID Mailing List

On 21/01/2023 at 13:17, Reindl Harald wrote:
> On 20.01.23 at 22:01, Pascal Hambourg wrote:
>> On 20/01/2023 at 21:26, Wol wrote:
>>>
>>> if you can shove a 
>>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>>> that's probably the best place. Then it doesn't get missed ...
>>
>> No, these are not adequate places. Too early or too late. The right 
>> place is when anything is written to the EFI partition
> 
> these ARE adequate places and a "too late" simply can't exist - after 
> the package transition /efi/ should be rsynced to /efi-bkp/

Using only Debian-based distributions and GRUB, I confess that I know 
very little about other distributions and boot loaders such as Red Hat 
and systemd-boot. In Debian, package upgrades do not happen during the 
boot sequence, so waiting for the next boot to sync EFI partitions 
implies that they will be out of sync when the boot loader is run at the 
next boot, possibly breaking the boot sequence.

> https://bugzilla.redhat.com/show_bug.cgi?id=2133294

Quoting:
"with uefi you can no longer have everything needed for boot on a RAID"

AFAIK, that was never possible with legacy BIOS boot either. The MBR and 
GRUB core image were outside RAID.


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 14:15             ` Pascal Hambourg
@ 2023-01-21 14:31               ` Reindl Harald
  2023-01-21 14:38                 ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 14:31 UTC (permalink / raw)
  To: Pascal Hambourg, Wol, Phil Turmel, H, Linux RAID Mailing List



On 21.01.23 at 15:15, Pascal Hambourg wrote:
> On 21/01/2023 at 13:17, Reindl Harald wrote:
>> On 20.01.23 at 22:01, Pascal Hambourg wrote:
>>> On 20/01/2023 at 21:26, Wol wrote:
>>>>
>>>> if you can shove a quick rsync in the initramfs or boot sequence to 
>>>> sync EFIs, then that's probably the best place. Then it doesn't get 
>>>> missed ...
>>>
>>> No, these are not adequate places. Too early or too late. The right 
>>> place is when anything is written to the EFI partition
>>
>> these ARE adequate places and a "too late" simply can't exist - after 
>> the package transition /efi/ should be rsynced to /efi-bkp/
> 
> Using only Debian-based distributions and GRUB, I confess that I know 
> very little about other distributions and boot loaders such as Red Hat 
> and systemd-boot. In Debian, package upgrades do not happen during the 
> boot sequence, so waiting for the next boot to sync EFI partitions 
> implies that they will be out of sync when the boot loader is run at the 
> next boot, possibly breaking the boot sequence.
> 
>> https://bugzilla.redhat.com/show_bug.cgi?id=2133294
> 
> Quoting:
> "with uefi you can no longer have everything needed for boot on a RAID"
> 
> AFAIK, that was never possible with legacy BIOS boot either. The MBR and 
> GRUB core image were outside RAID

yeah, we all know that you need "grub-install /dev/sdx" on each device - 
that's common knowledge and stuff outside the RAID partitions

and it's not something which got changed at kernel-updates and so irrelevant


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 14:04               ` Wols Lists
@ 2023-01-21 14:33                 ` Pascal Hambourg
  2023-01-21 15:21                   ` Wols Lists
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 14:33 UTC (permalink / raw)
  To: Wols Lists, H, Linux RAID Mailing List

On 21/01/2023 at 15:04, Wols Lists wrote:
> On 21/01/2023 13:32, Pascal Hambourg wrote:
>> Back on topic, if you mean Windows+Linux dual boot, it seems unlikely 
>> to me that this can be achieved with Linux software RAID, because 
>> Windows does not support it and Windows software RAID usually works on 
>> whole drives.
>> If you mean Linux dual-boot, you do not need multiple boot loaders, 
>> one single boot loader can boot all Linux systems.
> 
> Given that this all started with *MIRRORING* EFI partitions, I think 
> you've lost the thread ...
> 
> I'm fully in agreement that - if we want to keep our EFI partitions in 
> sync - then doing so when the partition is updated is the best TIME (not 
> place) to do it. (Which is why mirroring makes sense.)
> 
> It's just that - as soon as you bring multiple OSes (of any sort) into 
> it - this ceases to be a practical solution.

It depends if you mean "mirroring" with rsync or with RAID.
Also, it depends what the OS sorts are. With only Linux systems all 
using EFI partitions in RAID1, it might work.

> THERE'S TOO MANY WAYS TO SKIN THIS CAT and trying to automate it will in 
> almost all cases lead to tears :-(

This is why I claim that the only universal solution is that each OS 
supports multiple EFI partitions natively when writing any file in an 
EFI partition. Mirroring is a dead end.


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 14:31               ` Reindl Harald
@ 2023-01-21 14:38                 ` Pascal Hambourg
  2023-01-21 14:52                   ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 14:38 UTC (permalink / raw)
  To: Reindl Harald, Wol, Phil Turmel, H, Linux RAID Mailing List

On 21/01/2023 at 15:31, Reindl Harald wrote:
> On 21.01.23 at 15:15, Pascal Hambourg wrote:
>> On 21/01/2023 at 13:17, Reindl Harald wrote:
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=2133294
>>
>> Quoting:
>> "with uefi you can no longer have everything needed for boot on a RAID"
>>
>> AFAIK, that was never possible with legacy BIOS boot either. The MBR 
>> and GRUB core image were outside RAID
> 
> yeah, we all know that you need "grub-install /dev/sdx" on each device - 
> that's common knowledge and stuff outside the RAID partitions
> 
> and it's not something which got changed at kernel-updates and so 
> irrelevant

Then what is your point?


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 14:38                 ` Pascal Hambourg
@ 2023-01-21 14:52                   ` Reindl Harald
  2023-01-21 15:17                     ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 14:52 UTC (permalink / raw)
  To: Pascal Hambourg, Wol, Phil Turmel, H, Linux RAID Mailing List



On 21.01.23 at 15:38, Pascal Hambourg wrote:
> On 21/01/2023 at 15:31, Reindl Harald wrote:
>> On 21.01.23 at 15:15, Pascal Hambourg wrote:
>>> On 21/01/2023 at 13:17, Reindl Harald wrote:
>>>>
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2133294
>>>
>>> Quoting:
>>> "with uefi you can no longer have everything needed for boot on a RAID"
>>>
>>> AFAIK, that was never possible with legacy BIOS boot either. The MBR 
>>> and GRUB core image were outside RAID
>>
>> yeah, we all know that you need "grub-install /dev/sdx" on each device 
>> - that's common knowledge and stuff outside the RAID partitions
>>
>> and it's not something which got changed at kernel-updates and so 
>> irrelevant
> 
> Then what is your point?

what was your point, when we all know that the MBR wasn't part of the 
RAID and frankly wasn't stored inside a partition at all?

my point in that bug report is that I don't want to manually call 
"backup-efi.sh" after kernel updates, which happen often on Fedora

kernel-install is responsible for creating the initrd and so on - if I 
can tell it to call /scripts/backup-efi.sh after it is done, my ESP 
partitions on both drives are always in sync

and no, the 1:1000000 chance that a crash happens in between isn't 
relevant, because the whole kernel-install/initrd dance isn't atomic 
on its own


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 14:52                   ` Reindl Harald
@ 2023-01-21 15:17                     ` Pascal Hambourg
  2023-01-21 16:24                       ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 15:17 UTC (permalink / raw)
  To: Reindl Harald, Linux RAID Mailing List

On 21/01/2023 at 15:52, Reindl Harald wrote:
> On 21.01.23 at 15:38, Pascal Hambourg wrote:
>> On 21/01/2023 at 15:31, Reindl Harald wrote:
>>> On 21.01.23 at 15:15, Pascal Hambourg wrote:
>>>> On 21/01/2023 at 13:17, Reindl Harald wrote:
>>>>>
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2133294
>>>>
>>>> Quoting:
>>>> "with uefi you can no longer have everything needed for boot on a RAID"
>>>>
>>>> AFAIK, that was never possible with legacy BIOS boot either. The MBR 
>>>> and GRUB core image were outside RAID
>>>
>>> yeah, we all know that you need "grub-install /dev/sdx" on each 
>>> device - that's common knowledge and stuff outside the RAID partitions
>>>
>>> and it's not something which got changed at kernel-updates and so 
>>> irrelevant
>>
>> Then what is your point?
> 
> what was your point, when we all know that the MBR wasn't part of the 
> RAID and frankly wasn't stored inside a partition at all?

My point was that UEFI did not change the fact that "you cannot have 
everything needed for boot on a RAID", so nothing new here.

> my point in that bug report is that I don't want to manually call 
> "backup-efi.sh" after kernel updates, which happen often on Fedora
> 
> kernel-install is responsible for creating the initrd and so on - if I 
> can tell it to call /scripts/backup-efi.sh after it is done, my ESP 
> partitions on both drives are always in sync

What is written to the EFI partition on a kernel update in Fedora? In 
Debian, the EFI partition is written only on a grub package update or 
when running grub-install.

> and no, the 1:1000000 chance that a crash happens in between isn't 
> relevant, because the whole kernel-install/initrd dance isn't atomic 
> on its own

Not my point. My point is that if secondary EFI partitions are updated 
only during the boot sequence then they will be out of sync at the next 
boot following an update of the primary EFI partition.


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 14:33                 ` Pascal Hambourg
@ 2023-01-21 15:21                   ` Wols Lists
  2023-01-21 18:32                     ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Wols Lists @ 2023-01-21 15:21 UTC (permalink / raw)
  To: Pascal Hambourg, H, Linux RAID Mailing List

On 21/01/2023 14:33, Pascal Hambourg wrote:
> On 21/01/2023 at 15:04, Wols Lists wrote:
>> On 21/01/2023 13:32, Pascal Hambourg wrote:
>>> Back on topic, if you mean Windows+Linux dual boot, it seems unlikely 
>>> to me that this can be achieved with Linux software RAID, because 
>>> Windows does not support it and Windows software RAID usually works 
>>> on whole drives.
>>> If you mean Linux dual-boot, you do not need multiple boot loaders, 
>>> one single boot loader can boot all Linux systems.
>>
>> Given that this all started with *MIRRORING* EFI partitions, I think 
>> you've lost the thread ...
>>
>> I'm fully in agreement that - if we want to keep our EFI partitions in 
>> sync - then doing so when the partition is updated is the best TIME 
>> (not place) to do it. (Which is why mirroring makes sense.)
>>
>> It's just that - as soon as you bring multiple OSes (of any sort) into 
>> it - this ceases to be a practical solution.
> 
> It depends if you mean "mirroring" with rsync or with RAID.
> Also, it depends what the OS sorts are. With only Linux systems all 
> using EFI partitions in RAID1, it might work.
> 
>> THERE'S TOO MANY WAYS TO SKIN THIS CAT and trying to automate it will 
>> in almost all cases lead to tears :-(
> 
> This is why I claim that the only universal solution is that each OS 
> supports multiple EFI partitions natively when writing any file in an 
> EFI partition. Mirroring is a dead end.

Is that one EFI per OS, or multiple identical EFI? :-)

There's too many ways to skin this cat ...

Cheers,
Wol


* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 15:17                     ` Pascal Hambourg
@ 2023-01-21 16:24                       ` Reindl Harald
  2023-01-21 18:52                         ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 16:24 UTC (permalink / raw)
  To: Pascal Hambourg, Linux RAID Mailing List



On 21.01.23 at 16:17, Pascal Hambourg wrote:
> My point was that UEFI did not change the fact that "you cannot have 
> everything needed for boot on a RAID", so nothing new here.

useless nitpicking isn't helpful

>> my point in that bugreport is that i don't want to manually call 
>> "backup-efi.sh" after kernel updates which are happening often on Fedora
>>
>> kernel-install.sh is responsible for create the initrd and so on - 
>> when i can tell that "call /scripts/backup-efi.sh" after you are done 
>> my ESP partitions on both drives are always in sync
> 
> What is written in the EFI partition on kernel update in Fedora ? In 
> Debian, the EFI partition is written only on grub package update or when 
> running grub-install.

and where do you think the kernel selection is stored?

https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault

[root@srv-rhsoft:~]$ ls /efi/loader/entries/
total 16K
-rwxr-xr-x 1 root root 585 2023-01-15 12:46 
3871a85f73dce2f522a1a97b00001bf2-6.1.6-100.fc36.x86_64.conf
-rwxr-xr-x 1 root root 651 2023-01-19 00:30 
3871a85f73dce2f522a1a97b00001bf2-6.1.7-100.fc36.x86_64.conf

>> and no, the 1:1000000 chance that a crash happens in between isn't 
>> relevant because the whole kernel-install/initrd dance isn't atomic 
>> on its own
> 
> Not my point. My point is that if secondary EFI partitions are updated 
> only during the boot sequence then they will be out of sync at the next 
> boot following an update of the primary EFI partition

nobody is talking about updating it during the boot sequence

* kernel-install generates initrd and boot entries
* kernel-install needs a drop-in to run a script
   after it's finished
* that script can rsync /efi/ to /efi-bkp/ or wherever
   you mount the ESP on the second drive
* case closed - the ESPs on both drives have the same
   content and it just works

for now you need to rsync manually
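
a minimal sketch of such a drop-in, assuming a kernel-install that runs 
plugins from /etc/kernel/install.d/ and a backup ESP mounted at 
/efi-bkp (script name and paths are illustrative):

#!/bin/bash
# hypothetical /etc/kernel/install.d/99-sync-esp.install
# kernel-install invokes plugins as: <script> add|remove <kernel-version> ...
case "$1" in
    add|remove)
        # mirror the primary ESP to the backup ESP; --delete also
        # removes boot entries that were dropped from the primary
        rsync -a --delete /efi/ /efi-bkp/
        ;;
esac
exit 0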

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 15:21                   ` Wols Lists
@ 2023-01-21 18:32                     ` Pascal Hambourg
  2023-01-21 18:39                       ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 18:32 UTC (permalink / raw)
  To: Wols Lists, Linux RAID Mailing List

On 21/01/2023 at 16:21, Wols Lists wrote:
> On 21/01/2023 14:33, Pascal Hambourg wrote:
>> On 21/01/2023 at 15:04, Wols Lists wrote:
>>> On 21/01/2023 13:32, Pascal Hambourg wrote:
>>>> Back on topic, if you mean Windows+Linux dual boot, it seems 
>>>> unlikely to me that this can be achieved with Linux software RAID, 
>>>> because Windows does not support it and Windows software RAID 
>>>> usually works on whole drives.
>>>> If you mean Linux dual-boot, you do not need multiple boot loaders, 
>>>> one single boot loader can boot all Linux systems.
>>>
>>> Given that this all started with *MIRRORING* EFI partitions, I think 
>>> you've lost the thread ...
>>>
>>> I'm fully in agreement that - if we want to keep our EFI partitions 
>>> in sync - then doing so when the partition is updated is the best 
>>> TIME (not place) to do it. (Which is why mirroring makes sense.)
>>>
>>> It's just that - as soon as you bring multiple OSes (of any sort) 
>>> into it - this ceases to be a practical solution.
>>
>> It depends if you mean "mirroring" with rsync or with RAID.
>> Also, it depends what the OS sorts are. With only Linux systems all 
>> using EFI partitions in RAID1, it might work.
>>
>>> THERE'S TOO MANY WAYS TO SKIN THIS CAT and trying to automate it will 
>>> in almost all cases lead to tears :-(
>>
>> This is why I claim that the only universal solution is that each OS 
>> supports multiple EFI partitions natively when writing any file in an 
>> EFI partition. Mirroring is a dead end.
> 
> Is that one EFI per OS, or multiple identical EFI? :-)

Neither. Multiple possibly not identical EFI partitions.
Mirroring with one EFI partition per OS does not make much sense.
And it would not be universal if it required identical partitions.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 18:32                     ` Pascal Hambourg
@ 2023-01-21 18:39                       ` Reindl Harald
  2023-01-21 18:57                         ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 18:39 UTC (permalink / raw)
  To: Pascal Hambourg, Wols Lists, Linux RAID Mailing List



On 21.01.23 at 19:32, Pascal Hambourg wrote:
> On 21/01/2023 at 16:21, Wols Lists wrote:
>> Is that one EFI per OS, or multiple identical EFI? :-)
> 
> Neither. Multiple possibly not identical EFI partitions

is completely off-topic BTW

> Mirroring with one EFI partition per OS does not make much sense.
> And it would not be universal if it required identical partitions.

and how do you expect the UEFI to smell which one is supposed to be booted 
from?

you can't fix the design-errors of UEFI

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 16:24                       ` Reindl Harald
@ 2023-01-21 18:52                         ` Pascal Hambourg
  2023-01-21 18:57                           ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 18:52 UTC (permalink / raw)
  To: Reindl Harald, Linux RAID Mailing List

On 21/01/2023 at 17:24, Reindl Harald wrote:
> On 21.01.23 at 16:17, Pascal Hambourg wrote:
>> My point was that UEFI did not change the fact that "you cannot have 
>> everything needed for boot on a RAID", so nothing new here.
> 
> useless nitpicking isn't helpful

Barking up the wrong tree isn't useful either. EFI is not the culprit.

>> What is written in the EFI partition on kernel update in Fedora ? In 
>> Debian, the EFI partition is written only on grub package update or 
>> when running grub-install.
> 
> and where do you think the kernel selection is stored?

I already wrote I don't know anything about Fedora.

> https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault

I did not find any information on this page about where the kernel 
selection is stored.

> [root@srv-rhsoft:~]$ ls /efi/loader/entries/

I guess /efi is the EFI partition mount point. Well then, any system 
tool that populates this directory should be able to manage multiple EFI 
partitions.

>> Not my point. My point is that if secondary EFI partitions are updated 
>> only during the boot sequence then they will be out of sync at the 
>> next boot following an update of the primary EFI partition
> 
> nobody is talking about updating it during the boot sequence

Please re-read the thread.
In Message-ID: <d1a78f14-843a-e6f1-b909-67e091c5fa3f@youngman.org.uk>, 
Wol wrote:
> quick rsync in the initramfs or boot sequence to sync EFIs, then that's 
> probably the best place.

And in Message-ID: <acc6add5-347b-7ecb-f6e9-056d21783984@thelounge.net>, 
after I expressed disagreement, you wrote:
> these ARE adequate places



> 
> * kernel-install generates initrd and boot entries
> * kernel-install needs a drop-in to run a script
>    after it's finished
> * that script can rsync /efi/ to /efi-bkp/ or wherever
>    you mount the ESP on the second drive
> * case closed - the ESPs on both drives have the same
>    content and it just works
> 
> for now you need to rsync manually

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 18:52                         ` Pascal Hambourg
@ 2023-01-21 18:57                           ` Reindl Harald
  2023-01-21 20:04                             ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 18:57 UTC (permalink / raw)
  To: Pascal Hambourg, Linux RAID Mailing List



On 21.01.23 at 19:52, Pascal Hambourg wrote:
> On 21/01/2023 at 17:24, Reindl Harald wrote:
>> On 21.01.23 at 16:17, Pascal Hambourg wrote:
>>> My point was that UEFI did not change the fact that "you cannot have 
>>> everything needed for boot on a RAID", so nothing new here.
>>
>> useless nitpicking isn't helpful
> 
> Barking up the wrong tree isn't useful either. EFI is not the culprit.

but it is the root cause - cause and effect

>>> What is written in the EFI partition on kernel update in Fedora ? In 
>>> Debian, the EFI partition is written only on grub package update or 
>>> when running grub-install.
>>
>> and where do you think the kernel selection is stored?
> 
> I already wrote I don't know anything about Fedora.
> 
>> https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault
> 
> I did not find any information on this page about where the kernel 
> selection is stored.

in /efi/loader/entries/

>> [root@srv-rhsoft:~]$ ls /efi/loader/entries/
> 
> I guess /efi is the EFI partition mount point. Well then, any system 
> tool that populates this directory should be able to manage multiple EFI 
> partitions.

yeah, and some tools think they have to rewrite every snippet in that 
dir no matter which os-install it came from

>>> Not my point. My point is that if secondary EFI partitions are 
>>> updated only during the boot sequence then they will be out of sync 
>>> at the next boot following an update of the primary EFI partition
>>
>> nobody is talking about updating it during the boot sequence
> 
> Please re-read the thread.
> In Message-ID: <d1a78f14-843a-e6f1-b909-67e091c5fa3f@youngman.org.uk>, 
> Wol wrote:
>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>> that's probably the best place.

yeah, initramfs is fine because that's generated during kernel-install

> And in Message-ID: <acc6add5-347b-7ecb-f6e9-056d21783984@thelounge.net>, 
> after I expressed disagreement, you wrote:
>> these ARE adequate places

ok, my mistake: initramfs generation is fine because at that point 
everything is already there, the initrd is located on the EFI and when 
that's finished is the point to sync a backup ESP

>> * kernel-install generates initrd and boot entries
>> * kernel-install needs a drop-in to run a script
>>    after it's finished
>> * that script can rsync /efi/ to /efi-bkp/ or wherever
>>    you mount the ESP on the second drive
>> * case closed - the ESPs on both drives have the same
>>    content and it just works
>>
>> for now you need to rsync manually
and that was my point

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 18:39                       ` Reindl Harald
@ 2023-01-21 18:57                         ` Pascal Hambourg
  2023-01-21 19:08                           ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 18:57 UTC (permalink / raw)
  To: Reindl Harald, Wols Lists, Linux RAID Mailing List

On 21/01/2023 at 19:39, Reindl Harald wrote:
> On 21.01.23 at 19:32, Pascal Hambourg wrote:
>> On 21/01/2023 at 16:21, Wols Lists wrote:
>>> Is that one EFI per OS, or multiple identical EFI? :-)
>>
>> Neither. Multiple possibly not identical EFI partitions
> 
> is completely off-topic BTW

Can you elaborate ?

>> Mirroring with one EFI partition per OS does not make much sense.
>> And it would not be universal if it required identical partitions.
> 
> and how do you expect the UEFI to smell which one is supposed to be booted 
> from?

Boot* and BootOrder EFI variables exist for a purpose.

> you can't fix the design-errors of UEFI

But you can (and should) deal with them.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 18:57                         ` Pascal Hambourg
@ 2023-01-21 19:08                           ` Reindl Harald
  2023-01-21 22:43                             ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 19:08 UTC (permalink / raw)
  To: Pascal Hambourg, Wols Lists, Linux RAID Mailing List



On 21.01.23 at 19:57, Pascal Hambourg wrote:
> On 21/01/2023 at 19:39, Reindl Harald wrote:
>> On 21.01.23 at 19:32, Pascal Hambourg wrote:
>>> On 21/01/2023 at 16:21, Wols Lists wrote:
>>>> Is that one EFI per OS, or multiple identical EFI? :-)
>>>
>>> Neither. Multiple possibly not identical EFI partitions
>>
>> is completely off-topic BTW

how is that related to RAID at all?

> Can you elaborate ?
> 
>>> Mirroring with one EFI partition per OS does not make much sense.
>>> And it would not be universal if it required identical partitions.
>>
>> and how do you expect the UEFI to smell which one is supposed to be 
>> booted from?
> 
> Boot* and BootOrder EFI variables exist for a purpose.

yeah, and every one of your operating systems deals with them autistically

>> you can't fix the design-errors of UEFI
> 
> But you can (and should) deal with them

and you do, by simply having ONE ESP per machine and mirroring them by 
hand as long as kernel-install is too dumb for simple post-scripts

finally: i am done with this thread, it's only wasting everybody's time 
and likely you need to make your own experience and learn the hard way 
that multi-boot always was shit and became even more shit with EFI

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 18:57                           ` Reindl Harald
@ 2023-01-21 20:04                             ` Pascal Hambourg
  2023-01-21 20:44                               ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 20:04 UTC (permalink / raw)
  To: Reindl Harald, Linux RAID Mailing List

On 21/01/2023 at 19:57, Reindl Harald wrote:
> On 21.01.23 at 19:52, Pascal Hambourg wrote:
>> On 21/01/2023 at 17:24, Reindl Harald wrote:
>>> On 21.01.23 at 16:17, Pascal Hambourg wrote:
>>>> My point was that UEFI did not change the fact that "you cannot have 
>>>> everything needed for boot on a RAID", so nothing new here.
>>>
>>> useless nitpicking isn't helpful
>>
>> Barking up the wrong tree isn't useful either. EFI is not the culprit.
> 
> but it is the root cause - cause and effect

No, EFI is not the root cause either. The root cause is carelessly 
storing stuff in the bootloader area as if it was part of the standard 
Linux filesystem. Guess what ? It is not. Even though the EFI partition 
contains a filesystem, it is not a part of the standard Linux filesystem 
and requires special consideration, just like the MBR, the post-MBR gap 
or the BIOS boot partition.

You can blame Fedora for this. I blame Debian for this. I praise Ubuntu 
for managing multiple EFI partitions at last, and I do not often praise 
Ubuntu, believe me.

>>> https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault
>>
>> I did not find any information on this page about where the kernel 
>> selection is stored.
> 
> in /efi/loader/entries/

Weird, the wiki mentions /boot/loader/entries/.

>> Wol wrote:
>>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>>> that's probably the best place.
> 
> yeah, initramfs is fine because that's generated during kernel-install

Aren't you confusing initramfs execution with generation ?

>> ok, my mistake: initramfs generation is fine because at that point 
>> everything is already there, the initrd is located on the EFI and when 
>> that's finished is the point to sync a backup ESP

But that's not enough, because other parts of the system may write to 
the EFI partition, so it does not completely solve the issue.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 20:04                             ` Pascal Hambourg
@ 2023-01-21 20:44                               ` Reindl Harald
  2023-01-21 22:56                                 ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 20:44 UTC (permalink / raw)
  To: Pascal Hambourg, Linux RAID Mailing List



On 21.01.23 at 21:04, Pascal Hambourg wrote:
> On 21/01/2023 at 19:57, Reindl Harald wrote:
>> On 21.01.23 at 19:52, Pascal Hambourg wrote:
>>> On 21/01/2023 at 17:24, Reindl Harald wrote:
>>>> On 21.01.23 at 16:17, Pascal Hambourg wrote:
>>>>> My point was that UEFI did not change the fact that "you cannot 
>>>>> have everything needed for boot on a RAID", so nothing new here.
>>>>
>>>> useless nitpicking isn't helpful
>>>
>>> Barking up the wrong tree isn't useful either. EFI is not the culprit.
>>
>> but it is the root cause - cause and effect
> 
> No, EFI is not the root cause either. The root cause is carelessly 
> storing stuff in the bootloader area as if it was part of the standard 
> Linux filesystem. Guess what ? It is not. 

LSB is dead

> Even though the EFI partition 
> contains a filesystem, it is not a part of the standard Linux filesystem 
> and requires special consideration, just like the MBR, the post-MBR gap 
> or the BIOS boot partition.

LSB is dead

> You can blame Fedora for this. I blame Debian for this. I praise Ubuntu 
> for managing multiple EFI partitions at last, and I do not often praise 
> Ubuntu, believe me.

i don't blame anybody

"BootLoaderSpec" is supposed to fix all the mess around UEFI and bootloaders

>>>> https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault
>>>
>>> I did not find any information on this page about where the kernel 
>>> selection is stored.
>>
>> in /efi/loader/entries/
> 
> Weird, the wiki mentions /boot/loader/entries/.

because that will be the default for everyone in the future

>>> Wol wrote:
>>>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>>>> that's probably the best place.
>>
>> yeah, initramfs is fine because that's generated during kernel-install
> 
> Aren't you confusing initramfs execution with generation ?

initramfs execution is completely irrelevant for the topic

>> ok, my mistake: initramfs generation is fine because at that point 
>> everything is already there, the initrd is located on the EFI and when 
>> that's finished is the point to sync a backup ESP
> 
> But that's not enough, because other parts of the system may write to 
> the EFI partition, so it does not completely solve the issue

the issue "my primary drive died and i need to boot from the second 
drive" is solved - period

nobody gives a shit about runtime stuff probably written to the ESP when 
a drive dies - the only question at that moment is "does my machine boot 
regulary after one of both drives died and can i start a RAID-rebuild 
from my normal environment and continue my work"

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-20 20:26       ` Wol
  2023-01-20 21:01         ` Pascal Hambourg
@ 2023-01-21 21:20         ` Phil Turmel
  1 sibling, 0 replies; 49+ messages in thread
From: Phil Turmel @ 2023-01-21 21:20 UTC (permalink / raw)
  To: Wol, Pascal Hambourg, H, Linux RAID Mailing List

[-- Attachment #1: Type: text/plain, Size: 2102 bytes --]

Sigh,

On 1/20/23 15:26, Wol wrote:
> On 20/01/2023 19:27, Pascal Hambourg wrote:
>> Why in initramfs-tools ? The initramfs has nothing to do with the 
>> bootloader installation nor the EFI partition so there is no need to 
>> resync EFI partitions on initramfs update (unless GRUB menu entries or 
>> kernel and initramfs images are in the EFI partition, which is not a 
>> great idea IMO). IMO the right place would be a hook called after the 
>> system configuration manager or the GRUB package runs grub-install, if 
>> that exists.
> 
> I think you've just put your finger on it. Multiple EFI partitions is 
> outside the remit of linux, and having had two os's arguing over which 
> was the right EFI, I really don't think the system manager - be it yast, 
> yum, apt, whatever - is capable of even trying. With a simple 
> configuration you don't have mirrored EFI, some systems have one EFI per 
> OS, others have one EFI for several OSes, ...

Linux has efibootmgr, which certainly can manage multiple EFI partitions.

If using grub on multiple efi partitions, you would use efibootmgr one 
time (after grub-install) to ensure all of your partitions were listed 
in the order you want them tried.  If one gets corrupted the BIOS will 
fall back to the next one.

{ This is the #1 reason to use EFI instead of MBR boot. }
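
A sketch of that one-time registration (device names and loader path 
are illustrative, adjust to your layout):

# create an entry for the ESP on each member drive
efibootmgr -c -d /dev/sda -p 1 -L "grub sda" -l '\EFI\centos\grubx64.efi'
efibootmgr -c -d /dev/sdb -p 1 -L "grub sdb" -l '\EFI\centos\grubx64.efi'
# then set the order the firmware tries them in, using the entry
# numbers reported by efibootmgr -v, e.g.
efibootmgr -o 0000,0001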

If using EFI boot *without* GRUB, you want your actual bootable kernel 
*and* initramfs in place in all of your EFI partitions.

I use an initramfs hook for this on some of my production servers. 
Kernel install reliably installs an initramfs, too, so this hook is the 
right place for my use case.

> At the end of the day, it's down to the user, and if you can shove a 
> quick rsync in the initramfs or boot sequence to sync EFIs, then that's 
> probably the best place. Then it doesn't get missed ...

As noted, rsync on boot is too late.  rsync on kernel or initramfs 
install is best.

> Cheers,
> Wol

My script for direct booting (instead of grub) is attached.  Works with 
distro kernels that have EFI_STUB turned on.  (Ubuntu Server, in my case.)

Regards,

Phil

[-- Attachment #2: 99-direct-efi --]
[-- Type: text/plain, Size: 1822 bytes --]

#! /bin/bash
#
# Normally installed in /etc/initramfs/post-update.d/ and marked
# executable.
#
# Move newly installed initramfs to /boot/efi and ensure corresponding
# kernel is also moved.  If the kernel has to be moved, also update
# the EFI Boot Manager with the new kernel and prune old kernels.
#
# This routine is called with the kernel version in argument #1 and the 
# name of the initramfs file in argument #2

# Obtain root fs info for boot command line
source <(blkid -o export $(df / |grep -o '^/dev/[^ ]\+')) 
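# Use the device path when root is on device-mapper or a /dev/<vg>/<lv>
# LVM volume; otherwise fall back to root=UUID=<filesystem uuid>.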
if test "${DEVNAME:0:12}" == "/dev/mapper/" ; then
	export RootSpec="$DEVNAME"
else
	if test "${DEVNAME:0:5}" == "/dev/" ; then
		vglv="${DEVNAME:5}"
		vg="${vglv%%/*}"
		lv="${vglv#$vg}"
		if test -n "$lv" ; then
			export RootSpec="$DEVNAME"
		else
			export RootSpec="UUID=$UUID"
		fi
	else
		export RootSpec="UUID=$UUID"
	fi
fi

# Destinations must have a trailing slash.
for DEST in /bootB/ /bootA/
do
	# First, copy the updated initramfs.
	cp "$2" "$DEST"

	# Construct the target kernel efi file name and check for existence
	export KF="${DEST}vmlinuz-$1.efi"
	test -f "$KF" && continue

	# Need to copy and register the kernel.
	echo "Copying $KF ..."
	cp "/boot/vmlinuz-$1" "$KF"

	# Obtain EFI boot fs info for boot partition info and file locations
	source <(blkid -o export $(df "$DEST" |grep -o '^/dev/[^ ]\+')) 

	BOOT="$(sed -r -e 's/(.+[^0-9])([0-9]+)/\1:\2/' <<< "$DEVNAME")"
	read dummy1 MOUNTPT other <<< "$(grep "^$DEVNAME " /proc/mounts)"
	EFIROOT="$(echo ${DEST##$MOUNTPT} |sed 's/\//\\/g')"

	# Set the new boot record
	efibootmgr -q -c --disk "${BOOT%%:*}" --part "${BOOT##*:}" \
		--label "EFI Direct $DEST $1" \
		--loader "${EFIROOT}vmlinuz-$1.efi" \
		-u "initrd=${EFIROOT}$(basename "$2") root=${RootSpec}"

	echo "Configured EFI Direct $DEST $1 as $EFIROOT on $BOOT"

done

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 19:08                           ` Reindl Harald
@ 2023-01-21 22:43                             ` Pascal Hambourg
  2023-01-21 22:56                               ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 22:43 UTC (permalink / raw)
  To: Reindl Harald, Wols Lists, Linux RAID Mailing List

On 21/01/2023 at 20:08, Reindl Harald wrote:
> Am 21.01.23 um 19:57 schrieb Pascal Hambourg:
>>> Am 21.01.23 um 19:32 schrieb Pascal Hambourg:
>>>> On 21/01/2023 at 16:21, Wols Lists wrote:
>>>>> Is that one EFI per OS, or multiple identical EFI? :-)
>>>>
>>>> Neither. Multiple possibly not identical EFI partitions
> 
> how is that related to RAID at all?

RAID provides redundancy while the system is running. For boot 
redundancy, the boot loader must be installed on each disk. With EFI 
boot, it means there must be an EFI partition on each disk. But EFI 
partitions do not need to be identical as long as any of them can boot 
the system.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 22:43                             ` Pascal Hambourg
@ 2023-01-21 22:56                               ` Reindl Harald
  2023-01-22  9:04                                 ` Pascal Hambourg
  0 siblings, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 22:56 UTC (permalink / raw)
  To: Pascal Hambourg, Wols Lists, Linux RAID Mailing List



On 21.01.23 at 23:43, Pascal Hambourg wrote:
> On 21/01/2023 at 20:08, Reindl Harald wrote:
>> On 21.01.23 at 19:57, Pascal Hambourg wrote:
>>>> On 21.01.23 at 19:32, Pascal Hambourg wrote:
>>>>> On 21/01/2023 at 16:21, Wols Lists wrote:
>>>>>> Is that one EFI per OS, or multiple identical EFI? :-)
>>>>>
>>>>> Neither. Multiple possibly not identical EFI partitions
>>
>> how is that related to RAID at all?
> 
> RAID provides redundancy while the system is running 

RAID provides redundancy for devices - no matter whether anything is running

> For boot 
> redundancy, the boot loader must be installed on each disk

so what

> With EFI 
> boot, it means there must be an EFI partition on each disk. But EFI 
> partitions do not need to be identical as long as any of them can boot 
> the system

so what - the topic is "Transferring an existing system from non-RAID 
disks to RAID1 disks in the same computer"

anything else, for the sake of god, belongs to a different topic

the same as for "What does TRIM/discard in RAID do", which is a question 
about the state of play

would you be so kind as to stop mixing completely different topics in the 
same thread - that way people who don't give a shit about certain topics 
can simply ignore them




^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 20:44                               ` Reindl Harald
@ 2023-01-21 22:56                                 ` Pascal Hambourg
  2023-01-21 22:59                                   ` Reindl Harald
  2023-01-21 23:02                                   ` Reindl Harald
  0 siblings, 2 replies; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-21 22:56 UTC (permalink / raw)
  To: Reindl Harald, Linux RAID Mailing List

On 21/01/2023 at 21:44, Reindl Harald wrote:
> Am 21.01.23 um 21:04 schrieb Pascal Hambourg:
>> On 21/01/2023 at 19:57, Reindl Harald wrote:
>>> Am 21.01.23 um 19:52 schrieb Pascal Hambourg:
>>
>> No, EFI is not the root cause either. The root cause is carelessly 
>> storing stuff in the bootloader area as if it was part of the standard 
>> Linux filesystem. Guess what ? It is not. 
> 
> LSB is dead

LSB has nothing to do with this.

> "BootLoaderSpec" is supposed to fix all the mess around UEFI and 
> bootloaders

You don't seriously believe this, do you ?

>>>> Wol wrote:
>>>>> quick rsync in the initramfs or boot sequence to sync EFIs, then 
>>>>> that's probably the best place.
>>>
>>> yeah, initramfs is fine because that's generated during kernel-install
>>
>> Aren't you confusing initramfs execution with generation ?
> 
> initramfs execution is completely irrelevant for the topic

Why then did you agree that it was a fine place to sync EFI partitions ?

>>> ok, my mistake: initramfs generation is fine because at that point 
>>> everything is already there, the initrd is located on the EFI and 
>>> when that's finished is the point to sync a backup-ESP
>>
>> But that's not enough, because other parts of the system may write to 
>> the EFI partition, so it does not completely solve the issue
> 
> the issue "my primary drive died and i need to boot from the second 
> drive" is solved - period
> 
> nobody gives a shit about runtime stuff possibly written to the ESP when 
> a drive dies

I'm not talking about that. I'm talking about other boot loader updates 
that may happen independently of kernel updates.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 22:56                                 ` Pascal Hambourg
@ 2023-01-21 22:59                                   ` Reindl Harald
  2023-01-21 23:04                                     ` Reindl Harald
  2023-01-21 23:02                                   ` Reindl Harald
  1 sibling, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 22:59 UTC (permalink / raw)
  To: Pascal Hambourg, Linux RAID Mailing List



On 21.01.23 at 23:56, Pascal Hambourg wrote:
> On 21/01/2023 at 21:44, Reindl Harald wrote:
>> On 21.01.23 at 21:04, Pascal Hambourg wrote:
>>> On 21/01/2023 at 19:57, Reindl Harald wrote:
>>>> On 21.01.23 at 19:52, Pascal Hambourg wrote:
>>>
>>> No, EFI is not the root cause either. The root cause is carelessly 
>>> storing stuff in the bootloader area as if it was part of the 
>>> standard Linux filesystem. Guess what ? It is not. 
>>
>> LSB is dead
> 
> LSB has nothing to do with this

fine, so nobody else but you knows what "part of standard Linux 
filesystem" means - i can't read your mind

it's difficult when people mix different topics as well as questions 
about "what is" and "what would be nice if it were"


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 22:56                                 ` Pascal Hambourg
  2023-01-21 22:59                                   ` Reindl Harald
@ 2023-01-21 23:02                                   ` Reindl Harald
  2023-01-22  9:22                                     ` Pascal Hambourg
  1 sibling, 1 reply; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 23:02 UTC (permalink / raw)
  To: Pascal Hambourg, Linux RAID Mailing List



On 21.01.23 at 23:56, Pascal Hambourg wrote:
> On 21/01/2023 at 21:44, Reindl Harald wrote:
>> On 21.01.23 at 21:04, Pascal Hambourg wrote:
>>> On 21/01/2023 at 19:57, Reindl Harald wrote:
>>>> On 21.01.23 at 19:52, Pascal Hambourg wrote:
>>>
>>> No, EFI is not the root cause either. The root cause is carelessly 
>>> storing stuff in the bootloader area as if it was part of the 
>>> standard Linux filesystem. Guess what ? It is not. 
>>
>> LSB is dead
> 
> LSB has nothing to do with this.
> 
>> "BootLoaderSpec" is supposed to fix all the mess around UEFI and 
>> bootloaders
> 
> You don't seriously believe this, do you ?

i have no idea what you don't understand about "is supposed to"

>> initramfs execution is completely irrelevant for the topic
> 
> Why then did you agree that it was a fine place to sync EFI partitions ?

i did not

> I'm not talking about that. I'm talking about other boot loader updates 
> that may happen independently of kernel updates

which is the reason why multi-boot is dead - it didn't work well 20 
years ago and it works even worse these days



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 22:59                                   ` Reindl Harald
@ 2023-01-21 23:04                                     ` Reindl Harald
  0 siblings, 0 replies; 49+ messages in thread
From: Reindl Harald @ 2023-01-21 23:04 UTC (permalink / raw)
  To: Pascal Hambourg, Linux RAID Mailing List



On 21.01.23 at 23:59, Reindl Harald wrote:
> 
> 
> On 21.01.23 at 23:56, Pascal Hambourg wrote:
>> On 21/01/2023 at 21:44, Reindl Harald wrote:
>>> On 21.01.23 at 21:04, Pascal Hambourg wrote:
>>>> On 21/01/2023 at 19:57, Reindl Harald wrote:
>>>>> On 21.01.23 at 19:52, Pascal Hambourg wrote:
>>>>
>>>> No, EFI is not the root cause either. The root cause is carelessly 
>>>> storing stuff in the bootloader area as if it was part of the 
>>>> standard Linux filesystem. Guess what ? It is not. 
>>>
>>> LSB is dead
>>
>> LSB has nothing to do with this
> 
> fine, so nobody else but you knows what "part of standard Linux 
> filesystem" means - i can't read your mind
> 
> it's difficult when people mix different topics as well as questions 
> about "what is" and "what would be nice if it were"

and if i mixed different topics like you do, i would throw 
https://bugzilla.redhat.com/show_bug.cgi?id=2162871 into this thread - 
but it doesn't belong to the topic because it's about "why the f** a 
system doesn't reboot/poweroff" while this topic is about "how do i get a 
system to boot"

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-15 17:25 ` H
@ 2023-01-22  5:05   ` H
  2023-01-22  8:52     ` Pascal Hambourg
                       ` (2 more replies)
  0 siblings, 3 replies; 49+ messages in thread
From: H @ 2023-01-22  5:05 UTC (permalink / raw)
  To: Linux RAID Mailing List

On January 15, 2023 12:25:11 PM EST, H <agents@meddatainc.com> wrote:
>On 01/14/2023 10:12 PM, H wrote:
>> I need to transfer an existing CentOS 7 non-RAID setup using one
>single SSD to a mdadm RAID1 using two much larger SSDs. All three disks
>exist in the same computer at the same time. Both the existing SSD and
>the new ones use LVM and LUKS and the objective is to preserve the OS,
>all other installed software and data etc. Thus no new installation
>should be needed.
>>
>> Since all disks, partitions, LUKS etc have their unique UUIDs I have
>not figured out how to do this and could use help and advice.
>>
>> In preparation for the above, I have:
>>
>> - Used rsync with the flags -aAXHv to copy all files on the existing
>SSD to an external harddisk for backup.
>>
>> - Partitioned the new SSDs as desired, including LVM and LUKS. My
>configuration uses one RAID1 for /boot, another RAID1 partition for
>/boot/efi, and a third one for the rest which also uses LVM and LUKS. I
>actually used a DVD image of Centos 7 (minimal installation) to
>accomplish this which also completed the minimal installation of the OS
>on the new disks. It boots as expected and the RAID partitions seem to
>work as expected.
>>
>> Since I want to actually move my existing installation from the
>existing SSDs, I am not sure whether I should just use rsync to copy
>everything from the old SSD to the new larger ones. However, I expect
>that to also transfer all OS files using the old, now incorrect UUIDs,
>to the new disks after which nothing will work, thus I have not yet
>done that. I could erase the minimal installation of the OS on the new
>disks before rsyncing but have not yet done so.
>>
>> I fully expect to have to do some manual editing of files but am not
>quite sure of all the files I would need to edit after such a copy. I
>have some knowledge of linux but could use some help and advice. For
>instance, I expect that /etc/fstab and /etc/crypttab would need to be
>edited reflecting the UUIDs of the new disks, partitions and LUKS, but
>which other files? Grub2 would also need to be edited I would think.
>>
>> The only good thing is that since both the old disk and the new disks
>are in the same computer, no other hardware will change.
>>
>> Is there another, better (read: simpler) way of accomplishing this
>transfer?
>>
>> Finally, since I do have a backup of the old SSD and there is nothing
>of value on the new mdadm RAID1 disks, except the partition
>information, I do have, if necessary, the luxury of multiple tries.
>What I cannot do, however, is to make any modifications to the existing
>old SSD since I cannot afford not being able to go back to the old SSD
>if necessary.
>>
>> Thanks.
>>
>>
>Upon further thinking, I am wondering if the process below would work?
>As stated above, I have two working disk setups in the same computer
>and depending on the order of disks in the BIOS I can boot any of the
>two setups.
>
>My existing setup uses one disk and no RAID (obviously), LUKS and LVM
>for everything but /boot and /boot/efi, total of three partitions. The
>OS is Centos 7 and I have made a complete backup to an external
>harddisk using rsync ("BACKUP1").
>
>The new one uses two disks, RAID1, LUKS and LVM for everything but
>/boot and /boot/efi, total of four partitions (swap has its own
>partition - not sure why I made it that way). A minimal installation of
>Centos 7 was made to this setup and is working. In other words, UUIDs
>of disks, partitions and LUKS are already configured and working.
>
>So, I am now thinking the following might work:
>
>- Make a rsync backup of the new disks to the external harddisk
>("BACKUP2").
>
>- Delete all files from the new disks except from /boot and /boot/efi.
>
>- Copy all files from all partitions except /boot and /boot/efi from
>BACKUP1 to the new disks. In other words, everything except /boot and
>/boot/efi will now be overwritten.
>
>- I would expect this system not to boot since both /etc/fstab and
>/etc/crypttab on the new disks contain the UUIDs from the old system.
>
>- Copy just /etc/fstab and /etc/crypttab from BACKUP2 to the new disks.
>This should update the new disks with the previously created UUIDs from
>when doing the minimal installation of CentOS 7.
>
>What do you think?

I am happy to share that my plan as outlined above worked. I now have /boot, /boot/efi and / on separate RAID partitions, with the latter managed by LVM and encrypted. All data from the old disk is now on the new setup and everything seems to be working.

However, going back to the issue of /boot/efi possibly not being duplicated by CentOS, would not mdadm take care of that automatically? How can I check?

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-22  5:05   ` H
@ 2023-01-22  8:52     ` Pascal Hambourg
  2023-01-22 17:19     ` Wol
  2023-01-23  3:44     ` Brad Campbell
  2 siblings, 0 replies; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-22  8:52 UTC (permalink / raw)
  To: H, Linux RAID Mailing List

On 22/01/2023 at 06:05, H wrote:
>>
>> The new one uses two disks, RAID1, LUKS and LVM for everything but
>> /boot and /boot/efi, total of four partitions (swap has its own
>> partition - not sure why I made it that way). A minimal installation of
>> Centos 7 was made to this setup and is working. In other words, UUIDs
>> of disks, partitions and LUKS are already configured and working.
>>
>> So, I am now thinking the following might work:
>>
>> - Make a rsync backup of the new disks to the external harddisk
>> ("BACKUP2").
>>
>> - Delete all files from the new disks except from /boot and /boot/efi.
>>
>> - Copy all files from all partitions except /boot and /boot/efi from
>> BACKUP1 to the new disks. In other words, everything except /boot and
>> /boot/efi will now be overwritten.
>>
>> - I would expect this system not to boot since both /etc/fstab and
>> /etc/crypttab on the new disks contain the UUIDs from the old system.
>>
>> - Copy just /etc/fstab and /etc/crypttab from BACKUP2 to the new disks.
>> This should update the new disks with the previously created UUIDs from
>> when doing the minimal installation of CentOS 7.
>>
>> What do you think?

There are caveats:
- The kernel versions must be the same in the old and new systems so 
that kernel images in /boot and kernel modules in /lib/modules match.
- Make sure that mdadm is installed in the old system. For now it is 
included in the initramfs generated by the new system but if mdadm is 
not installed in the old system, newer initramfs generated by the old 
system will fail to mount the root filesystem.
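
A quick way to check the latter on a dracut-based system such as 
CentOS 7 (paths are illustrative):

# is mdadm packaged into the currently installed initramfs?
lsinitrd /boot/initramfs-$(uname -r).img | grep -i mdadm
# and is the mdadm package installed at all?
rpm -q mdadm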

> I am happy to share that my plan as outlined below worked. I now have
> /boot, /boot/efi and / on separate RAID partitions with the latter
> managed by LVM and encrypted.  All data from the old disk is now on
> the new setup and everything seems to be working.
> 
> However, going back to the issue of /boot/efi possibly not being
> duplicated by CentOS, would not mdadm take care of that automatically?
> How can I check?

Is /boot/efi really on RAID ? You can check with "lsblk".
If it is, can you post the output of

cat /proc/mdstat
fdisk -l
blkid
efibootmgr -v

PS: Your lines are too long.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 22:56                               ` Reindl Harald
@ 2023-01-22  9:04                                 ` Pascal Hambourg
  2023-01-22 11:25                                   ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-22  9:04 UTC (permalink / raw)
  To: Reindl Harald, Wols Lists, Linux RAID Mailing List

On 21/01/2023 at 23:56, Reindl Harald wrote:
> On 21.01.23 at 23:43, Pascal Hambourg wrote:
>>
>> RAID provides redundancy while the system is running 
> 
> RAID provides redundancy for devices - no matter whether anything is running

RAID alone does not provide boot redundancy.
If a drive fails, the system will continue to run until shutdown. But it 
won't boot if the boot area was only set up on the failed disk.

>> For boot redundancy, the boot loader must be installed on each disk
> 
> so what

So that the system can still boot no matter which drive fails.

>> With EFI boot, it means there must be an EFI partition on each disk. 
>> But EFI partitions do not need to be identical as long as any of them 
>> can boot the system
> 
> so what - the topic is "Transferring an existing system from non-RAID 
> disks to RAID1 disks in the same computer"

People who install the system on RAID usually expect boot redundancy.

(trim off-topic rant)

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-21 23:02                                   ` Reindl Harald
@ 2023-01-22  9:22                                     ` Pascal Hambourg
  2023-01-22 11:29                                       ` Reindl Harald
  0 siblings, 1 reply; 49+ messages in thread
From: Pascal Hambourg @ 2023-01-22  9:22 UTC (permalink / raw)
  To: Reindl Harald, Linux RAID Mailing List

On 22/01/2023 at 00:02, Reindl Harald wrote:
> On 21.01.23 at 23:56, Pascal Hambourg wrote:
>> On 21/01/2023 at 21:44, Reindl Harald wrote:
>>> On 21.01.23 at 21:04, Pascal Hambourg wrote:
>>>> On 21/01/2023 at 19:57, Reindl Harald wrote:
>>>>> On 21.01.23 at 19:52, Pascal Hambourg wrote:
>>>>
>>>> No, EFI is not the root cause either. The root cause is carelessly 
>>>> storing stuff in the bootloader area as if it was part of the 
>>>> standard Linux filesystem. Guess what ? It is not. 
>>>
>>> LSB is dead
>>
>> LSB has nothing to do with this.
>
> fine, so nobody else but you knows what "part of standard Linux
> filesystem" means

I mean
- POSIX-compliant filesystem. A Linux operating system usually expects 
features such as case sensitivity, ownership and permissions, hard 
links and symlinks, special files... which are not supported by an EFI 
partition FAT filesystem.
- A standard Linux system can be installed on top of any block device 
layer supported by the kernel+initramfs and the boot loader (RAID, LVM, 
LUKS...). An EFI partition cannot because these are not supported by 
UEFI firmware.

>>> initramfs execution is completely irrelevant for the topic
>>
>> Why then did you agree that it was a fine place to sync EFI partitions ?
> 
> i did not

You did in <acc6add5-347b-7ecb-f6e9-056d21783984@thelounge.net>.

>> I'm not talking about that. I'm talking about other boot loader 
>> updates that may happen independently of kernel updates
> 
> which is the reason why multi-boot is dead

I am not talking about multi-boot. I am talking about boot loader 
components such as GRUB, shim... They get updates too, and then all EFI 
partitions should be updated.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-22  9:04                                 ` Pascal Hambourg
@ 2023-01-22 11:25                                   ` Reindl Harald
  0 siblings, 0 replies; 49+ messages in thread
From: Reindl Harald @ 2023-01-22 11:25 UTC (permalink / raw)
  To: Pascal Hambourg, Wols Lists, Linux RAID Mailing List



On 22.01.23 at 10:04, Pascal Hambourg wrote:
> On 21/01/2023 at 23:56, Reindl Harald wrote:
>> On 21.01.23 at 23:43, Pascal Hambourg wrote:
>>>
>>> RAID provides redundancy while the system is running 
>>
>> RAID provides redundancy for devices - no matter whether anything is running
> 
> RAID alone does not provide boot redundancy.
> If a drive fails, the system will continue to run until shutdown. But it 
> won't boot if the boot area was only set up on the failed disk

what's the point of such nonsense comments?

yes, if i am an idiot and don't run grub2-install on all disks it's not 
there - but as said dozens of times: we all know this and it only has to 
be done ONCE per drive

after that, RAID ensures (and ensured before UEFI) that /boot and 
everything in that filesystem is redundant
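
for reference, a sketch of that one-time step (device names are 
illustrative):

# BIOS/MBR boot: put the boot code on every member drive once
grub2-install /dev/sda
grub2-install /dev/sdb
# UEFI boot: instead run it once per mounted ESP, e.g.
grub2-install --target=x86_64-efi --efi-directory=/boot/efi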

>>> For boot redundancy, the boot loader must be installed on each disk
>>
>> so what
> 
> So that the system can still boot no matter which drive fails.

"so what" meant "tell me something we didn't know before you where born"

>>> With EFI boot, it means there must be an EFI partition on each disk. 
>>> But EFI partitions do not need to be identical as long as any of them 
>>> can boot the system
>>
>> so what - the topic is "Transferring an existing system from non-RAID 
>> disks to RAID1 disks in the same computer"
> 
> People who install the system on RAID usually expect boot redundancy

and i explained multiple times how to get that with ESP

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-22  9:22                                     ` Pascal Hambourg
@ 2023-01-22 11:29                                       ` Reindl Harald
  0 siblings, 0 replies; 49+ messages in thread
From: Reindl Harald @ 2023-01-22 11:29 UTC (permalink / raw)
  To: Pascal Hambourg, Linux RAID Mailing List



On 22.01.23 at 10:22, Pascal Hambourg wrote:
> On 22/01/2023 at 00:02, Reindl Harald wrote:
>> On 21.01.23 at 23:56, Pascal Hambourg wrote:
>>> On 21/01/2023 at 21:44, Reindl Harald wrote:
>>>> On 21.01.23 at 21:04, Pascal Hambourg wrote:
>>>>> On 21/01/2023 at 19:57, Reindl Harald wrote:
>>>>>> On 21.01.23 at 19:52, Pascal Hambourg wrote:
>>>>>
>>>>> No, EFI is not the root cause either. The root cause is carelessly 
>>>>> storing stuff in the bootloader area as if it was part of the 
>>>>> standard Linux filesystem. Guess what ? It is not. 
>>>>
>>>> LSB is dead
>>>
>>> LSB has nothing to do with this.
>>
>> fine, so nobody else but you knows what "part of standard Linux
>> filesystem" means
> 
> I mean
> - POSIX-compliant filesystem.

then for the sake of god say what you mean MORON

> A Linux operating system usually expects 
> features such as case sensitivity, ownership and permissions, hard 
> links and symlinks, special files... which are not supported by an EFI 
> partition FAT filesystem.

so what (google what that means)

> - A standard Linux system can be installed on top of any block device 
> layer supported by the kernel+initramfs and the boot loader (RAID, LVM, 
> LUKS...). An EFI partition cannot because these are not supported by 
> UEFI firmware.

so what

>>>> initramfs execution is completely irrelevant for the topic
>>>
>>> Why then did you agree that it was a fine place to sync EFI partitions ?
>>
>> i did not
> 
> You did in <acc6add5-347b-7ecb-f6e9-056d21783984@thelounge.net>.

i meant to agree with only half of the sentence and told you that already

>>> I'm not talking about that. I'm talking about other boot loader 
>>> updates that may happen independently of kernel updates
>>
>> which is the reason why multi-boot is dead
> 
> I am not talking about multi-boot. I am talking about boot loader 
> components such as GRUB, shim... They get updates too, and then all EFI 
> partitions should be updated

why do you then talk so much nonsense instead of saying what you mean?

that's a problem solved by rsyncing /efi/

can we stop running in circles like a monkey in this idiotic thread? i 
have simply had enough of people like you explaining to me the world 
which i discovered before you knew what a computer is

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-22  5:05   ` H
  2023-01-22  8:52     ` Pascal Hambourg
@ 2023-01-22 17:19     ` Wol
  2023-01-23  0:25       ` H
  2023-01-23  3:44     ` Brad Campbell
  2 siblings, 1 reply; 49+ messages in thread
From: Wol @ 2023-01-22 17:19 UTC (permalink / raw)
  To: H, Linux RAID Mailing List

On 22/01/2023 05:05, H wrote:
> However, going back to the issue of /boot/efi possibly not being duplicated by CentOS, would not mdadm take care of that automatically? How can I check?

mdadm/raid will take care of /boot/efi provided both (a) it's set up 
correctly, and (b) nothing outside of linux modifies it.

You can always run a raid integrity check (can't remember what it's 
called / the syntax) which will confirm they are identical.
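
For the record, it is the md "check" action, driven through sysfs; a 
sketch assuming the ESP mirror is md0:

echo check > /sys/block/md0/md/sync_action   # start the integrity check
cat /sys/block/md0/md/mismatch_cnt           # non-zero means the halves differ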

But if something *has* messed with the mirror outside of linux, the only 
way you can find out what happened is to mount the underlying partitions 
(for heaven's sake do that read only !!!) and compare them.
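
A sketch of that comparison, with illustrative member names:

mkdir -p /mnt/esp-a /mnt/esp-b
mount -o ro /dev/sda2 /mnt/esp-a    # raw ESP member on the first drive
mount -o ro /dev/sdb2 /mnt/esp-b    # raw ESP member on the second drive
diff -r /mnt/esp-a /mnt/esp-b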

A bit of suggested ?light reading for you - get your head round the 
difference between superblocks 1.0 and 1.2, understand how raid can 
mirror a fat partition and why that only works with 1.0, and then 
understand how you can mount the underlying efi fat partitions 
separately from the raided partition.
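
That is also why an ESP mirror is typically created with 1.0 metadata; 
a sketch with illustrative devices:

# 1.0 puts the superblock at the end of each member, so the firmware
# still sees a plain FAT filesystem at the start of the partition
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/sda2 /dev/sdb2
mkfs.vfat /dev/md0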

Read the raid wiki https://raid.wiki.kernel.org/index.php/Linux_Raid and 
try to get to grips with what is actually going on ...

Cheers,
Wol

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-22 17:19     ` Wol
@ 2023-01-23  0:25       ` H
  0 siblings, 0 replies; 49+ messages in thread
From: H @ 2023-01-23  0:25 UTC (permalink / raw)
  To: Wol, Linux RAID Mailing List

On January 22, 2023 12:19:36 PM EST, Wol <antlists@youngman.org.uk> wrote:
>On 22/01/2023 05:05, H wrote:
>> However, going back to the issue of /boot/efi possibly not being
>duplicated by CentOS, would not mdadm take care of that automatically?
>How can I check?
>
>mdadm/raid will take care of /boot/efi provided both (a) it's set up 
>correctly, and (b) nothing outside of linux modifies it.
>
>You can always run a raid integrity check (can't remember what it's 
>called / the syntax) which will confirm they are identical.
>
>But if something *has* messed with the mirror outside of linux, the
>only 
>way you can find out what happened is to mount the underlying
>partitions 
>(for heaven's sake do that read only !!!) and compare them.
>
>A bit of suggested ?light reading for you - get your head round the 
>difference between superblocks 1.0 and 1.2, understand how raid can 
>mirror a fat partition and why that only works with 1.0, and then 
>understand how you can mount the underlying efi fat partitions 
>separately from the raided partition.
>
>Read the raid wiki https://raid.wiki.kernel.org/index.php/Linux_Raid
>and 
>try to get to grips with what is actually going on ...
>
>Cheers,
>Wol

Good to know. Thank you.

By the way, I had not set the partition labels when I installed on the new disks and I see that they became localhost:boot etc.; all of the labels start with "localhost:".

Is there any reason I cannot simply use gparted in CentOS to rename them, i.e. removing the "localhost:" part while keeping the second part of each label? I understand that labels could have been used in fstab but I have not done that.

Is there any other place where they could potentially be used, or is the renaming above safe?

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Transferring an existing system from non-RAID disks to RAID1 disks in the same computer
  2023-01-22  5:05   ` H
  2023-01-22  8:52     ` Pascal Hambourg
  2023-01-22 17:19     ` Wol
@ 2023-01-23  3:44     ` Brad Campbell
  2 siblings, 0 replies; 49+ messages in thread
From: Brad Campbell @ 2023-01-23  3:44 UTC (permalink / raw)
  To: H, Linux RAID Mailing List

On 22/1/23 13:05, H wrote:

> I am happy to share that my plan as outlined below worked. I now have /boot, /boot/efi and / on separate RAID partitions with the latter managed by LVM and encrypted.  All data from the old disk is now on the new setup and everything seems to be working.
> 
> However, going back to the issue of /boot/efi possibly not being duplicated by CentOS, would not mdadm take care of that automatically? How can I check?
> 


Well, this one has sparked a bit of an argument.

I do the same. /boot and EFI are on RAID-1 0.90 partitions (ext4 and FAT32 respectively) and everything else is encrypted.

I use rEFInd as a bootloader. It *can* write to the EFI partition to record preferences and settings, which can put the RAID out of sync, but it's rare that I see that.
In practice this doesn't cause an issue and if the monthly raid check indicates a mismatch I just correct it with a "repair".
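
On md that is, assuming the ESP mirror is md0:

echo repair > /sys/block/md0/md/sync_action

which resyncs any mismatched blocks so both halves agree again.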

In my case the RAID for EFI is for availability.
If I drop a disk the system will still boot regardless of the settings in the partition, and as rEFInd reads its config file from /boot there is no need to write to EFI at all save for upgrading the bootloader.

Regards,
Brad

^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread

Thread overview: 49+ messages
2023-01-15  3:12 Transferring an existing system from non-RAID disks to RAID1 disks in the same computer H
2023-01-15  8:41 ` Wols Lists
2023-01-15  9:02   ` Reindl Harald
2023-01-15  9:20     ` Wols Lists
2023-01-15  9:39       ` Reindl Harald
2023-01-15 10:45         ` Wols Lists
2023-01-20  2:48       ` Phil Turmel
2023-01-15 11:31 ` Pascal Hambourg
2023-01-20  2:51   ` Phil Turmel
2023-01-20 19:27     ` Pascal Hambourg
2023-01-20 20:26       ` Wol
2023-01-20 21:01         ` Pascal Hambourg
2023-01-21  8:49           ` Wols Lists
2023-01-21 13:32             ` Pascal Hambourg
2023-01-21 14:04               ` Wols Lists
2023-01-21 14:33                 ` Pascal Hambourg
2023-01-21 15:21                   ` Wols Lists
2023-01-21 18:32                     ` Pascal Hambourg
2023-01-21 18:39                       ` Reindl Harald
2023-01-21 18:57                         ` Pascal Hambourg
2023-01-21 19:08                           ` Reindl Harald
2023-01-21 22:43                             ` Pascal Hambourg
2023-01-21 22:56                               ` Reindl Harald
2023-01-22  9:04                                 ` Pascal Hambourg
2023-01-22 11:25                                   ` Reindl Harald
2023-01-21 12:17           ` Reindl Harald
2023-01-21 14:15             ` Pascal Hambourg
2023-01-21 14:31               ` Reindl Harald
2023-01-21 14:38                 ` Pascal Hambourg
2023-01-21 14:52                   ` Reindl Harald
2023-01-21 15:17                     ` Pascal Hambourg
2023-01-21 16:24                       ` Reindl Harald
2023-01-21 18:52                         ` Pascal Hambourg
2023-01-21 18:57                           ` Reindl Harald
2023-01-21 20:04                             ` Pascal Hambourg
2023-01-21 20:44                               ` Reindl Harald
2023-01-21 22:56                                 ` Pascal Hambourg
2023-01-21 22:59                                   ` Reindl Harald
2023-01-21 23:04                                     ` Reindl Harald
2023-01-21 23:02                                   ` Reindl Harald
2023-01-22  9:22                                     ` Pascal Hambourg
2023-01-22 11:29                                       ` Reindl Harald
2023-01-21 21:20         ` Phil Turmel
2023-01-15 17:25 ` H
2023-01-22  5:05   ` H
2023-01-22  8:52     ` Pascal Hambourg
2023-01-22 17:19     ` Wol
2023-01-23  0:25       ` H
2023-01-23  3:44     ` Brad Campbell
