* Setup Recommendation on UEFI/GRUB/RAID1/LVM
@ 2020-04-14 13:02 Stefanie Leisestreichler
  2020-04-14 14:20 ` Wols Lists
                   ` (3 more replies)
  0 siblings, 4 replies; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-14 13:02 UTC (permalink / raw)
  To: linux-raid

Hi List.
I want to set up a new server. The data should be redundant, which is 
why I want to use RAID level 1 with 2 HDs of 1 TB each. As suggested in 
the wiki, I want the RAID to run on a partition of TOTAL_SIZE - 100M, to 
allow smoother replacement of an array disk in case of failure.

The firmware is UEFI; partitioning will be done using GPT/gdisk.

Boot manager should be GRUB (not legacy).

To be safe during system updates I want to use LVM snapshots. The idea 
is to take an LVM snapshot when the system comes up, do the system 
updates, perform the tests and then decide either to keep the updates or 
revert to the original state.

I have read that - when using UEFI - the EFI System Partition (ESP) has 
to reside in its own native partition, not in a RAID or LVM block device.

I also read a recommendation to put swap in a separate native partition 
and RAID it if you want to avoid a software crash when one disk fails.

I wonder how I should build this up. I thought I could create one 
partition of TOTAL_SIZE - 100M, type FD00, on each device, take these 
two (sda1 + sdb1) and build a RAID 1 array named md0. Next, make md0 the 
physical volume of my LVM (pvcreate /dev/md0) and after that add a 
volume group in which I put my logical volumes:
- swap - type EF00
- /boot - with filesystem fat32 for uefi
- /home - ext4
- /tmp - ext4
- / - ext4
- /var/lib/mysql - ext4 with special mount options
- /var/lib/vmimages - ext4 with special mount options

Is this doable, or will it not work because UEFI will not find my boot 
image, since in this config it is not sitting in its own native 
partition?

If it is not doable, do you see any suitable setup to achieve my goals? 
I do not want to use btrfs.

Thanks,
Steffi

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 13:02 Setup Recommendation on UEFI/GRUB/RAID1/LVM Stefanie Leisestreichler
@ 2020-04-14 14:20 ` Wols Lists
  2020-04-14 14:27   ` Reindl Harald
                     ` (3 more replies)
  2020-04-14 16:00 ` G
                   ` (2 subsequent siblings)
  3 siblings, 4 replies; 29+ messages in thread
From: Wols Lists @ 2020-04-14 14:20 UTC (permalink / raw)
  To: Stefanie Leisestreichler, linux-raid

On 14/04/20 14:02, Stefanie Leisestreichler wrote:
> Hi List.
> I want to set up a new server. Data should be redundant that is why I
> want to use RAID level 1 using 2 HDs, each having 1TB. Like suggested in
> the wiki, I want to have the RAID running on a partition which has
> TOTAL_SIZE - 100M allocated for smoother replacement of an array-disk in
> case of failure.
> 
> The firmware is UEFI, Partitioning will be made using GPT/gdisk.
> 
> Boot manager should be GRUB (not legacy).

Why? Why not EFI? That can boot linux just as easily as it can boot grub.
> 
> To be safe on system updates I want to use LVM snapshots. I like to make
> a LVM-based snapshot when the system comes up, do the system updates,
> perform the test and decide either to go with the updates made or revert
> to the original state.
> 
That's a good idea. Have you already got the 1TB disks, and how much
"growth room" does 1TB provide? Make sure you get drives that are
SCT/ERC-capable, and I'd certainly look at going larger. The more spare
space you have, the more snapshots you can keep, maybe keeping a backup
once a fortnight going back back back (or even daily :-)

> I have read that - when using UEFI - the EFI-System-Partition (ESP) has
> to reside in its own native partition, not in a RAID or LVM block device.
> 
EFI is the replacement for BIOS - in other words it's encoded in the
motherboard ROM and knows nothing about linux and linux-type file
systems. The only reason it knows about VFAT is that the EFI spec
demands it.

> Also I read a recommendation to put SWAP in a separate native partition
> and raid it in case you want to avoid a software crash when 1 disk fails.

That's true, but is it worth it? RAM is cheap, max out your motherboard,
and try and avoid falling into swap at all. I have one swap partition
per disk, but simply set them up as equal priority so the kernel does
its own raid-0 stripe across them. Yes a disk failure would kill any
apps swapped on to that disk, but my system rarely swaps...
> 
> I wonder, how I should build up this construct. I thought I could build
> one partition with TOTAL_SIZE - 100M, Type FD00, on each device, take
> these two (sda1 + sdb1) and build a RAID 1 array named md0. Next make
> md0 the physical volume of my LVM (pvcreate /dev/md0) and after that add
> a volume group in which I put my logical volumes:
> - swap - type EF00
> - /boot - with filesystem fat32 for uefi
> - /home - ext4
> - /tmp - ext4
> - / - ext4
> - /var/lib/mysql - ext4 with special mount options
> - /var/lib/vmimages - ext4 with special mount options
> 
> Is this doable or is it not working since UEFI will not find my
> boot image, because in this config it is not sitting in its own native
> partition?

/boot needs to be outside your linux setup, and I'd put swap outside
lvm/raid too
> 
> If it is not doable, do you see any suitable setup to achieve my goals?
> I do not want to use btrfs.
> 
okay. sda1 is vfat for EFI and is your /boot. configure sdb the same,
and you'll need to manually copy across every time you update (or make
it a v0.9/v1.0 raid array and only change it from inside linux - tricky)

sda2 - swap. I'd make its size equal to ram - and as I said, same on sdb
configured in linux as equal priority to give me a raid-0.

sda3 / sdb3 - the remaining space, less your 100M, raided together. You
then sit lvm on top in which you put your remaining volumes, /, /home,
/var/lib/mysql and /var/lib/images.

Again, personally, I'd make /tmp a tmpfs rather than a partition of its
own; the spec says that the system should *expect* /tmp to disappear at
any time and especially on reboot... while tmpfs defaults to half ram,
you can specify what size you want, and it'll make use of your swap space.
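
For what it's worth, a rough sketch of that sda3/sdb3 layer in shell -
the array, VG and LV names and the sizes are purely illustrative, not a
recipe:

    # mirror the two big partitions
    mdadm --create /dev/md/MAIN --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

    # LVM on top of the mirror
    pvcreate /dev/md/MAIN
    vgcreate vg0 /dev/md/MAIN
    lvcreate -L 50G  -n root     vg0
    lvcreate -L 200G -n home     vg0
    lvcreate -L 100G -n mysql    vg0
    lvcreate -L 300G -n vmimages vg0

    # /tmp as tmpfs with an explicit size cap, as an fstab line
    tmpfs   /tmp   tmpfs   defaults,size=8G,mode=1777   0 0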

> Thanks,
> Steffi
> 
Cheers,
Wol

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 14:20 ` Wols Lists
@ 2020-04-14 14:27   ` Reindl Harald
  2020-04-14 15:12   ` Stefanie Leisestreichler
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 29+ messages in thread
From: Reindl Harald @ 2020-04-14 14:27 UTC (permalink / raw)
  To: Wols Lists, Stefanie Leisestreichler, linux-raid



On 14.04.20 at 16:20, Wols Lists wrote:
> On 14/04/20 14:02, Stefanie Leisestreichler wrote:
>> Hi List.
>> I want to set up a new server. Data should be redundant that is why I
>> want to use RAID level 1 using 2 HDs, each having 1TB. Like suggested in
>> the wiki, I want to have the RAID running on a partition which has
>> TOTAL_SIZE - 100M allocated for smoother replacement of an array-disk in
>> case of failure.
>>
>> The firmware is UEFI, Partitioning will be made using GPT/gdisk.
>>
>> Boot manager should be GRUB (not legacy).
> 
> Why? Why not EFI? That can boot linux just as easily as it can boot grub

because people prefer to boot the old kernel when something goes wrong,
which is somewhat difficult when just using the UEFI stub

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 14:20 ` Wols Lists
  2020-04-14 14:27   ` Reindl Harald
@ 2020-04-14 15:12   ` Stefanie Leisestreichler
  2020-04-14 17:35     ` Wols Lists
  2020-04-16  1:50   ` David C. Rankin
  2020-04-16  1:58   ` David C. Rankin
  3 siblings, 1 reply; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-14 15:12 UTC (permalink / raw)
  To: Wols Lists, linux-raid


On 14.04.20 16:20, Wols Lists wrote:
> okay. sda1 is vfat for EFI and is your /boot. configure sdb the same,
> and you'll need to manually copy across every time you update (or make
> it a v0.9/v1.0 raid array and only change it from inside linux - tricky)
If I would like to stay with my initial thought and use GRUB, does this 
mean I have to have one native partition for the UEFI System Partition, 
formatted with vfat, on each disk? If that works and I create a RAID 
array (mdadm --create ... level=1 /dev/sda1 /dev/sdb1) from these two 
partitions, will I still need to cross-copy after a kernel update or not?

> 
> sda2 - swap. I'd make its size equal to ram - and as I said, same on sdb
> configured in linux as equal priority to give me a raid-0.
Thanks for this tip; I would prefer the swap and application safety that 
comes with RAID 1 in this case. Later I will try to optimize swappiness.

> 
> sda3 / sdb3 - the remaining space, less your 100M, raided together. You
> then sit lvm on top in which you put your remaining volumes, /, /home,
> /var/lib/mysql and /var/lib/images.
OK. Does this mean that I have to partition both drives first and 
after that create the RAID arrays, which will end up as /dev/md0 for the 
ESP (mount point /boot), /dev/md1 for swap and /dev/md2 for the rest?

What Partition Type do I have to use for /dev/sd[a|b]3? Will it be LVM 
or RAID?

> 
> Again, personally, I'd make /tmp a tmpfs rather than a partition of its
> own, the spec says that the system should *expect* /tmp to disappear at
> any time and especially on reboot... while tmpfs defaults to half ram,
> you can specify what size you want, and it'll make use of your swap space.
Agreed, no LV for /tmp.

Thanks,
Steffi

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 13:02 Setup Recommendation on UEFI/GRUB/RAID1/LVM Stefanie Leisestreichler
  2020-04-14 14:20 ` Wols Lists
@ 2020-04-14 16:00 ` G
  2020-04-14 16:14   ` Stefanie Leisestreichler
  2020-04-14 16:20   ` Reindl Harald
  2020-04-14 18:14 ` Phillip Susi
       [not found] ` <394a3255-251c-41d1-8a65-2451e5503ef9@teksavvy.com>
  3 siblings, 2 replies; 29+ messages in thread
From: G @ 2020-04-14 16:00 UTC (permalink / raw)
  To: Stefanie Leisestreichler, linux-raid

On 2020-04-14 7:02 a.m., Stefanie Leisestreichler wrote:
> Hi List.
> I want to set up a new server. Data should be redundant that is why I 
> want to use RAID level 1 using 2 HDs, each having 1TB. Like suggested 
> in the wiki, I want to have the RAID running on a partition which has 
> TOTAL_SIZE - 100M allocated for smoother replacement of an array-disk 
> in case of failure.
>
> The firmware is UEFI, Partitioning will be made using GPT/gdisk.
>
> Boot manager should be GRUB (not legacy).
>
> To be safe on system updates I want to use LVM snapshots. I like to 
> make a LVM-based snapshot when the system comes up, do the system 
> updates, perform the test and decide either to go with the updates 
> made or revert to the original state.
>
> I have read that - when using UEFI - the EFI-System-Partition (ESP) 
> has to reside in its own native partition, not in a RAID or LVM block 
> device.
>
> Also I read a recommendation to put SWAP in a separate native 
> partition and raid it in case you want to avoid a software crash when 
> 1 disk fails.
>
> I wonder, how I should build up this construct. I thought I could 
> build one partition with TOTAL_SIZE - 100M, Type FD00, on each device, 
> take these two (sda1 + sdb1) and build a RAID 1 array named md0. Next 
> make md0 the physical volume of my LVM (pvcreate /dev/md0) and after 
> that add a volume group in which I put my logical volumes:
> - swap - type EF00
> - /boot - with filesystem fat32 for uefi
> - /home - ext4
> - /tmp - ext4
> - / - ext4
> - /var/lib/mysql - ext4 with special mount options
> - /var/lib/vmimages - ext4 with special mount options
>
> Is this doable or is it not working since UEFI will not find my 
> boot image, because in this config it is not sitting in its own native 
> partition?
>
> If it is not doable, do you see any suitable setup to achieve my 
> goals? I do not want to use btrfs.
>
> Thanks,
> Steffi
>
>
Hi Stefanie

I don't quite understand your partitioning requirements; 100M of raid 
for what? Is the remaining larger partition on each disk just LVM'd 
without redundancy? If so, 100M is probably the minimum for a 'boot' 
partition, with the OS residing on the non-redundant lvm.

Since you are running disks less than 2TB I would suggest a more 
rudimentary setup using legacy bios booting. This setup will not allow 
disks greater than 2TB because they would not be partitioned GPT. There 
would still be an ability to increase total storage using more disks. 
There would be raid redundancy with the ability for grub to boot off 
either disk.

Two identical partitions on each disk using mbr partition tables.

200-250M 'boot' partition raid1 metadata 0.9 formatted ext2 (holds linux 
images)

remainder 'root' (and data and possibly 'swapfile' (with ability to 
shrink and grow to a max size)) partition raid1 metadata 1.2 formatted 
ext4 (for mirrored redundancy or no raid for non-raid BOD)

Create the raids and filesystems before install, from the command line 
of a live cd. Use the -m 1 option in mkfs for 1% reserved space instead 
of the default 5%.
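
Roughly along these lines (device names and sizes are only examples):

    # 'boot' mirror, 0.90 metadata so the boot loader sees a plain filesystem
    mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext2 /dev/md0

    # 'root' mirror, default 1.2 metadata, 1% reserved blocks instead of 5%
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 -m 1 /dev/md1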

Of course there could be different partitioning layouts (eg separate 
'home partition')

Install OS using manual partitioner to be able to use partitions as 
required. Install grub to both disks.

Hope this may help

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 16:00 ` G
@ 2020-04-14 16:14   ` Stefanie Leisestreichler
  2020-04-17 23:10     ` Nix
  2020-04-14 16:20   ` Reindl Harald
  1 sibling, 1 reply; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-14 16:14 UTC (permalink / raw)
  To: G, linux-raid



On 14.04.20 18:00, G wrote:
> Since you are running disks less than 2TB I would suggest a more 
> rudimentary setup using legacy bios booting. This setup will not allow 
> disks greater than 2TB because they would not be partitioned GPT. There 
> would still be an ability to increase total storage using more disks. 
> There would be raid redundancy with the ability for grub to boot off 
> either disk.
> 
> Two identical partitions on each disk using mbr partition tables.

Thank you for your thoughts and input.
I would prefer using uefi/gpt over a legacy setup, even though you are 
totally right about the 2TB limit.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 16:00 ` G
  2020-04-14 16:14   ` Stefanie Leisestreichler
@ 2020-04-14 16:20   ` Reindl Harald
  2020-04-14 16:41     ` Stefanie Leisestreichler
  2020-04-16 17:00     ` G
  1 sibling, 2 replies; 29+ messages in thread
From: Reindl Harald @ 2020-04-14 16:20 UTC (permalink / raw)
  To: G, Stefanie Leisestreichler, linux-raid



On 14.04.20 at 18:00, G wrote:
> Since you are running disks less than 2TB I would suggest a more
> rudimentary setup using legacy bios booting. This setup will not allow
> disks greater than 2TB because they would not be partitioned GPT. There
> would still be an ability to increase total storage using more disks.
> There would be raid redundancy with the ability for grub to boot off
> either disk.

terrible idea in 2020

Intel announced years ago that they intend to remove legacy bios booting
in 2020, and even if it's just in 2022: RAID machines are supposed to
live many many years and you just move the disks to your next machine
and boot as yesterday

i argued like you in 2011 and now i have a 4x2 TB RAID10 and need to
deal with an EFI partition on a USB stick with the replacement as soon as
there are Ice Lake or newer serious desktop machines, because there is no
single reason to reinstall from scratch when you have already survived 17
fedora dist-upgrades

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 16:20   ` Reindl Harald
@ 2020-04-14 16:41     ` Stefanie Leisestreichler
  2020-04-16 17:00     ` G
  1 sibling, 0 replies; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-14 16:41 UTC (permalink / raw)
  To: Reindl Harald, G, linux-raid



On 14.04.20 18:20, Reindl Harald wrote:
> RAID machines are supposed to
> live many many years and you just move the disks to your next machine
> and boot as yesterday

This is exactly the idea behind that setup. My OS will be Arch Linux, 
which is a rolling distribution. I have some years of very good 
experience with it on my development machine and I love having the 
newest supported software/kernel/libs it provides.

For those very rare cases in which a system update somehow breaks 
something, I need the LVM snapshots to get me running again pretty soon. 
I bought duplicate hardware (motherboard/processor) and will change 
disks after 5 years at the latest. The system is supposed to run for 
10+ years and will host some KVM-based virtual machines (one of them 
is from 2006 :-) running Ubuntu LTS, totally outdated but always there 
when needed, probably for the next 10 years, LOL).

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 15:12   ` Stefanie Leisestreichler
@ 2020-04-14 17:35     ` Wols Lists
  2020-04-14 19:13       ` Stefanie Leisestreichler
  2020-04-17 23:06       ` Nix
  0 siblings, 2 replies; 29+ messages in thread
From: Wols Lists @ 2020-04-14 17:35 UTC (permalink / raw)
  To: Stefanie Leisestreichler, linux-raid

On 14/04/20 16:12, Stefanie Leisestreichler wrote:
> 
> On 14.04.20 16:20, Wols Lists wrote:
>> okay. sda1 is vfat for EFI and is your /boot. configure sdb the same,
>> and you'll need to manually copy across every time you update (or make
>> it a v0.9/v1.0 raid array and only change it from inside linux - tricky)

> If I would like to stay with my initial thought and use GRUB, does this
> mean, I have to have one native partition for the UEFI System Partition
> formatted with vfat on each disk? If this works and I will create a raid
> array (mdadm --create ... level=1 /dev/sda1 /dev/sdb1) from these 2
> partitions, will I still have the need to cross copy after a kernel
> update or not?
> 
Everything else is mirrored - you should mirror your boot setup ... you
don't want disk 0 to die, and although (almost) everything is there on
disk 1 you can't boot the system because grub/efi/whatever isn't there...

The crucial question is whether your updates to your efi partition
happen under the control of linux, or under the control of efi. If they
happen at the linux level, then they will happen to both disks together.
If they happen at the efi level, then they will only happen to disk 0,
and you will need to re-sync the mirror.
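
The re-sync itself can be as dumb as an rsync between the two ESPs,
assuming you keep both mounted (the mount points here are invented):

    rsync -a --delete /boot/efi/ /boot/efi2/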
>>
>> sda2 - swap. I'd make its size equal to ram - and as I said, same on sdb
>> configured in linux as equal priority to give me a raid-0.

> Thanks for this tip, I would prefer swap and application safety which
> comes with raid1 in this case. Later I will try to optimize swappiness.
> 
I prefer swap to be at least twice ram. A lot of people think I'm daft
for that, but it always used to be the rule that things were better that way.
It's been pointed out to me that this can be used as a denial of service
(a fork bomb, for example, will crucify your system until the OOM killer
takes it out, which will take a LOOONNG time with gigs of VM). Horses
for courses.
>>
>> sda3 / sdb3 - the remaining space, less your 100M, raided together. You
>> then sit lvm on top in which you put your remaining volumes, /, /home,
>> /var/lib/mysql and /var/lib/images.

> OK. Does this mean that I have to partition my both drives first and
> after that create the raid arrays, which will end in /dev/md0 for ESP
> (mount point /boot), /dev/md1 (swap), /dev/md2 for the rest?

Yup. Apart from the fact that they will probably be 126, 125 and 124 not
0, 1, 2. And if I were you I'd name them, for example EFI, SWAP, MAIN or
LVM.
> 
> What Partition Type do I have to use for /dev/sd[a|b]3? Will it be LVM
> or RAID?
> 
I'd just use type linux ...
>>
>> Again, personally, I'd make /tmp a tmpfs rather than a partition of its
>> own, the spec says that the system should *expect* /tmp to disappear at
>> any time and especially on reboot... while tmpfs defaults to half ram,
>> you can specify what size you want, and it'll make use of your swap
>> space.
> Agreed, no LV for /tmp.
> 
Sounds like you probably know this, but remember that /tmp and /var/tmp
are different - DON'T make /var/tmp a tmpfs, and use a cron job to clean
that - I made that mistake early on ... :-)
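
Such a clean-up job can be as simple as something like this (the 30-day
cutoff is arbitrary):

    # /etc/crontab: purge files in /var/tmp untouched for 30 days, daily at 04:00
    0 4 * * * root find /var/tmp -xdev -type f -atime +30 -delete

    # or, on systemd systems, a tmpfiles.d rule, e.g. /etc/tmpfiles.d/var-tmp.conf:
    d /var/tmp 1777 root root 30d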

You found the kernel raid wiki, did you? You've read
https://raid.wiki.kernel.org/index.php/Setting_up_a_%28new%29_system and
the pages round it? It's not meant to be definitive, but it gives you a
lot of ideas. In particular, dm-integrity. I intend to play with that a
lot as soon as I can get my new system up and running, when I'll
relegate the old system to a test-bed.

Cheers,
Wol

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 13:02 Setup Recommendation on UEFI/GRUB/RAID1/LVM Stefanie Leisestreichler
  2020-04-14 14:20 ` Wols Lists
  2020-04-14 16:00 ` G
@ 2020-04-14 18:14 ` Phillip Susi
  2020-04-14 19:00   ` Stefanie Leisestreichler
       [not found] ` <394a3255-251c-41d1-8a65-2451e5503ef9@teksavvy.com>
  3 siblings, 1 reply; 29+ messages in thread
From: Phillip Susi @ 2020-04-14 18:14 UTC (permalink / raw)
  To: Stefanie Leisestreichler; +Cc: linux-raid


Stefanie Leisestreichler writes:

> To be safe on system updates I want to use LVM snapshots. I like to make 
> a LVM-based snapshot when the system comes up, do the system updates, 
> perform the test and decide either to go with the updates made or revert 
> to the original state.

Traditional LVM snapshots were not suitable for keeping multiple,
long-lived snapshots around.  They were really only for temporary use,
such as taking a snapshot to dump from without having to shut down
services.  I seem to remember they were developing a new multi-snapshot
dm backend that would address some of these shortcomings, but I can't
find anything about that now in the google machine.

> I have read that - when using UEFI - the EFI-System-Partition (ESP) has 
> to reside in its own native partition, not in a RAID or LVM block device.

Correct.

> I wonder, how I should build up this construct. I thought I could build 
> one partition with TOTAL_SIZE - 100M, Type FD00, on each device, take 
> these two (sda1 + sdb1) and build a RAID 1 array named md0. Next make 
> md0 the physical volume of my LVM (pvcreate /dev/md0) and after that add 
> a volume group in which I put my logical volumes:
> - swap - type EF00

EF00 is a GPT partition type.  Since this is a logical volume rather
than a GPT partition, there is no EF00.

> - /boot - with filesystem fat32 for uefi

Like you mentioned above, this needs to be a native partition, not a
logical volume.  Actually you only need /boot/EFI in a native fat32
partition, not all of /boot.
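
In practice that usually just means mounting the ESP below /boot, e.g.
something like this in fstab (device names invented; /boot itself can
then live on RAID/LVM):

    /dev/vg0/root   /          ext4   defaults     0 1
    /dev/sda1       /boot/efi  vfat   umask=0077   0 2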

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 18:14 ` Phillip Susi
@ 2020-04-14 19:00   ` Stefanie Leisestreichler
  2020-04-14 20:05     ` antlists
  0 siblings, 1 reply; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-14 19:00 UTC (permalink / raw)
  To: Phillip Susi; +Cc: linux-raid



On 14.04.20 20:14, Phillip Susi wrote:
> 
> Stefanie Leisestreichler writes:
> 
>> To be safe on system updates I want to use LVM snapshots. I like to make
>> a LVM-based snapshot when the system comes up, do the system updates,
>> perform the test and decide either to go with the updates made or revert
>> to the original state.
> 
> Traditional LVM snapshots were not suitable for keeping multiple, long
> lived snapshots around.  They were really only for temporary use, such
> as taking a snapshot to do a dump of without having to shut down
> services.  I seem to remember they were developing a new multi snapshot
> dm backend that would address some of these shortcomings, but I can't
> find anything about that now in the google machine.
> 
I could imagine living with this limitation since I
- do not expect to have to revert to the original state at all
- will make a decision about deleting/merging the snapshot pretty soon 
after the tests.

>> I have read that - when using UEFI - the EFI-System-Partition (ESP) has
>> to reside in its own native partition, not in a RAID or LVM block device.
> 
> Correct.
This is exactly the point which I do not understand. So it is implicitly 
saying that it makes no sense to RAID the two EFI System Partitions (sda1 
+ sdb1), e.g. as /dev/md124, since GRUB cannot write to a RAID device and 
instead uses /dev/sda1, and no automatic sync will happen?

If that is true, I wonder how to set up a system using RAID 1 where you 
can - frankly speaking - remove one or the other disk and boot from it :-(

> 
>> I wonder, how I should build up this construct. I thought I could build
>> one partition with TOTAL_SIZE - 100M, Type FD00, on each device, take
>> these two (sda1 + sdb1) and build a RAID 1 array named md0. Next make
>> md0 the physical volume of my LVM (pvcreate /dev/md0) and after that add
>> a volume group in which I put my logical volumes:
>> - swap - type EF00
I was wrong with that, swap will be type 8200.


>> - /boot - with filesystem fat32 for uefi
> 
> Like you mentioned above, this needs to be a native partition, not a
> logical volume.  Actually you only need /boot/EFI in a native fat32
> partition, not all of /boot.
> 
> 
OK, got it and will do so.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 17:35     ` Wols Lists
@ 2020-04-14 19:13       ` Stefanie Leisestreichler
  2020-04-14 19:36         ` Wols Lists
  2020-04-17 23:06       ` Nix
  1 sibling, 1 reply; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-14 19:13 UTC (permalink / raw)
  To: Wols Lists, linux-raid



On 14.04.20 19:35, Wols Lists wrote:
> I prefer swap to be at least twice ram. A lot of people think I'm daft
> for that, but it always used to be the rule things were better that way.
Twice RAM means 32G here, no prob, will do so. Not afraid of DoS attacks 
either - this machine operates in stealth mode and has the cleanest 
logs one could have seen so far :-)

>> OK. Does this mean that I have to partition my both drives first and
>> after that create the raid arrays, which will end in /dev/md0 for ESP
>> (mount point /boot), /dev/md1 (swap), /dev/md2 for the rest?
> Yup. Apart from the fact that they will probably be 126, 125 and 124 not
> 0, 1, 2. And if I were you I'd name them, for example EFI, SWAP, MAIN or
> LVM.
I will do so, thanks for the hint.

>> What Partition Type do I have to use for /dev/sd[a|b]3? Will it be LVM
>> or RAID?
>>
> I'd just use type linux ...
Could you please be a little more concrete on this one?

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 19:13       ` Stefanie Leisestreichler
@ 2020-04-14 19:36         ` Wols Lists
  0 siblings, 0 replies; 29+ messages in thread
From: Wols Lists @ 2020-04-14 19:36 UTC (permalink / raw)
  To: Stefanie Leisestreichler, linux-raid

On 14/04/20 20:13, Stefanie Leisestreichler wrote:
>> I'd just use type linux ...

> Could you please be a little more concrete on this one?

Under the old scheme, linux partitions were type 83. That makes it 8300
on gpt. Likewise 82 is swap (8200 on gpt).

At the end of the day, linux doesn't care much. So why should you? In
the old days you needed the raid partition type to tell the kernel to
auto-assemble the array, but you shouldn't be using that nowadays.
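
With gdisk/sgdisk that's just the type code, e.g. (partition numbers are
only an example):

    sgdisk -t 1:ef00 /dev/sda   # EFI system partition
    sgdisk -t 2:8200 /dev/sda   # Linux swap
    sgdisk -t 3:8300 /dev/sda   # Linux filesystem (fd00 would mark it "Linux RAID")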

Cheers,
Wol

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 19:00   ` Stefanie Leisestreichler
@ 2020-04-14 20:05     ` antlists
  2020-04-17 23:24       ` Nix
  0 siblings, 1 reply; 29+ messages in thread
From: antlists @ 2020-04-14 20:05 UTC (permalink / raw)
  To: Stefanie Leisestreichler; +Cc: linux-raid

On 14/04/2020 20:00, Stefanie Leisestreichler wrote:
>>> I have read that - when using UEFI - the EFI-System-Partition (ESP) has
>>> to reside in a own native partition, not in a RAID nor LVM block device.
>>
>> Correct.

> This is exactly the point which I do not understand. So it is implicitly 
> saying that it makes no sense to raid the 2 EFI-System-Partitions (sda1 
> + sdb1), i.e. as /dev/md124, as GRUB can not write to a RAID device and 
> instead uses /dev/sda1 and no automatic sync will happen?
> 
> If that is true, I wonder how to setup a system using RAID 1 where you 
> can - frankly spoken - remove one or the other disk and boot it :-(

Because you haven't got to grips with how EFI boots a computer.

The CPU starts, jumps into the mobo rom, and then looks for an EFI 
partition on the disk. Because it understands VFAT, it then reads the 
EFI boot loader from the partition.

I don't quite get this myself, but this is where it gets confusing. EFI 
I think offers you a choice of bootloaders, which CAN include grub, 
which then offers you a choice of operating systems. But EFI can offer 
you a list of operating systems, too, which is why I asked why you 
wanted to use grub?

The whole point here is nothing to do with "grub can or can't write to a 
raid" - grub doesn't write to the disk! The point is that EFI is active 
BEFORE grub or linux even get a look-in. So if EFI writes to the efi 
partition, then that's before grub or linux even get a chance to do 
anything!

A lot of EFI is managed from the operating system. So if you do a v0.9 
or v1.0 raid-1, anything you do from within linux will be mirrored, and 
there's no problem. The problem arises if anything modifies the efi 
partition outside of linux. And because the whole 
point of efi is to be there before linux even starts, then it's 
*probable* that something will come along and do just that!

Cheers,
Wol

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
       [not found] ` <394a3255-251c-41d1-8a65-2451e5503ef9@teksavvy.com>
@ 2020-04-15 15:53   ` Stefanie Leisestreichler
  2020-04-15 16:10     ` Stefanie Leisestreichler
  0 siblings, 1 reply; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-15 15:53 UTC (permalink / raw)
  To: J. Brian Kelley, linux-raid



On 15.04.20 17:02, J. Brian Kelley wrote:
> As for the final heresy, BTRFS, really not the time or place...
> 
>    - But it does provide a 'swap FILE' capability (more memory is still 
> preferable).
>    - RAID1 is not affected by the "write-hole".
>    - extremely efficient snapshots
>    - everything in the file system (no LVM, no MDADM)
> 
> The only concern is that you MUST instruct BTRFS to NOT use the entire 
> available partition (btrfs shrink) to allow leeway for a resize.

I was thinking about using btrfs too, but I am afraid of losing data 
and making irreversible mistakes which could cause data loss/fragmentation 
(for databases and kvm images, for example) and so on.

I have found a lot of relevant information on the net about how to use 
it and how to do this and that, but not the one place where all the 
dos and don'ts are collected to start with :-(

Your hint about btrfs is new to me too, even though I read the docs in 
their wiki before. I bet I will find myself in a situation/environment 
which will be hard to administer because of a gap in information I 
overlooked when I took an action which I shouldn't have.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-15 15:53   ` Stefanie Leisestreichler
@ 2020-04-15 16:10     ` Stefanie Leisestreichler
  0 siblings, 0 replies; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-15 16:10 UTC (permalink / raw)
  To: J. Brian Kelley, linux-raid



On 15.04.20 17:53, Stefanie Leisestreichler wrote:
> Your hint about btrfs is new to me too, even though I read the docs in 
> their wiki before. I bet I will find myself in a situation/environment 
> which will be hard to administer because of a gap in information I 
> overlooked when I took an action which I shouldn't have.

For example, found this right now in the arch forums:
https://bbs.archlinux.org/viewtopic.php?id=246390
I don't know if this is still the case but you'd probably need battery 
backup for your system. I've had cases where a power outage borks btrfs 
irrecoverably.

Post is from 2019...

Really don't know if I should cross this road...

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 14:20 ` Wols Lists
  2020-04-14 14:27   ` Reindl Harald
  2020-04-14 15:12   ` Stefanie Leisestreichler
@ 2020-04-16  1:50   ` David C. Rankin
  2020-04-16 11:16     ` Wols Lists
  2020-04-16  1:58   ` David C. Rankin
  3 siblings, 1 reply; 29+ messages in thread
From: David C. Rankin @ 2020-04-16  1:50 UTC (permalink / raw)
  To: mdraid

On 04/14/2020 09:20 AM, Wols Lists wrote:
> That's true, but is it worth it? RAM is cheap, max out your motherboard,
> and try and avoid falling into swap at all. I have one swap partition
> per disk, but simply set them up as equal priority so the kernel does
> its own raid-0 stripe across them. Yes a disk failure would kill any
> apps swapped on to that disk, but my system rarely swaps...

That's a neat approach; I do just the opposite and use RAID1 partitions
for swap (though I rarely swap as well). I've never had an issue restarting
after a failure (or after doing something dumb like hitting the wrong button on
the UPS).

Just so I'm understanding: you simply create a swap partition on each disk, not
part of any array, swapon both, and let the kernel decide?


-- 
David C. Rankin, J.D.,P.E.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 14:20 ` Wols Lists
                     ` (2 preceding siblings ...)
  2020-04-16  1:50   ` David C. Rankin
@ 2020-04-16  1:58   ` David C. Rankin
  3 siblings, 0 replies; 29+ messages in thread
From: David C. Rankin @ 2020-04-16  1:58 UTC (permalink / raw)
  To: mdraid

On 04/14/2020 09:20 AM, Wols Lists wrote:
> Again, personally, I'd make /tmp a tmpfs rather than a partition of its
> own, the spec says that the system should *expect* /tmp to disappear at
> any time and especially on reboot... while tmpfs defaults to half ram,
> you can specify what size you want, and it'll make use of your swap space.

Agreed, but keep in mind that what is written to /tmp on tmpfs will be stored
in RAM (reducing your available RAM by that amount).

This only becomes an issue if you are running something that makes heavy use
of /tmp (some databases can use huge amounts of temporary storage for queries,
joins, etc.), or if you are a budding programmer who routinely (or accidentally)
writes very large files to /tmp. If you fall into this category, you may want
to arrange for large temporary files to be created elsewhere.

I put /tmp on tmpfs for all normal installations and haven't had an issue.

-- 
David C. Rankin, J.D.,P.E.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-16  1:50   ` David C. Rankin
@ 2020-04-16 11:16     ` Wols Lists
  0 siblings, 0 replies; 29+ messages in thread
From: Wols Lists @ 2020-04-16 11:16 UTC (permalink / raw)
  To: David C. Rankin, mdraid

On 16/04/20 02:50, David C. Rankin wrote:
> On 04/14/2020 09:20 AM, Wols Lists wrote:
>> That's true, but is it worth it? RAM is cheap, max out your motherboard,
>> and try and avoid falling into swap at all. I have one swap partition
>> per disk, but simply set them up as equal priority so the kernel does
>> its own raid-0 stripe across them. Yes a disk failure would kill any
>> apps swapped on to that disk, but my system rarely swaps...
> 
> That's a neat approach, I do it just the opposite and care RAID1 partitions
> for swap (though I rarely swap as well). I've never had an issue restarting
> after a failure (or me doing something dumb like hitting the wrong button on
> the UPS)
> 
> So that I'm understanding, you simply create a swap partition on each disk not
> part of an array, swapon both and let the kernel decide?
> 
Don't forget the mount option "pri=1" ... the following are the relevant
lines from my fstab ...

# /dev/sda2             none            swap            sw,pri=1        0 0
UUID=184b6322-eb42-405e-b958-e55014ce3691       none    swap    sw,pri=1
       0 0
# /dev/sdb2             none            swap            sw,pri=1        0 0
UUID=48913c8d-e4c4-482d-9733-e44be3889f07       none    swap    sw,pri=1
       0 0
# /dev/sdc2             none            swap            sw,pri=1        0 0

If you don't use the priority option, each partition is allocated a
different priority and it fills them up one after the other (raid
linear). Make them all equal priority, and it'll stripe them.
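
You can check what you ended up with via something like:

    swapon --show    # or: cat /proc/swaps - both list the priority column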

Cheers,
Wol

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 16:20   ` Reindl Harald
  2020-04-14 16:41     ` Stefanie Leisestreichler
@ 2020-04-16 17:00     ` G
  1 sibling, 0 replies; 29+ messages in thread
From: G @ 2020-04-16 17:00 UTC (permalink / raw)
  To: Reindl Harald, Stefanie Leisestreichler, linux-raid

On 2020-04-14 10:20 a.m., Reindl Harald wrote:
>
> On 14.04.20 at 18:00, G wrote:
>> Since you are running disks less than 2TB I would suggest a more
>> rudimentary setup using legacy bios booting. This setup will not allow
>> disks greater than 2TB because they would not be partitioned GPT. There
>> would still be an ability to increase total storage using more disks.
>> There would be raid redundancy with the ability for grub to boot off
>> either disk.
> terrible idea in 2020
>
> Intel announced years ago that they intend to remove legacy bios booting
> in 2020 and even if it's just in 2022: RAID machines are supposed to
> live many many years and you just move the disks to your next machine
> and boot as yesterday
>
> i argued like you in 2011 and now i have a 4x2 TB RAID10 and need to
> deal with EFI-partition on a USB stick with the replacement as soon as
> there are Ice Lake or newer serious desktop machines because there is no
> single reason to reinstall from scratch when you survived already 17
> fedora dist-upgrades
>
I guess my assumption was that, in using 2TB disks, this was or would be 
a relatively short-lived/limited situation, possibly using an older 
motherboard. ;-)

Since the setup will be grub/UEFI on the 2TB disks, what would be the 
process for upgrading to larger disks?

Thanks

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 17:35     ` Wols Lists
  2020-04-14 19:13       ` Stefanie Leisestreichler
@ 2020-04-17 23:06       ` Nix
  2020-04-20  9:23         ` Stefanie Leisestreichler
  2020-04-20  9:45         ` Stefanie Leisestreichler
  1 sibling, 2 replies; 29+ messages in thread
From: Nix @ 2020-04-17 23:06 UTC (permalink / raw)
  To: Wols Lists; +Cc: Stefanie Leisestreichler, linux-raid

On 14 Apr 2020, Wols Lists said:

> On 14/04/20 16:12, Stefanie Leisestreichler wrote:
>> 
>> On 14.04.20 16:20, Wols Lists wrote:
>>> okay. sda1 is vfat for EFI and is your /boot. configure sdb the same,
>>> and you'll need to manually copy across every time you update (or make
>>> it a v0.9/v1.0 raid array and only change it from inside linux - tricky)
>
>> If I would like to stay with my initial thought and use GRUB, does this
>> mean, I have to have one native partition for the UEFI System Partition
>> formatted with vfat on each disk? If this works and I will create a raid
>> array (mdadm --create ... level=1 /dev/sda1 /dev/sdb1) from these 2
>> partitions, will I still have the need to cross copy after a kernel
>> update or not?
>> 
> Everything else is mirrored - you should mirror your boot setup ... you
> don't want disk 0 to die, and although (almost) everything is there on
> disk 1 you can't boot the system because grub/efi/whatever isn't there...

Yep.

> The crucial question is whether your updates to your efi partition
> happen under the control of linux, or under the control of efi. If they
> happen at the linux level, then they will happen to both disks together.
> If they happen at the efi level, then they will only happen to disk 0,
> and you will need to re-sync the mirror.

Rather than trying to make some sort of clever mirroring setup work with
the ESP, I just made one partition per disk, added all of them to the
efibootmgr:

Boot000B* Current kernel	HD(1,GPT,b6697409-a6ec-470d-994c-0d4828d08861,0x800,0x200000)/File(\efi\nix\current.efi)
Boot000E* Current kernel (secondary disk)	HD(1,GPT,9f5912c7-46e7-45bf-a49d-969250f0a388,0x800,0x200000)/File(\efi\nix\current.efi)
Boot0011* Current kernel (tertiary disk)	HD(1,GPT,8a5cf352-2e92-43ac-bb23-b0d9f27109e9,0x800,0x200000)/File(\efi\nix\current.efi)
Boot0012* Current kernel (quaternary disk)	HD(1,GPT,83ec2441-79e9-4f3c-86ec-378545f776c6,0x800,0x200000)/File(\efi\nix\current.efi)

... and used simple rsync at kernel install time to keep them in sync
(the variables in here are specific to my autobuilder, but the general
idea should be clear enough):

    # For EFI, we install the kernel by hand.  /boot may be a mountpoint,
    # kept unmounted in normal operation.
    mountpoint -q /boot && mount /boot
    KERNELVER="$(file $BUILDMAKEPATH/.o/arch/x86/boot/bzImage | sed 's,^.*version \([0-9\.]*\).*$,\1,g')"
    install -o root -g root $BUILDMAKEPATH/.o/System.map /boot/System.map-$KERNELVER
    install -o root -g root $BUILDMAKEPATH/.o/arch/x86/boot/bzImage /boot/efi/nix/vmlinux-$KERNELVER.efi
    [[ -f /boot/efi/nix/current.efi ]] && mv /boot/efi/nix/current.efi /boot/efi/nix/old.efi
    install -o root -g root $BUILDMAKEPATH/.o/arch/x86/boot/bzImage /boot/efi/nix/current.efi
    for backup in 1 2 3; do
        test -d /boot/backups/$backup || continue
        mount /boot/backups/$backup
        rsync -rtq --modify-window=2 --one-file-system /boot/ /boot/backups/$backup
        umount /boot/backups/$backup
    done
    mountpoint -q /boot && umount /boot

The firmware should automatically handle booting the first kernel that's
on a disk that works. (I use CONFIG_EFI_STUB, so the kernel is itself
bootable directly by the firmware and I can avoid all that boot manager
rubbish and just let the firmware be my boot manager. I have 4 entries
each in the boot manager menu for the current kernel, the previous one I
installed, and a manually-copied 'stable kernel' that I update whenever
I think a kernel was working well enough to do so.)
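
For reference, entries like these can be created with efibootmgr along
the lines of the following (disk, partition, loader path and kernel
command line are all placeholders):

    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Current kernel" \
        --loader '\efi\nix\current.efi' \
        --unicode 'root=/dev/md125 ro'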

>>> sda2 - swap. I'd make its size equal to ram - and as I said, same on sdb
>>> configured in linux as equal priority to give me a raid-0.
>
>> Thanks for this tip, I would prefer swap and application safety which
>> comes with raid1 in this case. Later I will try to optimize swappiness.
>> 
> I prefer swap to be at least twice ram. A lot of people think I'm daft
> for that, but it always used to be the rule things were better that way.
> It's been pointed out to me that this can be used as a denial of service
> (a fork bomb, for example, will crucify your system until the OOM killer
> takes it out, which will take a LOOONNG time with gigs of VM). Horses
> for courses.

Frankly... I would make this stuff a lot simpler and just not make a
swap partition at all. Instead, make a swapfile: RAID will just happen
for you automatically, you can expand the thing (or shrink it!) with
ease while the system is up, and it's exactly as fast as swap
partitions: swapfiles have been as fast as raw partitions since the
Linux 2.0 days.
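
A minimal sketch of the swapfile route, assuming an ext4 root (size and
path are arbitrary; copy-on-write filesystems need extra care):

    dd if=/dev/zero of=/swapfile bs=1M count=16384
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # and in /etc/fstab:
    /swapfile   none   swap   sw   0 0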

>> What Partition Type do I have to use for /dev/sd[a|b]3? Will it be LVM
>> or RAID?
>> 
> I'd just use type linux ...

Literally nothing cares :) it's for documentation, mostly, to make sure
that e.g. Windows won't randomly smash a Linux partition. So whatever
feels right to you. The only type that really matters is the GUID of the
EFI system partition, and fdisk gets that right these days so you don't
have to care.

 -- N., set up a system without RAID recently and felt so... dirty (it
    has one NVMe device and nothing else). So everything important is
    NFS-mounted from a RAID array, natch. With 10GbE the NFS mounts are
    actually faster than the NVMe device until the server's huge write
    cache fills and it has to actually write it to the array :) but how
    often do you write a hundred gig to a disk at once, anyway?

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 16:14   ` Stefanie Leisestreichler
@ 2020-04-17 23:10     ` Nix
  2020-04-24 19:15       ` Phillip Susi
  0 siblings, 1 reply; 29+ messages in thread
From: Nix @ 2020-04-17 23:10 UTC (permalink / raw)
  To: Stefanie Leisestreichler; +Cc: G, linux-raid

On 14 Apr 2020, Stefanie Leisestreichler outgrape:

> On 14.04.20 18:00, G wrote:
>> Since you are running disks less than 2TB I would suggest a more rudimentary setup using legacy bios booting. This setup will not
>> allow disks greater than 2TB because they would not be partitioned GPT. There would still be an ability to increase total storage
>> using more disks. There would be raid redundancy with the ability for grub to boot off either disk.
>>
>> Two identical partitions on each disk using mbr partition tables.
>
> Thank you for your thoughts and input.
> I would prefer using uefi/gpt in favor of a legacy setup even though you are
> totally right about the 2TB border.

Agreed. I avoided GPT like the plague for ages (shoddy early-2010s
motherboard firmware), but once I switched to it it was so much easier
to manage than old-style BIOS, and so much easier to deal with when
disaster struck, that I'd never consider going back. Does the BIOS have
anything like an EFI shell? No, no it doesn't. Can you hack your own
device drivers up if need be and throw them into place for more recovery
options? No, no you can't.

Can you play DOOM before the OS starts in your BIOS-based system -- oh
wait, sane people don't do that, do they. But you can't get higher
availability for truly important functions than that!
<https://doomwiki.org/wiki/Doom_UEFI>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-14 20:05     ` antlists
@ 2020-04-17 23:24       ` Nix
  0 siblings, 0 replies; 29+ messages in thread
From: Nix @ 2020-04-17 23:24 UTC (permalink / raw)
  To: antlists; +Cc: Stefanie Leisestreichler, linux-raid

On 14 Apr 2020, antlists@youngman.org.uk stated:

> The CPU starts, jumps into the mobo rom, and then looks for an EFI
> partition on the disk. Because it understands VFAT, it then reads the

Side note: technically it doesn't understand VFAT, exactly, and some
vendors (e.g. Apple) provide entirely separate drivers to read the ESP.
However, the fs is *very close* to VFAT (actually FAT32), and in
practice Linux's vfat driver works fine.

See s13.3 "File System Format" in
<https://uefi.org/sites/default/files/resources/UEFI_Spec_2_8_final.pdf>
(and similarly-named sections in earlier spec versions).

> I don't quite get this myself, but this is where it gets confusing.
> EFI I think offers you a choice of bootloaders, which CAN include
> grub, which then offers you a choice of operating systems. But EFI can
> offer you a list of operating systems, too, which is why I said why
> did you want to use grub?

... which is why I don't. In my experience grub(2) is just another thing
to go horribly wrong on an EFI system, with yet *another* set of driver
reimplementations, some of which are downright dangerous. (The XFS one,
for instance, which at one point assumed it could just access the fs as
it was on the disk, when you can't do any such thing without replaying
the journal first: XFS explicitly does not flush the journal on shutdown
because it's a waste of time, given that the journal must always be
present to access the fs anyway: but this means that accessing the fs
without going through the journal can omit any number of transactions,
quite possibly including the transactions that wrote out the kernel you
expected to be booting. Thankfully XFS has since evolved to the point
where grub simply can't read the fs at all -- sparse inode allocation
and CRC'd metadata both break it -- so the temptation to try to use it
on such filesystems is removed.)

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-17 23:06       ` Nix
@ 2020-04-20  9:23         ` Stefanie Leisestreichler
  2020-04-20  9:45         ` Stefanie Leisestreichler
  1 sibling, 0 replies; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-20  9:23 UTC (permalink / raw)
  To: Nix, Wols Lists; +Cc: linux-raid



On 18.04.20 01:06, Nix wrote:
> Frankly... I would make this stuff a lot simpler and just not make a
> swap partition at all. Instead, make a swapfile: RAID will just happen
> for you automatically, you can expand the thing (or shrink it!) with
> ease while the system is up, and it's exactly as fast as swap
> partitions: swapfiles have been as fast as raw partitions since the
> Linux 2.0 days.

This convinced me, I'll take this approach.
Thanks.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-17 23:06       ` Nix
  2020-04-20  9:23         ` Stefanie Leisestreichler
@ 2020-04-20  9:45         ` Stefanie Leisestreichler
  2020-04-20 11:05           ` Nix
  1 sibling, 1 reply; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-20  9:45 UTC (permalink / raw)
  To: Nix, Wols Lists; +Cc: linux-raid



On 18.04.20 01:06, Nix wrote:
> Rather than trying to make some sort of clever mirroring setup work with
> the ESP, I just made one partition per disk, added all of them to the
> efibootmgr:
> 
> Boot000B* Current kernel	HD(1,GPT,b6697409-a6ec-470d-994c-0d4828d08861,0x800,0x200000)/File(\efi\nix\current.efi)
> Boot000E* Current kernel (secondary disk)	HD(1,GPT,9f5912c7-46e7-45bf-a49d-969250f0a388,0x800,0x200000)/File(\efi\nix\current.efi)
> Boot0011* Current kernel (tertiary disk)	HD(1,GPT,8a5cf352-2e92-43ac-bb23-b0d9f27109e9,0x800,0x200000)/File(\efi\nix\current.efi)
> Boot0012* Current kernel (quaternary disk)	HD(1,GPT,83ec2441-79e9-4f3c-86ec-378545f776c6,0x800,0x200000)/File(\efi\nix\current.efi)
> 
> ... and used simple rsync at kernel install time to keep them in sync
> (the variables in here are specific to my autobuilder, but the general
> idea should be clear enough):
> 
>      # For EFI, we install the kernel by hand.  /boot may be a mountpoint,
>      # kept unmounted in normal operation.
>      mountpoint -q /boot && mount /boot
>      KERNELVER="$(file $BUILDMAKEPATH/.o/arch/x86/boot/bzImage | sed 's,^.*version \([0-9\.]*\).*$,\1,g')"
>      install -o root -g root $BUILDMAKEPATH/.o/System.map /boot/System.map-$KERNELVER
>      install -o root -g root $BUILDMAKEPATH/.o/arch/x86/boot/bzImage /boot/efi/nix/vmlinux-$KERNELVER.efi
>      [[ -f /boot/efi/nix/current.efi ]] && mv /boot/efi/nix/current.efi /boot/efi/nix/old.efi
>      install -o root -g root $BUILDMAKEPATH/.o/arch/x86/boot/bzImage /boot/efi/nix/current.efi
>      for backup in 1 2 3; do
>          test -d /boot/backups/$backup || continue
>          mount /boot/backups/$backup
>          rsync -rtq --modify-window=2 --one-file-system /boot/ /boot/backups/$backup
>          umount /boot/backups/$backup
>      done
>      mountpoint -q /boot && umount /boot

Is there any reason why one could not have another entry for each disk 
(8 entries in this example in total instead of the 4 being listed) like:

Boot000B* Current kernel 
HD(1,GPT,b6697409-a6ec-470d-994c-0d4828d08861,0x800,0x200000)/File(\efi\nix\old.efi)

and so on to support booting the old kernel in case something went wrong?

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-20  9:45         ` Stefanie Leisestreichler
@ 2020-04-20 11:05           ` Nix
  2020-04-20 12:01             ` Stefanie Leisestreichler
  0 siblings, 1 reply; 29+ messages in thread
From: Nix @ 2020-04-20 11:05 UTC (permalink / raw)
  To: Stefanie Leisestreichler; +Cc: Wols Lists, linux-raid

On 20 Apr 2020, Stefanie Leisestreichler said:

> On 18.04.20 01:06, Nix wrote:
>> Boot000B* Current kernel	HD(1,GPT,b6697409-a6ec-470d-994c-0d4828d08861,0x800,0x200000)/File(\efi\nix\current.efi)
>> Boot000E* Current kernel (secondary disk)	HD(1,GPT,9f5912c7-46e7-45bf-a49d-969250f0a388,0x800,0x200000)/File(\efi\nix\current.efi)
>> Boot0011* Current kernel (tertiary disk)	HD(1,GPT,8a5cf352-2e92-43ac-bb23-b0d9f27109e9,0x800,0x200000)/File(\efi\nix\current.efi)
>> Boot0012* Current kernel (quaternary disk)	HD(1,GPT,83ec2441-79e9-4f3c-86ec-378545f776c6,0x800,0x200000)/File(\efi\nix\current.efi)
[...]
>
> Is there any reason why one could not have another entry for each disk (8 entries in this example in total instead of the 4 being
> listed) like:
>
> Boot000B* Current kernel HD(1,GPT,b6697409-a6ec-470d-994c-0d4828d08861,0x800,0x200000)/File(\efi\nix\old.efi)
>
> and so on to support booting the old kernel in case something went wrong?

None! I have one entry each for old/stable kernel on the current disk,
but it just gets quite verbose and repetitive with six disks, and the
two (other disks, old kernels) serve different purposes: I'm going to
want to boot off another disk if the kernel works but something went
wrong with the disk, and I'm going to want to boot an old kernel if the
kernel has gone wrong, in which case probably the disk is fine and I can
just boot off the current disk. If I find when booting a new kernel that
it breaks the disk and I have to boot an old kernel off another disk,
I'll just use the EFI shell to do it. :)
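
Roughly: switch to the mapping for the right disk and run the stub
kernel straight from the shell prompt (the filesystem number and path
here are just an illustration):

    Shell> fs1:
    FS1:\> \efi\nix\old.efi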

-- 
NULL && (void)

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-20 11:05           ` Nix
@ 2020-04-20 12:01             ` Stefanie Leisestreichler
  0 siblings, 0 replies; 29+ messages in thread
From: Stefanie Leisestreichler @ 2020-04-20 12:01 UTC (permalink / raw)
  To: Nix; +Cc: Wols Lists, linux-raid



On 20.04.20 13:05, Nix wrote:
>> Is there any reason why one could not have another entry for each disk (8 entries in this example in total instead of the 4 being
>> listed) like:
>>
>> Boot000B* Current kernel HD(1,GPT,b6697409-a6ec-470d-994c-0d4828d08861,0x800,0x200000)/File(\efi\nix\old.efi)
>>
>> and so on to support booting the old kernel in case something went wrong?

> None! I have one entry each for old/stable kernel on the current disk,
> but it just gets quite verbose and repetitive with six disks, and the
> two (other disks, old kernels) serve different purposes: I'm going to
> want to boot off another disk if the kernel works but something went
> wrong with the disk, and I'm going to want to boot an old kernel if the
> kernel has gone wrong, in which case probably the disk is fine and I can
> just boot off the current disk. If I find when booting a new kernel that
> it breaks the disk and I have to boot an old kernel off another disk,
> I'll just use the EFI shell to do it. :)

I guess I got the idea. I need to dive deeper into the EFI shell, but 
this sounds like a smart solution for managing the kernels even though 
there is no grub. Thanks.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-17 23:10     ` Nix
@ 2020-04-24 19:15       ` Phillip Susi
  2020-04-24 20:24         ` Nix
  0 siblings, 1 reply; 29+ messages in thread
From: Phillip Susi @ 2020-04-24 19:15 UTC (permalink / raw)
  To: Nix; +Cc: Stefanie Leisestreichler, G, linux-raid


Nix writes:

> Agreed. I avoided GPT like the plague for ages (shoddy early-2010s
> motherboard firmware), but once I switched to it it was so much easier
> to manage than old-style BIOS, and so much easier to deal with when
> disaster struck, that I'd never consider going back. Does the BIOS have
> anything like an EFI shell? No, no it doesn't. Can you hack your own

My current and previous motherboards with UEFI boot support did not come
with the shell.  I once tried downloading one and running it but could
never find anything useful to do with it.  I also haven't ever been able
to find any useful UEFI drivers or programs.

> Can you play DOOM before the OS starts in your BIOS-based system -- oh
> wait, sane people don't do that, do they. But you can't get higher
> availability for truly important functions than that!
> <https://doomwiki.org/wiki/Doom_UEFI>

OMG!

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Setup Recommendation on UEFI/GRUB/RAID1/LVM
  2020-04-24 19:15       ` Phillip Susi
@ 2020-04-24 20:24         ` Nix
  0 siblings, 0 replies; 29+ messages in thread
From: Nix @ 2020-04-24 20:24 UTC (permalink / raw)
  To: Phillip Susi; +Cc: Stefanie Leisestreichler, G, linux-raid

On 24 Apr 2020, Phillip Susi stated:

>
> Nix writes:
>
>> Agreed. I avoided GPT like the plague for ages (shoddy early-2010s
>> motherboard firmware), but once I switched to it it was so much easier
>> to manage than old-style BIOS, and so much easier to deal with when
>> disaster struck, that I'd never consider going back. Does the BIOS have
>> anything like an EFI shell? No, no it doesn't. Can you hack your own
>
> My current and previous motherboard with UEFI boot support did not come
> with the shell.  I once tried downloading one and running it but never
> could find anything useful to do with it.

It's mostly useful for emergency recovery, I'll admit :)

>                                            I also haven't ever been able
> to find any useful UEFI drivers or programs.

They are very thin on the ground :( a shame, really: it seems like
something you *should* be able to use to build whole rescue
environments, but nooo. I guess it *is* easier to stick those in an
initramfs linked into a kernel built as an EFI stub, since at least it's
a Linux rather than the rather weird PE-based environment which is EFI.

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2020-04-24 20:24 UTC | newest]

Thread overview: 29+ messages
2020-04-14 13:02 Setup Recommendation on UEFI/GRUB/RAID1/LVM Stefanie Leisestreichler
2020-04-14 14:20 ` Wols Lists
2020-04-14 14:27   ` Reindl Harald
2020-04-14 15:12   ` Stefanie Leisestreichler
2020-04-14 17:35     ` Wols Lists
2020-04-14 19:13       ` Stefanie Leisestreichler
2020-04-14 19:36         ` Wols Lists
2020-04-17 23:06       ` Nix
2020-04-20  9:23         ` Stefanie Leisestreichler
2020-04-20  9:45         ` Stefanie Leisestreichler
2020-04-20 11:05           ` Nix
2020-04-20 12:01             ` Stefanie Leisestreichler
2020-04-16  1:50   ` David C. Rankin
2020-04-16 11:16     ` Wols Lists
2020-04-16  1:58   ` David C. Rankin
2020-04-14 16:00 ` G
2020-04-14 16:14   ` Stefanie Leisestreichler
2020-04-17 23:10     ` Nix
2020-04-24 19:15       ` Phillip Susi
2020-04-24 20:24         ` Nix
2020-04-14 16:20   ` Reindl Harald
2020-04-14 16:41     ` Stefanie Leisestreichler
2020-04-16 17:00     ` G
2020-04-14 18:14 ` Phillip Susi
2020-04-14 19:00   ` Stefanie Leisestreichler
2020-04-14 20:05     ` antlists
2020-04-17 23:24       ` Nix
     [not found] ` <394a3255-251c-41d1-8a65-2451e5503ef9@teksavvy.com>
2020-04-15 15:53   ` Stefanie Leisestreichler
2020-04-15 16:10     ` Stefanie Leisestreichler
