All of lore.kernel.org
* a new install - - - putting the system on raid
@ 2022-06-23 12:11 o1bigtenor
  2022-06-23 12:56 ` Wols Lists
  0 siblings, 1 reply; 50+ messages in thread
From: o1bigtenor @ 2022-06-23 12:11 UTC (permalink / raw)
  To: Linux-RAID

Greetings

https://raid.wiki.kernel.org/index.php/SATA_RAID_Boot_Recipe

Found the above recipe - - - the preface there is that this is
an existing system.

I am wanting to have all of /efi/boot, /, swap, /tmp, /var, /usr and
/usr/local on one raid-1 array and a second array for /home - - -
on a new install.

I have tried the following:

1. make large partition on each drive
2. set up raid array (2 separate arrays)
3. unable to place partitions on arrays

1. set up the same partitions on each set of drives
    (did allocate unused space between each partition)
2. was only allowed one partition from each drive for the array

Neither option seems able to give me what I want.
(More security - - - less likely to lose both drives (2 M2s and 2 SSDs).)

Is my only option to set up the arrays and then use LVM2 on top?
(One more point of failure so would rather not.)

Is there another option somewhat like the method outlined above - - -
recipe is some over 10 years old - - - or is this the only way to do things?

Please advise.

TIA

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: a new install - - - putting the system on raid
  2022-06-23 12:11 a new install - - - putting the system on raid o1bigtenor
@ 2022-06-23 12:56 ` Wols Lists
  2022-06-23 14:46   ` ESP and its file system (Re: a new install - - - putting the system on raid) Paul Menzel
  2022-06-23 18:54   ` a new install - - - putting the system on raid Pascal Hambourg
  0 siblings, 2 replies; 50+ messages in thread
From: Wols Lists @ 2022-06-23 12:56 UTC (permalink / raw)
  To: o1bigtenor, Linux-RAID

On 23/06/2022 13:11, o1bigtenor wrote:
> Greetings
> 
> https://raid.wiki.kernel.org/index.php/SATA_RAID_Boot_Recipe
> 
> Found the above recipe - - - the preface there is that this is
> an existing system.
> 
> I am wanting to have all of /efi/boot, /, swap, /tmp, /var, /usr and
> /usr/local on one raid-1 array and a second array for /home - - -
> on a new install.

/efi/boot (a) must be fat32, and (b) must be a "top level" partition. 
Okay, that's not totally true, but near enough, and scuppers your plan 
straight off ...

swap - why mirror it? If you set the fstab priorities to the same value, 
you get a striped raid-0 for free.

/tmp - is usually tmpfs nowadays, if you need disk backing, just make 
sure you've got a big-enough swap (tmpfs defaults to half ram, make it 
bigger and let it swap).
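
A minimal /etc/fstab sketch of both suggestions (equal-priority swap plus an enlarged, swap-backed tmpfs); the UUIDs and sizes below are placeholders, not values from this thread:

```
# Two swap partitions at the same priority: the kernel stripes
# pages across them, raid-0 style, with no md device involved.
UUID=aaaa-placeholder  none  swap  sw,pri=10  0 0
UUID=bbbb-placeholder  none  swap  sw,pri=10  0 0

# /tmp on tmpfs, sized above the half-of-RAM default so it can
# spill into that swap under pressure.
tmpfs  /tmp  tmpfs  size=12G,mode=1777  0 0
```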
> 
> I have tried the following:
> 
> 1. make large partition on each drive
> 2. set up raid array (2 separate arrays)
> 3. unable to place partitions on arrays

Should be able to, but as above for your first array it won't actually 
work ...
> 
> 1. set up the same partitions on each set of drives
>      (did allocate unused space between each partition)
> 2. was only allowed one partition from each drive for the array
> 
> Neither option seems able to give me what I want.
> (More security - - - less likely to lose both drives (2 M2s and 2 SSDs).)
> 
> Is my only option to set up the arrays and then use LVM2 on top?
> (One more point of failure so would rather not.)

Well. I'm using lvm, it's normal practice, but again won't work for your 
first array ...
> 
> Is there another option somewhat like the method outlined above - - -
> recipe is some over 10 years old - - - or is this the only way to do things?

/boot/efi on its own partition

swap - its own partition

/tmp - tmpfs

/ (including /var and /usr) on one array

/home on the other array
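
As a sketch of that layout, assuming two drives at the hypothetical names /dev/nvme0n1 and /dev/nvme1n1 (adjust devices and sizes to taste):

```shell
# Partition each drive identically: ESP, swap, system, home.
# (sgdisk type codes: ef00 = ESP, 8200 = swap, fd00 = Linux RAID)
for d in /dev/nvme0n1 /dev/nvme1n1; do
    sgdisk --zap-all "$d"
    sgdisk -n1:0:+512M -t1:ef00 \
           -n2:0:+16G  -t2:8200 \
           -n3:0:+100G -t3:fd00 \
           -n4:0:0     -t4:fd00 "$d"
done

# One raid-1 per partition set: system and home.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1p3 /dev/nvme1n1p3
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      /dev/nvme0n1p4 /dev/nvme1n1p4

mkfs.ext4 /dev/md0               # /  (including /var and /usr)
mkfs.ext4 /dev/md1               # /home
mkswap /dev/nvme0n1p2 && mkswap /dev/nvme1n1p2
mkfs.vfat -F32 /dev/nvme0n1p1    # /boot/efi (unmirrored)
```

The installer would then be pointed at /dev/md0 for / and /dev/md1 for /home.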
> 
> Please advise.
> 
I've not done it, it's on my list of things to try, but you could put 
/boot/efi on a v1.0-superblock raid-1 array and format it fat32. Make 
sure you know what you're doing!
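
A sketch of that trick, with hypothetical member partitions /dev/sda1 and /dev/sdb1; the 1.0 superblock lives at the end of each member, so the firmware sees a plain FAT filesystem at the start:

```shell
# Mirror the ESP; metadata 1.0 keeps the start of each member
# identical to the start of the filesystem.
mdadm --create /dev/md/esp --level=1 --metadata=1.0 \
      --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.vfat -F32 /dev/md/esp
mount /dev/md/esp /boot/efi
# Caveat: the firmware writes to one member directly, bypassing
# the mirror, which is why this needs care.
```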

That basically leaves swap and /tmp as your only unprotected partitions, 
neither of which is expected to survive any computer problems intact 
anyway (swap depends on your current session, and /tmp is *defined* as 
volatile and lost on shutdown).

My setup only has the one (raid-5) array for all my "real" partitions, 
and I've got lvm to give me / and /home (and others). It also gives me 
some degree of backup capability, as I just take snapshots. Running 
gentoo, that gives me security when I update the system every weekend :-)

https://raid.wiki.kernel.org/index.php/System2020

Cheers,
Wol

* ESP and its file system (Re: a new install - - - putting the system on raid)
  2022-06-23 12:56 ` Wols Lists
@ 2022-06-23 14:46   ` Paul Menzel
  2022-06-23 18:54   ` a new install - - - putting the system on raid Pascal Hambourg
  1 sibling, 0 replies; 50+ messages in thread
From: Paul Menzel @ 2022-06-23 14:46 UTC (permalink / raw)
  To: Wols Lists; +Cc: o1bigtenor, linux-raid

Dear Wol,


On 23.06.22 at 14:56, Wols Lists wrote:

[…]

> /efi/boot (a) must be fat32,

Just a minor (theoretical) correction. To my knowledge the UEFI 
specification requires the UEFI firmware to be able to read a (UEFI) FAT 
partition, but if you are lucky or control the firmware stack, the ESP 
can be formatted with any filesystem.


Kind regards,

Paul

* Re: a new install - - - putting the system on raid
  2022-06-23 12:56 ` Wols Lists
  2022-06-23 14:46   ` ESP and its file system (Re: a new install - - - putting the system on raid) Paul Menzel
@ 2022-06-23 18:54   ` Pascal Hambourg
  2022-06-23 21:39     ` Wols Lists
  1 sibling, 1 reply; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-23 18:54 UTC (permalink / raw)
  To: Wols Lists, o1bigtenor, Linux-RAID

On 23/06/2022 at 14:56, Wols Lists wrote:
> On 23/06/2022 13:11, o1bigtenor wrote:
>>
>> I am wanting to have all of /efi/boot, /, swap, /tmp, /var, /usr and
>> /usr/local on one raid-1 array and a second array for /home - - -
>> on a new install.
> 
> /efi/boot (a) must be fat32, and (b) must be a "top level" partition. 

Right, the UEFI firmware would not be able to read an EFI partition 
inside a partitioned software RAID array, and even on most unpartitioned 
RAID arrays. The most you can do is create a RAID1 array with superblock 
1.0 and use it all as an EFI partition, but registering EFI boot entries 
for each RAID member will be tricky.
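
The tricky registration step might look roughly like this, one entry per member so either disk can boot on its own; disk names and loader path are hypothetical:

```shell
# Point a firmware boot entry at the same loader on each RAID
# member, since the firmware reads each disk directly.
efibootmgr --create --disk /dev/sda --part 1 \
           --label "Linux (disk A)" --loader '\EFI\grub\grubx64.efi'
efibootmgr --create --disk /dev/sdb --part 1 \
           --label "Linux (disk B)" --loader '\EFI\grub\grubx64.efi'
```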

> swap - why mirror it?

For redundancy, of course.

> If you set the fstab priorities to the same value, 
> you get a striped raid-0 for free.

Without any redundancy. What is the point of setting up RAID1 for all 
the rest and then seeing your system crash pitifully when a drive fails 
because half of the swap suddenly becomes unreachable?

>> I have tried the following:
>>
>> 1. make large partition on each drive
>> 2. set up raid array (2 separate arrays)
>> 3. unable to place partitions on arrays
> 
> Should be able to

Yes, RAID arrays should be partitionable by default. But your installer 
may not support it. LVM over RAID may be more widely supported.
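
Outside an installer, partitioning an array by hand does work; a sketch with hypothetical whole-disk members:

```shell
# Build the mirror from whole disks, then partition the md device.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
parted --script /dev/md0 \
    mklabel gpt \
    mkpart root ext4 1MiB 100GiB \
    mkpart home ext4 100GiB 100%
# The kernel exposes the partitions as /dev/md0p1 and /dev/md0p2.
```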

>> 1. set up the same partitions on each set of drives
>>      (did allocate unused space between each partition)
>> 2. was only allowed one partition from each drive for the array

That sounds sensible. A RAID array with several partitions on the same 
drive does not make much sense. You must create a RAID array for each 
partition set.

* Re: a new install - - - putting the system on raid
  2022-06-23 18:54   ` a new install - - - putting the system on raid Pascal Hambourg
@ 2022-06-23 21:39     ` Wols Lists
  2022-06-23 22:27       ` Pascal Hambourg
  0 siblings, 1 reply; 50+ messages in thread
From: Wols Lists @ 2022-06-23 21:39 UTC (permalink / raw)
  To: Pascal Hambourg, o1bigtenor, Linux-RAID

On 23/06/2022 19:54, Pascal Hambourg wrote:
>> If you set the fstab priorities to the same value, you get a striped 
>> raid-0 for free.
> 
> Without any redundancy. What is the point of setting up RAID1 for all 
> the rest and see your system crash pitifully when a drive fails because 
> half of the swap suddenly becomes unreachable ?

Why would it crash? Firstly, the system shouldn't be swapping. MOST 
systems, under MOST workloads, don't need swap.

And secondly, the *system* should not be using swap. User space, yes. So 
a bunch of running stuff might crash. But the system should stay up.

Raid is meant to protect your data. The benefit for raiding your swap is 
much less, and *should* be negligible.

Cheers,
Wol

* Re: a new install - - - putting the system on raid
  2022-06-23 21:39     ` Wols Lists
@ 2022-06-23 22:27       ` Pascal Hambourg
  2022-06-23 23:44         ` Wol
  2022-06-24 18:20         ` a new install - - - putting the system on raid Roman Mamedov
  0 siblings, 2 replies; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-23 22:27 UTC (permalink / raw)
  To: Wols Lists, o1bigtenor, Linux-RAID



On 23/06/2022 at 23:39, Wols Lists wrote:
> On 23/06/2022 19:54, Pascal Hambourg wrote:
>>> If you set the fstab priorities to the same value, you get a striped 
>>> raid-0 for free.
>>
>> Without any redundancy. What is the point of setting up RAID1 for all 
>> the rest and see your system crash pitifully when a drive fails 
>> because half of the swap suddenly becomes unreachable ?
> 
> Why would it crash?

Do you really believe a program can lose some of its data and still 
behave as if nothing happened? If that were true, then why not just 
discard data instead of swapping it out when memory is short?

> Firstly, the system shouldn't be swapping. MOST 
> systems, under MOST workloads, don't need swap.

Conversely, some systems, under some workloads, do need swap. And when 
they do, swap needs to be as reliable as any other storage space.

> And secondly, the *system* should not be using swap. User space, yes. So 
> a bunch of running stuff might crash. But the system should stay up.

Firstly, the *system* is not only the kernel. Many user space processes 
are part of the *system*. Secondly, you were the one who wrote:

"/tmp - is usually tmpfs nowadays, if you need disk backing, just make 
sure you've got a big-enough swap (tmpfs defaults to half ram, make it 
bigger and let it swap)."

> Raid is meant to protect your data. The benefit for raiding your swap is 
> much less, and *should* be negligible.

No, that is what backup is meant for. RAID does not protect your data 
against accidental or malicious deletion or corruption. RAID is meant to 
provide availability. The benefit of having everything including swap on 
RAID is that the system as a whole will continue to operate normally 
when a drive fails.

* Re: a new install - - - putting the system on raid
  2022-06-23 22:27       ` Pascal Hambourg
@ 2022-06-23 23:44         ` Wol
  2022-06-25  8:27           ` Pascal Hambourg
  2022-06-24 18:20         ` a new install - - - putting the system on raid Roman Mamedov
  1 sibling, 1 reply; 50+ messages in thread
From: Wol @ 2022-06-23 23:44 UTC (permalink / raw)
  To: Pascal Hambourg, o1bigtenor, Linux-RAID

On 23/06/2022 23:27, Pascal Hambourg wrote:
> 

>>
>> Why would it crash?
> 
> Do you really believe a program can lose some of its data and still 
> behave as if nothing happened? If that were true, then why not just 
> discard data instead of swapping it out when memory is short?

No ...
> 
>> Firstly, the system shouldn't be swapping. MOST systems, under MOST 
>> workloads, don't need swap.
> 
> Conversely, some systems, under some workloads, do need swap. And when 
> they do, swap needs to be as reliable as any other storage space.

And if your system is one of the "majority" that shouldn't swap, the 
cost/benefit analysis is COMPLETELY different for swap than for main 
storage. So don't treat them the same.
> 
>> And secondly, the *system* should not be using swap. User space, yes. 
>> So a bunch of running stuff might crash. But the system should stay up.
> 
> Firstly, the *system* is not only the kernel. Many user space processes 
> are part of the *system*. Secondly, you were the one who wrote:
> 
> "/tmp - is usually tmpfs nowadays, if you need disk backing, just make 
> sure you've got a big-enough swap (tmpfs defaults to half ram, make it 
> bigger and let it swap)."
> 
And? /tmp is *explicitly* not to be trusted in the event of problems. If 
you lose a disk and it takes /tmp out, sorry. If the tmp-cleaner decides 
to do a random "rm /tmp/*" at an inconvenient moment, well, if the 
system can't handle it then whoever set the system up (or wrote the 
program) was incompetent. Sorry. It's true. (And, no, I'm not claiming 
to be a competent person :-)

>> Raid is meant to protect your data. The benefit for raiding your swap 
>> is much less, and *should* be negligible.
> 
> No, that is what backup is meant for. RAID does not protect your data 
> against accidental or malicious deletion or corruption. RAID is meant to 
> provide availability. The benefit of having everything including swap on 
> RAID is that the system as a whole will continue to operate normally 
> when a drive fails.

And how does backup protect your data when the system crashes? You know, 
all that web-shop data that is fresh and new and arrived after the most 
recent backup 5mins ago? But that is probably irrelevant to most people :-)

(Oh, and I didn't tell o1bigtenor NOT to raid his swap. I asked him WHY 
he would want to. Maybe he has good reason. But I know him of old, and 
have good reason to suspect he's going OTT.)

You need to know what the threats are, what the mitigations are, and 
what strategies are RELEVANT. And you need different strategies for 
long-term, short-term, and immediate protection/threats.

I run xosview. Even with gentoo, and a massive tmpfs, swap in-use sits 
at 0B practically ALL the time. Why would I want to protect it? On the 
other hand, my data sits on raid-5 on top of dm-integrity - protected 
against both disk corruption and disk loss.

And then I usually forget I've got a massive disk sitting there for 
backup. Losing a hard drive doesn't cross my mind, because in pretty 
much 30 years I personally have yet to lose a disk. I know I'm lucky, 
I've recovered other people who have, but ...

As I say, different risks, different mitigations...

Cheers,
Wol

* Re: a new install - - - putting the system on raid
  2022-06-23 22:27       ` Pascal Hambourg
  2022-06-23 23:44         ` Wol
@ 2022-06-24 18:20         ` Roman Mamedov
  2022-06-24 18:27           ` Upgrading motherboard + CPU Alexander Shenkin
  2022-06-25  8:00           ` a new install - - - putting the system on raid Pascal Hambourg
  1 sibling, 2 replies; 50+ messages in thread
From: Roman Mamedov @ 2022-06-24 18:20 UTC (permalink / raw)
  To: Pascal Hambourg; +Cc: Wols Lists, o1bigtenor, Linux-RAID

On Fri, 24 Jun 2022 00:27:45 +0200
Pascal Hambourg <pascal@plouf.fr.eu.org> wrote:

> > Raid is meant to protect your data. The benefit for raiding your swap is 
> > much less, and *should* be negligible.
> 
> No, that is what backup is meant for. RAID does not protect your data 
> against accidental or malicious deletion or corruption. RAID is meant to 
> provide availability. The benefit of having everything including swap on 
> RAID is that the system as a whole will continue to operate normally 
> when a drive fails.

I think the key decider in whether or not RAIDed swap should be a must-have
is whether the system has hot-swap bays for drives.

Also, it seemed like the discussion began in the context of setting up a home
machine, or something otherwise not as mission-critical. And in those cases,
almost nobody will have hot-swap.

As such, if you have to bring down the machine to replace a drive anyway, might
as well tolerate the risk of it going down with a bang (due to a part of swap
going away), and enjoy a faster swap on either RAID0 or multiple independent
swap zones for the rest of the time.

-- 
With respect,
Roman

* Upgrading motherboard + CPU
  2022-06-24 18:20         ` a new install - - - putting the system on raid Roman Mamedov
@ 2022-06-24 18:27           ` Alexander Shenkin
  2022-06-24 18:44             ` Roman Mamedov
  2022-06-25  8:00           ` a new install - - - putting the system on raid Pascal Hambourg
  1 sibling, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-24 18:27 UTC (permalink / raw)
  To: Linux-RAID


Hi all,

I have 1 RAID6 (root) and 1 RAID1 (boot) array running across 7 drives 
in my Ubuntu 20.04 system.  I bought a new motherboard and CPU that I'd 
like to replace my current ones.  In non-raid systems, I get the sense 
that it's not a very risky operation.  However, I suspect RAID makes it 
more tricky.  Wondering if anyone can offer any advice here?  Do I have 
to make sure sata cables are plugged into corresponding ports in the new 
motherboard?  etc.  Many thanks in advance.

Allie

* Re: Upgrading motherboard + CPU
  2022-06-24 18:27           ` Upgrading motherboard + CPU Alexander Shenkin
@ 2022-06-24 18:44             ` Roman Mamedov
  2022-06-24 18:46               ` Alexander Shenkin
  0 siblings, 1 reply; 50+ messages in thread
From: Roman Mamedov @ 2022-06-24 18:44 UTC (permalink / raw)
  To: Alexander Shenkin; +Cc: Linux-RAID

On Fri, 24 Jun 2022 11:27:08 -0700
Alexander Shenkin <al@shenkin.org> wrote:

> I have 1 RAID6 (root) and 1 RAID1 (boot) array running across 7 drives 
> in my Ubuntu 20.04 system.  I bought a new motherboard and CPU that I'd 
> like to replace my current ones.  In non-raid systems, I get the sense 
> that it's not a very risky operation.  However, I suspect RAID makes it 
> more tricky.

Luckily with software RAID using mdadm it does not.

> Wondering if anyone can offer any advice here?  Do I have to make sure sata
> cables are plugged into corresponding ports in the new motherboard?

No, the order of cables is unimportant. Just plug everything in, and it should
work.

May have to check in BIOS that it doesn't require a UEFI boot partition, but
allows booting from a legacy-style bootloader, in settings such as "Launch
CSM: Enabled" and "Boot device control: (UEFI and) Legacy".

-- 
With respect,
Roman

* Re: Upgrading motherboard + CPU
  2022-06-24 18:44             ` Roman Mamedov
@ 2022-06-24 18:46               ` Alexander Shenkin
  2022-06-24 19:14                 ` Wol
  0 siblings, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-24 18:46 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Linux-RAID

Fantastic, thanks Roman.

On 6/24/2022 11:44 AM, Roman Mamedov wrote:
> On Fri, 24 Jun 2022 11:27:08 -0700
> Alexander Shenkin <al@shenkin.org> wrote:
> 
>> I have 1 RAID6 (root) and 1 RAID1 (boot) array running across 7 drives
>> in my Ubuntu 20.04 system.  I bought a new motherboard and CPU that I'd
>> like to replace my current ones.  In non-raid systems, I get the sense
>> that it's not a very risky operation.  However, I suspect RAID makes it
>> more tricky.
> 
> Luckily with software RAID using mdadm it does not.
> 
>> Wondering if anyone can offer any advice here?  Do I have to make sure sata
>> cables are plugged into corresponding ports in the new motherboard?
> 
> No, the order of cables is unimportant. Just plug everything in, and it should
> work.
> 
> May have to check in BIOS that it doesn't require a UEFI boot partition, but
> allows to boot from a legacy-style bootloader, in settings such as "Launch
> CSM: Enabled" and "Boot device control: (UEFI and) Legacy".
> 

* Re: Upgrading motherboard + CPU
  2022-06-24 18:46               ` Alexander Shenkin
@ 2022-06-24 19:14                 ` Wol
  2022-06-24 20:23                   ` Alexander Shenkin
  0 siblings, 1 reply; 50+ messages in thread
From: Wol @ 2022-06-24 19:14 UTC (permalink / raw)
  To: Alexander Shenkin, Roman Mamedov; +Cc: Linux-RAID

On 24/06/2022 19:46, Alexander Shenkin wrote:
> Fantastic, thanks Roman.
> 
> On 6/24/2022 11:44 AM, Roman Mamedov wrote:
>> On Fri, 24 Jun 2022 11:27:08 -0700
>> Alexander Shenkin <al@shenkin.org> wrote:
>>
>>> I have 1 RAID6 (root) and 1 RAID1 (boot) array running across 7 drives
>>> in my Ubuntu 20.04 system.  I bought a new motherboard and CPU that I'd
>>> like to replace my current ones.  In non-raid systems, I get the sense
>>> that it's not a very risky operation.  However, I suspect RAID makes it
>>> more tricky.
>>
>> Luckily with software RAID using mdadm it does not.
>>
Just make sure fstab (and anywhere else you may have manually altered) 
is using uuids, and not device references (like /dev/sdxn).
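
To check, something like the following; the md device name and UUID are placeholders:

```shell
# Print the filesystem UUID of a device:
blkid /dev/md0
# Or list every filesystem by UUID:
ls -l /dev/disk/by-uuid/

# An fstab entry by UUID survives device renames:
#   UUID=1234abcd-placeholder  /  ext4  defaults  0 1
```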

Oh - and EFI / BIOS will always look for device 1. So that's a minor pain.

And both of those rules apply raid or not :-)

Cheers,
Wol

* Re: Upgrading motherboard + CPU
  2022-06-24 19:14                 ` Wol
@ 2022-06-24 20:23                   ` Alexander Shenkin
  2022-06-24 22:00                     ` Wol
  0 siblings, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-24 20:23 UTC (permalink / raw)
  To: Wol, Roman Mamedov; +Cc: Linux-RAID

Smart, thanks Wol.  I'm good on the UUIDs.  Not sure what you mean by 
'device 1' though?

On 6/24/2022 12:14 PM, Wol wrote:
> On 24/06/2022 19:46, Alexander Shenkin wrote:
>> Fantastic, thanks Roman.
>>
>> On 6/24/2022 11:44 AM, Roman Mamedov wrote:
>>> On Fri, 24 Jun 2022 11:27:08 -0700
>>> Alexander Shenkin <al@shenkin.org> wrote:
>>>
>>>> I have 1 RAID6 (root) and 1 RAID1 (boot) array running across 7 drives
>>>> in my Ubuntu 20.04 system.  I bought a new motherboard and CPU that I'd
>>>> like to replace my current ones.  In non-raid systems, I get the sense
>>>> that it's not a very risky operation.  However, I suspect RAID makes it
>>>> more tricky.
>>>
>>> Luckily with software RAID using mdadm it does not.
>>>
> Just make sure fstab (and anywhere else you may have manually altered) 
> is using uuids, and not device references (like /dev/sdxn).
> 
> Oh - and EFI / BIOS will always look for device 1. So that's a minor pain.
> 
> And both of those rules apply raid or not :-)
> 
> Cheers,
> Wol

* Re: Upgrading motherboard + CPU
  2022-06-24 20:23                   ` Alexander Shenkin
@ 2022-06-24 22:00                     ` Wol
  2022-06-24 22:06                       ` Alexander Shenkin
  2022-06-24 22:08                       ` Roman Mamedov
  0 siblings, 2 replies; 50+ messages in thread
From: Wol @ 2022-06-24 22:00 UTC (permalink / raw)
  To: Alexander Shenkin, Roman Mamedov; +Cc: Linux-RAID

On 24/06/2022 21:23, Alexander Shenkin wrote:
> Smart, thanks Wol.  I'm good on the UUIDs.  Not sure what you mean by 
> 'device 1' though?

Sata port 1. /dev/sda.

So your boot device is currently in physical connector 1 on the mobo. If 
you move it across, you need to make sure it stays in physical position 
1, otherwise the mobo will try to boot off whatever disk is in position 
1, and there won't be a boot system to boot off!

Remember, uuids rely on linux being running. But linux can't run until 
AFTER the boot code has run, so the boot code knows nothing about uuids 
and relies on physical locations. Cart before horse, catch-22, all that 
palaver you know :-)

Cheers,
Wol

* Re: Upgrading motherboard + CPU
  2022-06-24 22:00                     ` Wol
@ 2022-06-24 22:06                       ` Alexander Shenkin
  2022-06-24 22:42                         ` Ram Ramesh
  2022-06-24 22:08                       ` Roman Mamedov
  1 sibling, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-24 22:06 UTC (permalink / raw)
  To: Wol, Roman Mamedov; +Cc: Linux-RAID

Got it, thanks.  Hopefully I should have all my disks bootable... but 
better safe than sorry.

On 6/24/2022 3:00 PM, Wol wrote:
> On 24/06/2022 21:23, Alexander Shenkin wrote:
>> Smart, thanks Wol.  I'm good on the UUIDs.  Not sure what you mean by 
>> 'device 1' though?
> 
> Sata port 1. /dev/sda.
> 
> So your boot device is currently in physical connector 1 on the mobo. If 
> you move it across, you need to make sure it stays in physical position 
> 1, otherwise the mobo will try to boot off whatever disk is in position 
> 1, and there won't be a boot system to boot off!
> 
> Remember, uuids rely on linux being running. But linux can't run until 
> AFTER the boot code has run, so the boot code knows nothing about uuids 
> and relies on physical locations. Cart before horse, catch-22, all that 
> palaver you know :-)
> 
> Cheers,
> Wol

* Re: Upgrading motherboard + CPU
  2022-06-24 22:00                     ` Wol
  2022-06-24 22:06                       ` Alexander Shenkin
@ 2022-06-24 22:08                       ` Roman Mamedov
  2022-06-25  7:14                         ` Wols Lists
  2022-06-25  7:54                         ` Pascal Hambourg
  1 sibling, 2 replies; 50+ messages in thread
From: Roman Mamedov @ 2022-06-24 22:08 UTC (permalink / raw)
  To: Wol; +Cc: Alexander Shenkin, Linux-RAID

On Fri, 24 Jun 2022 23:00:02 +0100
Wol <antlists@youngman.org.uk> wrote:

> So your boot device is currently in physical connector 1 on the mobo. If 
> you move it across, you need to make sure it stays in physical position 
> 1, otherwise the mobo will try to boot off whatever disk is in position 
> 1, and there won't be a boot system to boot off!

Using RAID1 across all disks with metadata 0.90 for /boot makes sure the BIOS
can boot no matter which disk it tries first.

In fact it could be with recent grub the "0.90" part is not even required
anymore.

Just make sure to "grub-install /dev/sdX" at least once to all of them.
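
For example (hypothetical device names), installing GRUB's boot code on every member so whichever disk the BIOS tries first can boot:

```shell
# Repeat for every drive carrying a /boot mirror member.
for d in /dev/sda /dev/sdb /dev/sdc; do
    grub-install "$d"
done
```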

-- 
With respect,
Roman

* Re: Upgrading motherboard + CPU
  2022-06-24 22:06                       ` Alexander Shenkin
@ 2022-06-24 22:42                         ` Ram Ramesh
  0 siblings, 0 replies; 50+ messages in thread
From: Ram Ramesh @ 2022-06-24 22:42 UTC (permalink / raw)
  To: Alexander Shenkin, Wol, Roman Mamedov; +Cc: Linux-RAID

On 6/24/22 17:06, Alexander Shenkin wrote:
> Got it, thanks.  Hopefully I should have all my disks bootable... 
> but better safe than sorry.
>
> On 6/24/2022 3:00 PM, Wol wrote:
>> On 24/06/2022 21:23, Alexander Shenkin wrote:
>>> Smart, thanks Wol.  I'm good on the UUIDs.  Not sure what you mean 
>>> by 'device 1' though?
>>
>> Sata port 1. /dev/sda.
>>
>> So your boot device is currently in physical connector 1 on the mobo. 
>> If you move it across, you need to make sure it stays in physical 
>> position 1, otherwise the mobo will try to boot off whatever disk is 
>> in position 1, and there won't be a boot system to boot off!
>>
>> Remember, uuids rely on linux being running. But linux can't run 
>> until AFTER the boot code has run, so the boot code knows nothing 
>> about uuids and relies on physical locations. Cart before horse, 
>> catch-22, all that palaver you know :-)
>>
>> Cheers,
>> Wol
Whenever I upgrade my machine (new MB/CPU), I note down the name of the 
disk I currently boot from: look in /dev/disk/by-id and note down the 
disk's ID.
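
The lookup amounts to:

```shell
# Stable, hardware-derived names; each symlink shows which
# kernel device node it currently maps to.
ls -l /dev/disk/by-id/
# And which device backs the mounted root right now:
findmnt -no SOURCE /
```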

After building the new machine, get into the BIOS setup and choose that 
disk as the first boot disk. You will have to enable CSM before choosing 
the disk if it is a legacy boot.

Recent motherboards' BIOS will not enable CSM without a video card, as 
the video BIOS is removed in those boards. So you may have to either 
switch to UEFI boot first (very doable) on the old machine itself or get 
a spare video card (I have never used this approach, so I am not sure).

Also, 12th gen intel CPUs need 5.16 or later kernel for video support. 
Otherwise, you will get blank screen. I have no idea why.

Also, make sure you go to manufacturer's website and read the MB spec to 
ensure all devices (network/audio etc) you care about are supported in 
your kernel.

Booting from bleeding-edge hardware has always been tricky in my experience.

Regards
Ramesh


* Re: Upgrading motherboard + CPU
  2022-06-24 22:08                       ` Roman Mamedov
@ 2022-06-25  7:14                         ` Wols Lists
  2022-06-25  7:54                         ` Pascal Hambourg
  1 sibling, 0 replies; 50+ messages in thread
From: Wols Lists @ 2022-06-25  7:14 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Alexander Shenkin, Linux-RAID

On 24/06/2022 23:08, Roman Mamedov wrote:
> On Fri, 24 Jun 2022 23:00:02 +0100
> Wol <antlists@youngman.org.uk> wrote:
> 
>> So your boot device is currently in physical connector 1 on the mobo. If
>> you move it across, you need to make sure it stays in physical position
>> 1, otherwise the mobo will try to boot off whatever disk is in position
>> 1, and there won't be a boot system to boot off!
> 
> Using RAID1 across all disks with metadata 0.90 for /boot makes sure the BIOS
> can boot no matter which disk it tries first.
> 
> In fact it could be with recent grub the "0.90" part is not even required
> anymore.
> 
> Just make sure to "grub-install /dev/sdX" at least once to all of them.
> 
Don't use 0.9, use 1.0.

Both of these put the superblock at the end, so the start of the array 
on disk is also the start of the data area. Which means you can bypass 
the raid (normally a bad idea, but here you need it).

Thing is, v0.x is deprecated: if it breaks it won't be fixed. v1.x is 
supported; all the x indicates is the location of the superblock, which 
is identical across all 1.x arrays.

1.0 is at the end, not necessarily a good idea. To be avoided where 
possible.

1.1 is at the start. Where any problems with a rogue partitioner will 
trample it. To be avoided.

1.2 leaves a 4K scratch space and puts the superblock after that. So it 
will survive most damage to the start of a partition. Recommended.
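
You can confirm which superblock version an existing member carries (member partition name hypothetical):

```shell
# "Version" reports the metadata format (0.90, 1.0, 1.1 or 1.2),
# and hence where the superblock sits on the member.
mdadm --examine /dev/sda2 | grep -i 'version'
```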

Cheers,
Wol

* Re: Upgrading motherboard + CPU
  2022-06-24 22:08                       ` Roman Mamedov
  2022-06-25  7:14                         ` Wols Lists
@ 2022-06-25  7:54                         ` Pascal Hambourg
  2022-06-25 13:35                           ` Stephan
  1 sibling, 1 reply; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-25  7:54 UTC (permalink / raw)
  To: Roman Mamedov, Wol; +Cc: Alexander Shenkin, Linux-RAID

On 25/06/2022 at 00:08, Roman Mamedov wrote:
> 
> Using RAID1 across all disks with metadata 0.90 for /boot makes sure the BIOS
> can boot no matter which disk it tries first.

Why 0.90 ? It is obsolete. If you need RAID metadata at the end of RAID 
members for whatever ugly reason, you can use metadata 1.0 instead. But 
AFAIK it is only needed as a dirty hack with boot loaders which do not 
really support software RAID and must see RAID partitions as native 
filesystems.

> In fact it could be with recent grub the "0.90" part is not even required
> anymore.

GRUB has supported RAID metadata 1.x since version 1.99 (2010).

* Re: a new install - - - putting the system on raid
  2022-06-24 18:20         ` a new install - - - putting the system on raid Roman Mamedov
  2022-06-24 18:27           ` Upgrading motherboard + CPU Alexander Shenkin
@ 2022-06-25  8:00           ` Pascal Hambourg
  1 sibling, 0 replies; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-25  8:00 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Wols Lists, o1bigtenor, Linux-RAID

On 24/06/2022 at 20:20, Roman Mamedov wrote:
> 
> I think the key decider in whether or not a RAIDed swap should be a must-have,
> is whether the system has hot-swap bays for drives.

Why? What has hot-swap to do with RAIDed swap?

> Also, it seemed like the discussion began in the context of setting up a home
> machine, or something otherwise not as mission-critical. And in those cases,
> almost nobody will have hot-swap.
> 
> As such, if you have to bring down the machine to replace a drive anyway, might
> as well tolerate the risk of it going down with a bang (due to a part of swap
> going away)

I disagree. Even without hot-swap, RAIDed swap allows you to control how 
and when the downtime happens in order to replace a faulty drive. 
Without RAIDed swap, you cannot.
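[Editor's note: a minimal sketch of swap on RAID1, assuming sda3/sdb3 are spare equal-sized partitions — all names illustrative:]

```shell
# Mirror the two partitions, then swap on the array, not on the members
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2

# /etc/fstab entry so it survives reboots:
#   /dev/md2  none  swap  sw  0  0
```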

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: a new install - - - putting the system on raid
  2022-06-23 23:44         ` Wol
@ 2022-06-25  8:27           ` Pascal Hambourg
  2022-06-25 11:41             ` Wols Lists
  0 siblings, 1 reply; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-25  8:27 UTC (permalink / raw)
  To: Wol, o1bigtenor, Linux-RAID

On 24/06/2022 at 01:44, Wol wrote:
> On 23/06/2022 23:27, Pascal Hambourg wrote:
>>
>>> Firstly, the system shouldn't be swapping. MOST systems, under MOST 
>>> workloads, don't need swap.
>>
>> Conversely, some systems, under some workloads, do need swap. And when 
>> they do, swap needs to be as reliable as any other storage space.
> 
> And if your system is one of the ?majority? that shouldn't swap, the 
> cost/benefit analysis is COMPLETELY different for swap than for main 
> storage. So don't treat them the same.

If your system should not swap, then why use any swap at all?

>>> And secondly, the *system* should not be using swap. User space, yes. 
>>> So a bunch of running stuff might crash. But the system should stay up.
>>
>> Firstly, the *system* is not only the kernel. Many user space 
>> processes are part of the *system*. Secondly, you were the one who wrote:
>>
>> "/tmp - is usually tmpfs nowadays, if you need disk backing, just make 
>> sure you've got a big-enough swap (tmpfs defaults to half ram, make it 
>> bigger and let it swap)."
>>
> And? /tmp is *explicitly* not to be trusted in the event of problems. If 
> you lose a disk and it takes /tmp out, sorry.

Source?

> If the tmp-cleaner decides 
> to do a random "rm /tmp/*" at an inconvenient moment, well, if the 
> system can't handle it then whoever set the system up (or wrote the 
> program) was incompetent.

Cleaning /tmp is not the same as losing access to it. Open files are 
still accessible.

>>> Raid is meant to protect your data. The benefit for raiding your swap 
>>> is much less, and *should* be negligible.
>>
>> No, that is what backup is meant for. RAID does not protect your data 
>> against accidental or malicious deletion or corruption. RAID is meant 
>> to provide availability. The benefit of having everything including 
>> swap on RAID is that the system as a whole will continue to operate 
>> normally when a drive fails.
> 
> And how does backup protect your data when the system crashes? You know, 
> all that web-shop data that is fresh and new and arrived after the most 
> recent backup 5mins ago?

Use RAID, so that the system does not crash when a drive fails.

> (Oh, and I didn't tell o1bigtenor NOT to raid his swap. I asked him WHY 
> he would want to. Maybe he has good reason. But I know him of old, and 
> have good reason to suspect he's going OTT.)

I think you do not need a good reason to have swap on RAID when all the 
rest is on RAID. It is the other way around: you need a good reason not 
to have swap on RAID.

> I run xosview. Even with gentoo, and a massive tmpfs, swap in-use sits 
> at 0B practically ALL the time. Why would I want to protect it?

Because you may not want anything to crash when a drive fails while the 
swap is used. If you don't care, well, don't protect the swap.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: a new install - - - putting the system on raid
  2022-06-25  8:27           ` Pascal Hambourg
@ 2022-06-25 11:41             ` Wols Lists
  2022-06-26 21:58               ` Pascal Hambourg
  2022-06-27 10:50               ` swapping (was "Re: a new install - - - putting the system on raid") David T-G
  0 siblings, 2 replies; 50+ messages in thread
From: Wols Lists @ 2022-06-25 11:41 UTC (permalink / raw)
  To: Pascal Hambourg, o1bigtenor, Linux-RAID

On 25/06/2022 09:27, Pascal Hambourg wrote:
> On 24/06/2022 at 01:44, Wol wrote:
>> On 23/06/2022 23:27, Pascal Hambourg wrote:
>>>
>>>> Firstly, the system shouldn't be swapping. MOST systems, under MOST 
>>>> workloads, don't need swap.
>>>
>>> Conversely, some systems, under some workloads, do need swap. And 
>>> when they do, swap needs to be as reliable as any other storage space.
>>
>> And if your system is one of the ?majority? that shouldn't swap, the 
>> cost/benefit analysis is COMPLETELY different for swap than for main 
>> storage. So don't treat them the same.
> 
> If your system should not swap, then why use any swap at all?

Same reason I do? I have a whole bunch of rarely used, and not-at-all 
critical tmpfs's, so the system occasionally spills into swap. A system 
failure is no grief - reboot, carry on where it left off ...
> 
>>>> And secondly, the *system* should not be using swap. User space, 
>>>> yes. So a bunch of running stuff might crash. But the system should 
>>>> stay up.
>>>
>>> Firstly, the *system* is not only the kernel. Many user space 
>>> processes are part of the *system*. Secondly, you were the one who 
>>> wrote:
>>>
>>> "/tmp - is usually tmpfs nowadays, if you need disk backing, just 
>>> make sure you've got a big-enough swap (tmpfs defaults to half ram, 
>>> make it bigger and let it swap)."
>>>
>> And? /tmp is *explicitly* not to be trusted in the event of problems. 
>> If you lose a disk and it takes /tmp out, sorry.
> 
> Source?

Linux Filesystem Hierarchy Standard? The presence of ANY files in /tmp 
is not to be trusted - even if you created it ten seconds ago ...
> 
> 
>> (Oh, and I didn't tell o1bigtenor NOT to raid his swap. I asked him 
>> WHY he would want to. Maybe he has good reason. But I know him of old, 
>> and have good reason to suspect he's going OTT.)
> 
> I think you do not need a good reason to have swap on RAID when all the 
> rest is on RAID. It is the other way around : you need a good reason not 
> to have swap on RAID.
> 
>> I run xosview. Even with gentoo, and a massive tmpfs, swap in-use sits 
>> at 0B practically ALL the time. Why would I want to protect it?
> 
> Because you may not want anything to crash when a drive fails while the 
> swap is used. If you don't care, well, don't protect the swap.

And most people nowadays - certainly on a single-user system - will have 
no reason to care. Certainly my system, with 32GB of ram, is very 
unlikely to spill into swap in normal operation ...

Cheers,
Wol

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25  7:54                         ` Pascal Hambourg
@ 2022-06-25 13:35                           ` Stephan
  2022-06-25 17:10                             ` Wols Lists
  0 siblings, 1 reply; 50+ messages in thread
From: Stephan @ 2022-06-25 13:35 UTC (permalink / raw)
  To: Linux-RAID


Pascal Hambourg <pascal@plouf.fr.eu.org> writes:

> Why 0.90 ? It is obsolete. If you need RAID metadata at the end of
> RAID members for whatever ugly reason, you can use metadata 1.0
> instead. 

Does mdraid with metadata 1 work on the root filesystem w/o initramfs?

-- 
Stephan

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 13:35                           ` Stephan
@ 2022-06-25 17:10                             ` Wols Lists
  2022-06-25 17:38                               ` Alexander Shenkin
  2022-06-26 21:46                               ` Pascal Hambourg
  0 siblings, 2 replies; 50+ messages in thread
From: Wols Lists @ 2022-06-25 17:10 UTC (permalink / raw)
  To: Stephan, Linux-RAID

On 25/06/2022 14:35, Stephan wrote:
> 
> Pascal Hambourg <pascal@plouf.fr.eu.org> writes:
> 
>> Why 0.90 ? It is obsolete. If you need RAID metadata at the end of
>> RAID members for whatever ugly reason, you can use metadata 1.0
>> instead.
> 
> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?
> 
If you're using v1.0, then you could boot off of one of the mirror 
members no problem.

You would point the kernel boot line at sda1 say (if that's part of your 
mirror). IFF that is mounted read-only for boot, then that's not a problem.

Your fstab would then mount /dev/md0 as root read-write, and you're good 
to go ...
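[Editor's note: a sketch of that layout, assuming a v1.0 mirror of sda1 and sdb1 assembled as /dev/md0 — all names illustrative:]

```shell
# Kernel command line (e.g. a GRUB "linux" line): root points at one
# member, read-only. With metadata 1.0 the superblock sits at the end,
# so the member looks like a plain filesystem to the kernel:
#   linux /boot/vmlinuz root=/dev/sda1 ro

# /etc/fstab then mounts the assembled array read-write:
#   /dev/md0  /  ext4  defaults,errors=remount-ro  0  1
```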

Cheers,
Wol

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 17:10                             ` Wols Lists
@ 2022-06-25 17:38                               ` Alexander Shenkin
  2022-06-25 17:43                                 ` Wols Lists
  2022-06-26 21:46                               ` Pascal Hambourg
  1 sibling, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-25 17:38 UTC (permalink / raw)
  To: Wols Lists, Stephan, Linux-RAID


On 6/25/2022 10:10 AM, Wols Lists wrote:
> On 25/06/2022 14:35, Stephan wrote:
>>
>> Pascal Hambourg <pascal@plouf.fr.eu.org> writes:
>>
>>> Why 0.90 ? It is obsolete. If you need RAID metadata at the end of
>>> RAID members for whatever ugly reason, you can use metadata 1.0
>>> instead.
>>
>> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?
>>
> If you're using v1.0, then you could boot off of one of the mirror 
> members no problem.
> 
> You would point the kernel boot line at sda1 say (if that's part of your 
> mirror). IFF that is mounted read-only for boot, then that's not a problem.
> 
> Your fstab would then mount /dev/md0 as root read-write, and you're good 
> to go ...
> 
> Cheers,
> Wol

My metadata is 1.2... I presume that won't cause problems...

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 17:38                               ` Alexander Shenkin
@ 2022-06-25 17:43                                 ` Wols Lists
  2022-06-25 17:53                                   ` Alexander Shenkin
  0 siblings, 1 reply; 50+ messages in thread
From: Wols Lists @ 2022-06-25 17:43 UTC (permalink / raw)
  To: Alexander Shenkin, Stephan, Linux-RAID

On 25/06/2022 18:38, Alexander Shenkin wrote:
> 
> On 6/25/2022 10:10 AM, Wols Lists wrote:
>> On 25/06/2022 14:35, Stephan wrote:
>>>
>>> Pascal Hambourg <pascal@plouf.fr.eu.org> writes:
>>>
>>>> Why 0.90 ? It is obsolete. If you need RAID metadata at the end of
>>>> RAID members for whatever ugly reason, you can use metadata 1.0
>>>> instead.
>>>
>>> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?
>>>
>> If you're using v1.0, then you could boot off of one of the mirror 
>> members no problem.
>>
>> You would point the kernel boot line at sda1 say (if that's part of 
>> your mirror). IFF that is mounted read-only for boot, then that's not 
>> a problem.
>>
>> Your fstab would then mount /dev/md0 as root read-write, and you're 
>> good to go ...
>>
>> Cheers,
>> Wol
> 
> My metadata is 1.2... I presume that won't cause problems...

Do you need an initramfs to assemble raid ... I thought you did ... in 
which case your system will not boot without one if your root is v1.2 ...

Cheers,
Wol

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 17:43                                 ` Wols Lists
@ 2022-06-25 17:53                                   ` Alexander Shenkin
  2022-06-25 18:41                                     ` Wols Lists
  2022-06-25 22:37                                     ` Reindl Harald
  0 siblings, 2 replies; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-25 17:53 UTC (permalink / raw)
  To: Wols Lists, Stephan, Linux-RAID




On 6/25/2022 10:43 AM, Wols Lists wrote:
> On 25/06/2022 18:38, Alexander Shenkin wrote:
>>
>> On 6/25/2022 10:10 AM, Wols Lists wrote:
>>> On 25/06/2022 14:35, Stephan wrote:
>>>>
>>>> Pascal Hambourg <pascal@plouf.fr.eu.org> writes:
>>>>
>>>>> Why 0.90 ? It is obsolete. If you need RAID metadata at the end of
>>>>> RAID members for whatever ugly reason, you can use metadata 1.0
>>>>> instead.
>>>>
>>>> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?
>>>>
>>> If you're using v1.0, then you could boot off of one of the mirror 
>>> members no problem.
>>>
>>> You would point the kernel boot line at sda1 say (if that's part of 
>>> your mirror). IFF that is mounted read-only for boot, then that's not 
>>> a problem.
>>>
>>> Your fstab would then mount /dev/md0 as root read-write, and you're 
>>> good to go ...
>>>
>>> Cheers,
>>> Wol
>>
>> My metadata is 1.2... I presume that won't cause problems...
> 
> Do you need an initramfs to assemble raid ... I thought you did ... in 
> which case your system will not boot without one if your root is v1.2 ...
> 
> Cheers,
> Wol
I have no idea.  My system boots fine now.  Will any of this impact 
booting after the switch to a new CPU + motherboard?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 17:53                                   ` Alexander Shenkin
@ 2022-06-25 18:41                                     ` Wols Lists
  2022-06-25 22:37                                     ` Reindl Harald
  1 sibling, 0 replies; 50+ messages in thread
From: Wols Lists @ 2022-06-25 18:41 UTC (permalink / raw)
  To: Alexander Shenkin, Stephan, Linux-RAID

On 25/06/2022 18:53, Alexander Shenkin wrote:
> 
> 
> 
> On 6/25/2022 10:43 AM, Wols Lists wrote:
>> On 25/06/2022 18:38, Alexander Shenkin wrote:
>>>
>>> On 6/25/2022 10:10 AM, Wols Lists wrote:
>>>> On 25/06/2022 14:35, Stephan wrote:
>>>>>
>>>>> Pascal Hambourg <pascal@plouf.fr.eu.org> writes:
>>>>>
>>>>>> Why 0.90 ? It is obsolete. If you need RAID metadata at the end of
>>>>>> RAID members for whatever ugly reason, you can use metadata 1.0
>>>>>> instead.
>>>>>
>>>>> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?
>>>>>
>>>> If you're using v1.0, then you could boot off of one of the mirror 
>>>> members no problem.
>>>>
>>>> You would point the kernel boot line at sda1 say (if that's part of 
>>>> your mirror). IFF that is mounted read-only for boot, then that's 
>>>> not a problem.
>>>>
>>>> Your fstab would then mount /dev/md0 as root read-write, and you're 
>>>> good to go ...
>>>>
>>>> Cheers,
>>>> Wol
>>>
>>> My metadata is 1.2... I presume that won't cause problems...
>>
>> Do you need an initramfs to assemble raid ... I thought you did ... in 
>> which case your system will not boot without one if your root is v1.2 ...
>>
>> Cheers,
>> Wol
> I have no idea.  My system boots fine now.  Will any of this impact 
> booting after the switch to a new CPU + motherboard?

If it's the same setup, it shouldn't make any difference. Famous last 
words ...

Cheers,
Wol

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 17:53                                   ` Alexander Shenkin
  2022-06-25 18:41                                     ` Wols Lists
@ 2022-06-25 22:37                                     ` Reindl Harald
  2022-06-25 22:45                                       ` Roman Mamedov
  1 sibling, 1 reply; 50+ messages in thread
From: Reindl Harald @ 2022-06-25 22:37 UTC (permalink / raw)
  To: Alexander Shenkin, Wols Lists, Stephan, Linux-RAID



On 25.06.22 at 19:53, Alexander Shenkin wrote:
> I have no idea.  My system boots fine now.  Will any of this impact 
> booting after the switch to a new CPU + motherboard?

Linux couldn't care less about CPU or motherboard as long as your initrd 
isn't "hostonly"

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 22:37                                     ` Reindl Harald
@ 2022-06-25 22:45                                       ` Roman Mamedov
  2022-06-25 23:40                                         ` Reindl Harald
       [not found]                                         ` <CAAMCDecEd1po2WpGT_SyimkJLoitRL-=RxKgDdsFA0LX7=2QuQ@mail.gmail.com>
  0 siblings, 2 replies; 50+ messages in thread
From: Roman Mamedov @ 2022-06-25 22:45 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Alexander Shenkin, Wols Lists, Stephan, Linux-RAID

On Sun, 26 Jun 2022 00:37:42 +0200
Reindl Harald <h.reindl@thelounge.net> wrote:

> 
> 
> On 25.06.22 at 19:53, Alexander Shenkin wrote:
> > I have no idea.  My system boots fine now.  Will any of this impact 
> > booting after the switch to a new CPU + motherboard?
> 
> Linux couldn't care less about CPU or motherboard as long as your initrd 
> isn't "hostonly"

You're all overcomplicating this. Of course the next obvious question is:
what is "hostonly", and how do we check whether we have it?

For the OP: it will boot just fine. Really.

-- 
With respect,
Roman

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 22:45                                       ` Roman Mamedov
@ 2022-06-25 23:40                                         ` Reindl Harald
  2022-06-25 23:50                                           ` Roman Mamedov
       [not found]                                         ` <CAAMCDecEd1po2WpGT_SyimkJLoitRL-=RxKgDdsFA0LX7=2QuQ@mail.gmail.com>
  1 sibling, 1 reply; 50+ messages in thread
From: Reindl Harald @ 2022-06-25 23:40 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Alexander Shenkin, Wols Lists, Stephan, Linux-RAID



On 26.06.22 at 00:45, Roman Mamedov wrote:
> On Sun, 26 Jun 2022 00:37:42 +0200
> Reindl Harald <h.reindl@thelounge.net> wrote:
> 
>>
>>
>> On 25.06.22 at 19:53, Alexander Shenkin wrote:
>>> I have no idea.  My system boots fine now.  Will any of this impact
>>> booting after the switch to a new CPU + motherboard?
>>
>> Linux couldn't care less about CPU or motherboard as long as your initrd
>> isn't "hostonly"
> 
> You're all overcomplicating this too much. Of course the next obvious question,
> what is "hostonly", and how do we check if we have that?

https://fedoraproject.org/wiki/Features/DracutHostOnly

> For the OP: it will boot just fine. Really

except when your initrd misses modules needed after changing hardware

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 23:40                                         ` Reindl Harald
@ 2022-06-25 23:50                                           ` Roman Mamedov
  2022-06-25 23:56                                             ` Reindl Harald
  0 siblings, 1 reply; 50+ messages in thread
From: Roman Mamedov @ 2022-06-25 23:50 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Alexander Shenkin, Wols Lists, Stephan, Linux-RAID

On Sun, 26 Jun 2022 01:40:19 +0200
Reindl Harald <h.reindl@thelounge.net> wrote:

> https://fedoraproject.org/wiki/Features/DracutHostOnly

Thanks! It feels like this should be carefully weighed as to whether it adds more
maintenance issues than any actual savings. At least they had the good
judgment to keep "a boot entry "Rescue System" with a full fledged initramfs",
so even if it is "hostonly", it's not the end of the world.

Do you know if any other distros do the same, how about Ubuntu, which the OP
is using?

> except when your initrd misses modules needed after changing hardware

Luckily a major one needed both before and after will be "ahci" nowadays, and
not much else comes to mind as to what might be missing.

-- 
With respect,
Roman

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 23:50                                           ` Roman Mamedov
@ 2022-06-25 23:56                                             ` Reindl Harald
  0 siblings, 0 replies; 50+ messages in thread
From: Reindl Harald @ 2022-06-25 23:56 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Alexander Shenkin, Wols Lists, Stephan, Linux-RAID



On 26.06.22 at 01:50, Roman Mamedov wrote:
> On Sun, 26 Jun 2022 01:40:19 +0200
> Reindl Harald <h.reindl@thelounge.net> wrote:
> 
>> https://fedoraproject.org/wiki/Features/DracutHostOnly
> 
> Thanks! It feels like this should be carefully weighted whether it adds more
> maintenance issues than any actual savings. At least they had the good
> judgment to keep "a boot entry "Rescue System" with a full fledged initramfs"

which doesn't work in reality, and I have no time and energy to explain why

at least you should have googled "what is hostonly" before, when others 
point out issues that have existed for nearly 10 years

> so even if it is "hostonly", it's not the end of the world.

nothing is the end of the world, but if your system doesn't boot it's a 
problem when you didn't expect it

> Do you know if any other distros do the same, how about Ubuntu, which the OP
> is using?
> 
>> except when your initrd misses modules needed after changing hardware
> 
> Luckily a major one needed both before and after will be "ahci" nowadays, and
> not much else comes to mind as to what might be missing

irrelevant when it comes to your "For the OP: it will boot just fine. 
Really.", which is uneducated and naive - period

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
       [not found]                                         ` <CAAMCDecEd1po2WpGT_SyimkJLoitRL-=RxKgDdsFA0LX7=2QuQ@mail.gmail.com>
@ 2022-06-26 15:34                                           ` Alexander Shenkin
  2022-06-26 15:44                                             ` Alexander Shenkin
  0 siblings, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-26 15:34 UTC (permalink / raw)
  To: Roger Heflin, Roman Mamedov
  Cc: Reindl Harald, Wols Lists, Stephan, Linux-RAID

On 6/25/2022 4:28 PM, Roger Heflin wrote:
> Hostonly only builds the drivers that are needed to boot the given 
> host.  If you change the disk card it may no longer work.
> 
> Hostonly disabled adds all kernel drivers that could be used to boot.  
> Disabling hostonly should be done prior to starting the hw upgrades.   I 
> generally check the initramfs size before and after the rebuild, as 
> disabled hostonly should be much larger.  I have also seen some dists' 
> hostonly not work unless an extra rpm is added, and the size being much 
> larger made me realize it had failed before doing the hw migration and 
> then having to rescue-boot it to fix it.  The note about the extra rpm 
> was in dracut.conf, where the hostonly setting is, on the dist that 
> needed the extra rpm.

So, any idea how to disable hostonly?  I'm not finding it via google...
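[Editor's note: on dracut-based distros (Fedora, RHEL and derivatives) it can be done either way below; this does not apply to Debian/Ubuntu, which use initramfs-tools instead:]

```shell
# One-off: rebuild the running kernel's initramfs without host-only trimming
dracut --force --no-hostonly

# Persistent: disable host-only mode for all future rebuilds
echo 'hostonly="no"' > /etc/dracut.conf.d/no-hostonly.conf
```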

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-26 15:34                                           ` Alexander Shenkin
@ 2022-06-26 15:44                                             ` Alexander Shenkin
  2022-06-26 16:25                                               ` Andy Smith
  0 siblings, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-26 15:44 UTC (permalink / raw)
  To: Roger Heflin, Roman Mamedov
  Cc: Reindl Harald, Wols Lists, Stephan, Linux-RAID


On 6/26/2022 8:34 AM, Alexander Shenkin wrote:
> So, any idea how to disable hostonly?  I'm not finding it via google...

I should mention that /etc/dracut.conf doesn't exist on my system, and 
the dracut package (if it is a package) doesn't show up in apt list, so 
I'm assuming it's not installed.  Does that mean I don't have hostonly 
installed?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-26 15:44                                             ` Alexander Shenkin
@ 2022-06-26 16:25                                               ` Andy Smith
  0 siblings, 0 replies; 50+ messages in thread
From: Andy Smith @ 2022-06-26 16:25 UTC (permalink / raw)
  To: Linux-RAID

Hello,

On Sun, Jun 26, 2022 at 08:44:12AM -0700, Alexander Shenkin wrote:
> 
> On 6/26/2022 8:34 AM, Alexander Shenkin wrote:
> > So, any idea how to disable hostonly?  I'm not finding it via google...
> 
> I should mention that /etc/dracut.conf doesn't exist on my system, and the
> dracut package (if it is a package) doesn't show up in apt list, so I'm
> assuming it's not installed.  Does that mean I don't have hostonly
> installed?

This is a Fedora (and maybe other Red Hat-like) thing. On Debian how
generic your initramfs is, is set in
/etc/initramfs-tools/initramfs.conf:

    # MODULES: [ most | netboot | dep | list ]
    #
    # most - Add most filesystem and all harddrive drivers.
    #
    # dep - Try and guess which modules to load.
    #
    # netboot - Add the base modules, network modules, but skip block devices.
    #
    # list - Only include modules from the 'additional modules' list
    #
    MODULES=most

I think "dep" might be equivalent to this "hostonly" thing, the
point being to include only modules needed for the current hardware
configuration. Which might hamper an effort to move that install
into different hardware. It's not the default.

update-initramfs after changing.
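[Editor's note: a sketch of that on Debian/Ubuntu, run as root, paths per initramfs-tools:]

```shell
# Ensure the generic module policy, then rebuild and check the size;
# a generic "most" image is noticeably larger than a "dep" one
sed -i 's/^MODULES=.*/MODULES=most/' /etc/initramfs-tools/initramfs.conf
update-initramfs -u -k "$(uname -r)"
ls -lh /boot/initrd.img-"$(uname -r)"
```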

Cheers,
Andy

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-25 17:10                             ` Wols Lists
  2022-06-25 17:38                               ` Alexander Shenkin
@ 2022-06-26 21:46                               ` Pascal Hambourg
  2022-06-27  1:32                                 ` Alexander Shenkin
  2022-06-27  7:13                                 ` Stephan
  1 sibling, 2 replies; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-26 21:46 UTC (permalink / raw)
  To: Wols Lists, Stephan, Linux-RAID

On 25/06/2022 at 19:10, Wols Lists wrote:
> On 25/06/2022 14:35, Stephan wrote:
>>
>> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?

No. Why would one not use an initramfs?

> If you're using v1.0, then you could boot off of one of the mirror 
> members no problem.
>
> You would point the kernel boot line at sda1 say (if that's part of your 
> mirror). IFF that is mounted read-only for boot, then that's not a problem.

Mounting read-only does not guarantee that there won't be any writes. 
See mount(8):

-r, --read-only
     Mount the filesystem read-only. A synonym is -o ro.

Note that, depending on the filesystem type, state and kernel behavior, 
the system may still write to the device. For example, ext3 and ext4 
will replay the journal if the filesystem is dirty. To prevent this kind 
of write access, you may want to mount an ext3 or ext4 filesystem with 
the ro,noload mount options or set the block device itself to read-only 
mode, see the blockdev(8) command.
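[Editor's note: for example, to rule out journal replay on an ext4 member before reading it — device name illustrative:]

```shell
# Option 1: mount read-only with journal replay disabled
mount -o ro,noload /dev/sda1 /mnt

# Option 2: make the block device itself read-only first
blockdev --setro /dev/sda1
mount -o ro /dev/sda1 /mnt
blockdev --getro /dev/sda1   # prints 1 while the device is read-only
```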

> Your fstab would then mount /dev/md0 as root read-write

I don't think so. IME the root device in fstab is ignored, only the 
options are used.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: a new install - - - putting the system on raid
  2022-06-25 11:41             ` Wols Lists
@ 2022-06-26 21:58               ` Pascal Hambourg
  2022-06-27 10:50               ` swapping (was "Re: a new install - - - putting the system on raid") David T-G
  1 sibling, 0 replies; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-26 21:58 UTC (permalink / raw)
  To: Wols Lists, o1bigtenor, Linux-RAID

On 25/06/2022 at 13:41, Wols Lists wrote:
> On 25/06/2022 09:27, Pascal Hambourg wrote:
>> On 24/06/2022 at 01:44, Wol wrote:
>>>
>>> And? /tmp is *explicitly* not to be trusted in the event of problems. 
>>> If you lose a disk and it takes /tmp out, sorry.
>>
>> Source ?
> 
> Linux Filesystem Hierarchy Standard? The presence of ANY files in /tmp 
> is not to be trusted - even if you created it ten seconds ago ...

This is not the same as /tmp becoming unusable at all.

As I wrote (and you cut), open files are not deleted and can still be 
read and written to until they are closed. They are only unlinked from 
the directory tree. It is actually rather common to unlink temporary 
files just after opening them, so that they are deleted automatically 
when they are closed.
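[Editor's note: that behaviour is easy to demonstrate from a shell on Linux — unlink a file while holding a descriptor open, and the data stays reachable through the descriptor:]

```shell
#!/bin/sh
tmp=$(mktemp)
exec 3<>"$tmp"          # open the file read-write on fd 3
rm "$tmp"               # unlink it from the directory tree
echo "still here" >&3   # writes through the open fd still work
cat /proc/$$/fd/3       # reads still work too; prints: still here
exec 3>&-               # closing the last fd finally frees the inode
```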

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-26 21:46                               ` Pascal Hambourg
@ 2022-06-27  1:32                                 ` Alexander Shenkin
  2022-06-27  8:49                                   ` Pascal Hambourg
  2022-06-27 11:05                                   ` Roman Mamedov
  2022-06-27  7:13                                 ` Stephan
  1 sibling, 2 replies; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-27  1:32 UTC (permalink / raw)
  To: Pascal Hambourg, Wols Lists, Stephan, Linux-RAID


Ok all.  I've put in the new mobo + CPU, and the BIOS isn't finding any 
bootable devices.  Suggestions for next steps?  Thanks in advance...

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-26 21:46                               ` Pascal Hambourg
  2022-06-27  1:32                                 ` Alexander Shenkin
@ 2022-06-27  7:13                                 ` Stephan
  2022-06-27 10:03                                   ` Pascal Hambourg
  1 sibling, 1 reply; 50+ messages in thread
From: Stephan @ 2022-06-27  7:13 UTC (permalink / raw)
  To: Pascal Hambourg; +Cc: Wols Lists, Stephan, Linux-RAID

Pascal Hambourg <pascal@plouf.fr.eu.org> writes:

> On 25/06/2022 at 19:10, Wols Lists wrote:
>> On 25/06/2022 14:35, Stephan wrote:
>>>
>>> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?
>
> No. Why would one not use an initramfs ?

An initramfs adds unnecessary opacity to the system.  

>> If you're using v1.0, then you could boot off of one of the mirror
>> members no problem.
>>
>> You would point the kernel boot line at sda1 say (if that's part of
>> your mirror). IFF that is mounted read-only for boot, then that's
>> not a problem.
>
> Mounting read-only does not guarantee that there won't be any
> write. See man mount(8) :
>
> -r, --read-only
>     Mount the filesystem read-only. A synonym is -o ro.
>
> Note that, depending on the filesystem type, state and kernel
> behavior, the system may still write to the device. For example, ext3
> and ext4 will replay the journal if the filesystem is dirty. To
> prevent this kind of write access, you may want to mount an ext3 or
> ext4 filesystem with the ro,noload mount options or set the block
> device itself to read-only mode, see the blockdev(8) command.

Good point.  Thus, there is no alternative to superblock 0.90 for root on
mdraid w/o initramfs.  This is the answer to the question why somebody
(like me) may need to use superblock 0.90.

>> Your fstab would then mount /dev/md0 as root read-write
>
> I don't think so. IME the root device in fstab is ignored, only the
> options are used.

This is some of that opacity.  Will the / entry in /etc/fstab be copied 
to the initramfs and used for mounting the real root filesystem?  You 
imply that this is the case, but that the device will be ignored.  Why?

Regards,
-- 
Stephan

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-27  1:32                                 ` Alexander Shenkin
@ 2022-06-27  8:49                                   ` Pascal Hambourg
  2022-06-27 11:05                                   ` Roman Mamedov
  1 sibling, 0 replies; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-27  8:49 UTC (permalink / raw)
  To: Alexander Shenkin, Wols Lists, Stephan, Linux-RAID

On 27/06/2022 at 03:32, Alexander Shenkin wrote:
> 
> Ok all.  I've put in the new mobo + CPU, and the BIOS isn't finding any 
> bootable devices.  Suggestions for next steps?  Thanks in advance...

Was the system originally installed for BIOS or EFI boot?

If BIOS, was the bootloader installed in the MBR of all drives? Does 
the new motherboard support BIOS/legacy boot?

If EFI, was a copy of the boot loader installed in the "removable media 
path" (\EFI\Boot\Bootx64.efi) of the EFI partition(s)?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-27  7:13                                 ` Stephan
@ 2022-06-27 10:03                                   ` Pascal Hambourg
  0 siblings, 0 replies; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-27 10:03 UTC (permalink / raw)
  To: Stephan; +Cc: Wols Lists, Linux-RAID

Le 27/06/2022 à 09:13, Stephan a écrit :
> Pascal Hambourg <pascal@plouf.fr.eu.org> writes:
> 
>> Le 25/06/2022 à 19:10, Wols Lists wrote :
>>> On 25/06/2022 14:35, Stephan wrote:
>>>>
>>>> Does mdraid with metadata 1 work on the root filesystem w/o initramfs?
>>
>> No. Why would one not use an initramfs ?
> 
> An initramfs adds unnecessary opacity to the system.

An initramfs is already necessary if root, /usr or the hibernation swap 
is on LVM or LUKS, if a UUID or LABEL is used to specify root, with a 
merged or separate /usr... Some distributions no longer support booting 
without an initramfs at all.

>>> If you're using v1.0, then you could boot off of one of the mirror
>>> members no problem.
>>>
>>> You would point the kernel boot line at sda1 say (if that's part of
>>> your mirror). IFF that is mounted read-only for boot, then that's
>>> not a problem.
>>
>> Mounting read-only does not guarantee that there won't be any
>> write. See man mount(8)
(...)
> Good point.  Thus, there is no alternative to superblock 0.90 for root on
> mdraid w/o initramfs.  This is the answer to the question why somebody
> (like me) may need to use superblock 0.90.
> 
>>> Your fstab would then mount /dev/md0 as root read-write
>>
>> I don't think so. IME the root device in fstab is ignored, only the
>> options are used.
> 
> This is some of the intransparency.  Will the / entry in the /etc/fstab
> be copied to the initramfs to use it for mounting the real root
> filesystem?  You imply that this is the case but the device will be
> ignored.  Why?

There are different kinds of initramfs out there, and I don't know them 
all. I can only speak about the initramfs built by initramfs-tools, used 
by default in Debian and derivatives.

AFAIK, /etc/fstab is not copied into the initramfs. The initramfs uses 
only the command line parameters (root=, rootfstype=, rootflags=, 
ro|rw...) to mount the real root filesystem, then hands over to the real 
root init.

At some point, init remounts the root filesystem (mount -o remount /) to 
apply the mount options in /etc/fstab (this is usually when ro is 
changed to rw). Remount does not change the root device.
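
The effect can be sketched like this (the command-line string is a
made-up sample; a real initramfs script would read /proc/cmdline):

```shell
# How an initramfs-tools style script derives the root device from
# the kernel command line alone -- /etc/fstab plays no part here.
CMDLINE='BOOT_IMAGE=/vmlinuz root=/dev/md0 ro quiet'

ROOT=''; MODE='rw'
for param in $CMDLINE; do
    case "$param" in
        root=*) ROOT="${param#root=}" ;;
        ro)     MODE='ro' ;;
        rw)     MODE='rw' ;;
    esac
done
echo "mount -o $MODE $ROOT /root"   # what the initramfs effectively does
# Later, the real init applies the fstab *options* (not the device):
#   mount -o remount /
```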

^ permalink raw reply	[flat|nested] 50+ messages in thread

* swapping (was "Re: a new install - - - putting the system on raid")
  2022-06-25 11:41             ` Wols Lists
  2022-06-26 21:58               ` Pascal Hambourg
@ 2022-06-27 10:50               ` David T-G
  1 sibling, 0 replies; 50+ messages in thread
From: David T-G @ 2022-06-27 10:50 UTC (permalink / raw)
  To: Linux-RAID

...and then Wols Lists said...
% 
...
% And most people nowadays - certainly on a single-user system - will have no
% reason to care. Certainly my system, with 32GB of ram, is very unlikely to
% spill into swap in normal operation ...

... but my desktop, with 16G of RAM and 64G of swap, routinely runs along
at about half used.  There are lots of different configs out there :-)


% 
% Cheers,
% Wol


HAND

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-27  1:32                                 ` Alexander Shenkin
  2022-06-27  8:49                                   ` Pascal Hambourg
@ 2022-06-27 11:05                                   ` Roman Mamedov
       [not found]                                     ` <693d4b1c-ee58-4cc2-854b-4ae445ff7d24@Spark>
  1 sibling, 1 reply; 50+ messages in thread
From: Roman Mamedov @ 2022-06-27 11:05 UTC (permalink / raw)
  To: Alexander Shenkin; +Cc: Pascal Hambourg, Wols Lists, Stephan, Linux-RAID

On Sun, 26 Jun 2022 18:32:26 -0700
Alexander Shenkin <al@shenkin.org> wrote:

> Ok all.  I've put in the new mobo + CPU, and the BIOS isn't finding any 
> bootable devices.  Suggestions for next steps?  Thanks in advance...

As mentioned before, did you find and enable CSM and "Boot mode: UEFI and
Legacy" in BIOS?

If you have trouble finding any of that, perhaps post the motherboard model,
we could check the exact settings to look for.

-- 
With respect,
Roman

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
       [not found]                                     ` <693d4b1c-ee58-4cc2-854b-4ae445ff7d24@Spark>
@ 2022-06-27 14:26                                       ` Pascal Hambourg
  2022-06-28 21:32                                         ` Alexander Shenkin
  0 siblings, 1 reply; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-27 14:26 UTC (permalink / raw)
  To: Alexander Shenkin, Roman Mamedov; +Cc: Wols Lists, Stephan, Linux-RAID

On 27/06/2022 15:57, Alexander Shenkin wrote :
> 
> CSM is greyed out.  I think I need a graphics card

Is secure boot disabled? Legacy boot is not compatible with secure boot.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-27 14:26                                       ` Pascal Hambourg
@ 2022-06-28 21:32                                         ` Alexander Shenkin
  2022-06-28 21:57                                           ` Ram Ramesh
  0 siblings, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-28 21:32 UTC (permalink / raw)
  To: Pascal Hambourg, Roman Mamedov; +Cc: Wols Lists, Stephan, Linux-RAID


On 6/27/2022 7:26 AM, Pascal Hambourg wrote:
> On 27/06/2022 15:57, Alexander Shenkin wrote :
>>
>> CSM is greyed out.  I think I need a graphics card
> 
> Is secure boot disabled? Legacy boot is not compatible with secure boot.

There's no way to completely disable secure boot on this mobo.  It's an 
Asus Prime B560M-A.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-28 21:32                                         ` Alexander Shenkin
@ 2022-06-28 21:57                                           ` Ram Ramesh
  2022-06-29  1:05                                             ` Alexander Shenkin
  0 siblings, 1 reply; 50+ messages in thread
From: Ram Ramesh @ 2022-06-28 21:57 UTC (permalink / raw)
  To: Alexander Shenkin, Linux Raid

On 6/28/22 16:32, Alexander Shenkin wrote:
>
> On 6/27/2022 7:26 AM, Pascal Hambourg wrote:
>> On 27/06/2022 15:57, Alexander Shenkin wrote :
>>>
>>> CSM is greyed out.  I think I need a graphics card
>>
>> Is secure boot disabled? Legacy boot is not compatible with secure boot.
>
> There's no way to completely disable secure boot on this mobo. It's an 
> Asus Prime B560M-A.

Can you boot a legacy install from a UEFI boot disk? Have you tried 
booting with SystemRescueCd or some such to boot your installed OS?

Have you considered converting to UEFI? CSM/legacy boot support is being 
dropped steadily. Sooner or later it will be extinct.

Regards
Ramesh


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-28 21:57                                           ` Ram Ramesh
@ 2022-06-29  1:05                                             ` Alexander Shenkin
  2022-06-29  2:10                                               ` Ram Ramesh
  0 siblings, 1 reply; 50+ messages in thread
From: Alexander Shenkin @ 2022-06-29  1:05 UTC (permalink / raw)
  To: Ram Ramesh, Linux Raid

Thanks all,
Once I installed a graphics card, I was able to enable CSM and I've 
booted successfully.  Thanks everyone for your help.  Converting to UEFI 
would be a good idea, though not sure how easy that will be given that 
/boot is a RAID1 array (how do you resize all the partitions, etc?).

Anyway, now looking for a network driver for this mobo...  ;-)

On 6/28/2022 2:57 PM, Ram Ramesh wrote:
> On 6/28/22 16:32, Alexander Shenkin wrote:
>>
>> On 6/27/2022 7:26 AM, Pascal Hambourg wrote:
>>> On 27/06/2022 15:57, Alexander Shenkin wrote :
>>>>
>>>> CSM is greyed out.  I think I need a graphics card
>>>
>>> Is secure boot disabled? Legacy boot is not compatible with secure boot.
>>
>> There's no way to completely disable secure boot on this mobo. It's an 
>> Asus Prime B560M-A.
> 
> Can you boot legacy install with UEFI boot disk? Have you  tried booting 
> with sysrescuecd or some such to boot your installed OS?
> 
> Have you considered converting UEFI? CSM/Legacy boot support is dropping 
> steadily. Sooner or later it will be extinct.
> 
> Regards
> Ramesh
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-29  1:05                                             ` Alexander Shenkin
@ 2022-06-29  2:10                                               ` Ram Ramesh
  2022-06-29 11:23                                                 ` Pascal Hambourg
  0 siblings, 1 reply; 50+ messages in thread
From: Ram Ramesh @ 2022-06-29  2:10 UTC (permalink / raw)
  To: Alexander Shenkin, Linux Raid

When all else fails, I use a USB3 NIC until the correct NIC drivers are 
found :-)

I think you can install a UEFI setup on another disk and boot your RAID 
from that install. If you do not have a spare disk, use a USB drive to 
experiment. They are cheap, and you only need a bare install (on USB) 
to get started.

I have converted several legacy installs to UEFI, though none was on 
RAID. I always leave 32G unused on every install/boot disk for crunch 
work like this, and I recommend it. Even the machine with RAID1 boot has 
the first 32G on each RAID1 disk unused (so that I can convert that 
install to UEFI too, when I decide to).

Note that you need GPT for a UEFI boot disk. So a separate disk/USB is 
best.

Ramesh


On 6/28/22 20:05, Alexander Shenkin wrote:
> Thanks all,
> Once I installed a graphics card, I was able to enable CSM and I've 
> booted successfully.  Thanks everyone for your help. Converting to 
> UEFI would be a good idea, though not sure how easy that will be given 
> that /boot is a RAID1 array (how do you resize all the partitions, etc?).
>
> Anyway, now looking for a network driver for this mobo...  ;-)
>
> On 6/28/2022 2:57 PM, Ram Ramesh wrote:
>> On 6/28/22 16:32, Alexander Shenkin wrote:
>>>
>>> On 6/27/2022 7:26 AM, Pascal Hambourg wrote:
>>>> On 27/06/2022 15:57, Alexander Shenkin wrote :
>>>>>
>>>>> CSM is greyed out.  I think I need a graphics card
>>>>
>>>> Is secure boot disabled? Legacy boot is not compatible with secure 
>>>> boot.
>>>
>>> There's no way to completely disable secure boot on this mobo. It's 
>>> an Asus Prime B560M-A.
>>
>> Can you boot legacy install with UEFI boot disk? Have you  tried 
>> booting with sysrescuecd or some such to boot your installed OS?
>>
>> Have you considered converting UEFI? CSM/Legacy boot support is 
>> dropping steadily. Sooner or later it will be extinct.
>>
>> Regards
>> Ramesh
>>


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Upgrading motherboard + CPU
  2022-06-29  2:10                                               ` Ram Ramesh
@ 2022-06-29 11:23                                                 ` Pascal Hambourg
  0 siblings, 0 replies; 50+ messages in thread
From: Pascal Hambourg @ 2022-06-29 11:23 UTC (permalink / raw)
  To: Ram Ramesh, Alexander Shenkin, Linux Raid

On 29/06/2022 04:10, Ram Ramesh wrote :
> 
> Note that you need GPT for UEFI boot disk.

This is only a requirement for Windows (and possibly broken UEFI 
implementations). The UEFI specification allows booting from a drive 
with a DOS/MBR partition table. Otherwise, assigning a DOS/MBR partition 
type code for the EFI system partition (0xef) would be pointless.
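
A sketch with sfdisk (the device name /dev/sdX is hypothetical and the
commands are destructive, so take this as illustration only):

```shell
# An EFI system partition on a DOS/MBR label: type 0xef marks the
# ESP, and a FAT filesystem makes it readable by the firmware.
sfdisk /dev/sdX <<'EOF'
label: dos
size=512MiB, type=ef, bootable
type=83
EOF
mkfs.vfat -F 32 /dev/sdX1
```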

> On 6/28/22 20:05, Alexander Shenkin wrote:
>> Once I installed a graphics card, I was able to enable CSM and I've 
>> booted successfully.  Thanks everyone for your help. Converting to 
>> UEFI would be a good idea, though not sure how easy that will be given 
>> that /boot is a RAID1 array (how do you resize all the partitions, etc?).

Setting up redundant EFI boot with software RAID is a pain and a hack, 
and resizing RAID partitions is a pain.
So I advise you to keep legacy boot as long as you can (but prepare for 
EFI boot on a test machine or VM).

^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2022-06-29 11:23 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-23 12:11 a new install - - - putting the system on raid o1bigtenor
2022-06-23 12:56 ` Wols Lists
2022-06-23 14:46   ` ESP and its file system (Re: a new install - - - putting the system on raid) Paul Menzel
2022-06-23 18:54   ` a new install - - - putting the system on raid Pascal Hambourg
2022-06-23 21:39     ` Wols Lists
2022-06-23 22:27       ` Pascal Hambourg
2022-06-23 23:44         ` Wol
2022-06-25  8:27           ` Pascal Hambourg
2022-06-25 11:41             ` Wols Lists
2022-06-26 21:58               ` Pascal Hambourg
2022-06-27 10:50               ` swapping (was "Re: a new install - - - putting the system on raid") David T-G
2022-06-24 18:20         ` a new install - - - putting the system on raid Roman Mamedov
2022-06-24 18:27           ` Upgrading motherboard + CPU Alexander Shenkin
2022-06-24 18:44             ` Roman Mamedov
2022-06-24 18:46               ` Alexander Shenkin
2022-06-24 19:14                 ` Wol
2022-06-24 20:23                   ` Alexander Shenkin
2022-06-24 22:00                     ` Wol
2022-06-24 22:06                       ` Alexander Shenkin
2022-06-24 22:42                         ` Ram Ramesh
2022-06-24 22:08                       ` Roman Mamedov
2022-06-25  7:14                         ` Wols Lists
2022-06-25  7:54                         ` Pascal Hambourg
2022-06-25 13:35                           ` Stephan
2022-06-25 17:10                             ` Wols Lists
2022-06-25 17:38                               ` Alexander Shenkin
2022-06-25 17:43                                 ` Wols Lists
2022-06-25 17:53                                   ` Alexander Shenkin
2022-06-25 18:41                                     ` Wols Lists
2022-06-25 22:37                                     ` Reindl Harald
2022-06-25 22:45                                       ` Roman Mamedov
2022-06-25 23:40                                         ` Reindl Harald
2022-06-25 23:50                                           ` Roman Mamedov
2022-06-25 23:56                                             ` Reindl Harald
     [not found]                                         ` <CAAMCDecEd1po2WpGT_SyimkJLoitRL-=RxKgDdsFA0LX7=2QuQ@mail.gmail.com>
2022-06-26 15:34                                           ` Alexander Shenkin
2022-06-26 15:44                                             ` Alexander Shenkin
2022-06-26 16:25                                               ` Andy Smith
2022-06-26 21:46                               ` Pascal Hambourg
2022-06-27  1:32                                 ` Alexander Shenkin
2022-06-27  8:49                                   ` Pascal Hambourg
2022-06-27 11:05                                   ` Roman Mamedov
     [not found]                                     ` <693d4b1c-ee58-4cc2-854b-4ae445ff7d24@Spark>
2022-06-27 14:26                                       ` Pascal Hambourg
2022-06-28 21:32                                         ` Alexander Shenkin
2022-06-28 21:57                                           ` Ram Ramesh
2022-06-29  1:05                                             ` Alexander Shenkin
2022-06-29  2:10                                               ` Ram Ramesh
2022-06-29 11:23                                                 ` Pascal Hambourg
2022-06-27  7:13                                 ` Stephan
2022-06-27 10:03                                   ` Pascal Hambourg
2022-06-25  8:00           ` a new install - - - putting the system on raid Pascal Hambourg

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.