* change UUID of RAID devices
@ 2022-09-12 15:04 Reindl Harald
2022-09-12 21:37 ` Wol
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-12 15:04 UTC (permalink / raw)
To: Linux RAID Mailing List
is it possible to change the UUID of RAID devices?
background: we have several machines with 4 disks (/boot RAID1, /
RAID10, /data RAID10)
the plan is to buy twice-as-large SSDs (currently HDDs), partition them from
a Live-ISO and dd-over-ssh the contents
at that time the RAID10 would be degraded, with two disks pulled out of the machines
besides the UUID, one interesting question is how to make sure the copy
of /etc/mdadm.conf contains "RAID1" instead of "RAID10"
the reason for that game is that the machines have been running for 10 years
now and all the new desktop hardware can't hold 4x3.5" disks, so just
putting them in a new one isn't possible
-----------------
[root@srv-rhsoft:~]$ cat /etc/mdadm.conf
MAILADDR root
HOMEHOST localhost.localdomain
AUTO +imsm +1.x -all
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=1d691642:baed26df:1d197496:4fb00ff8
ARRAY /dev/md1 level=raid10 num-devices=4 UUID=b7475879:c95d9a47:c5043c02:0c5ae720
ARRAY /dev/md2 level=raid10 num-devices=4 UUID=ea253255:cb915401:f32794ad:ce0fe396
-----------------
GRUB_CMDLINE_LINUX="quiet hpet=disable audit=0 rd.plymouth=0
plymouth.enable=0 rd.md.uuid=b7475879:c95d9a47:c5043c02:0c5ae720
rd.md.uuid=1d691642:baed26df:1d197496:4fb00ff8
rd.md.uuid=ea253255:cb915401:f32794ad:ce0fe396 rd.luks=0 rd.lvm=0
rd.dm=0 zswap.enabled=0 selinux=0 net.ifnames=0 biosdevname=0 noresume
hibernate=no printk.time=0 nmi_watchdog=0 acpi_osi=Linux
vconsole.font=latarcyrheb-sun16 vconsole.keymap=de-nodeadkeys
locale.LANG=de_DE.UTF-8"
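[Editorial note: mdadm can rewrite an array's UUID at assembly time via --update=uuid, which is one hedged answer to the question above. A sketch; array and member device names are placeholders for this setup, and a mounted root array would have to be handled from the Live-ISO since it cannot be stopped while in use:]

```shell
# Stop the array first; --update only takes effect during assembly.
mdadm --stop /dev/md1

# Re-assemble and stamp the superblocks with a chosen UUID
# (omit --uuid= to let mdadm pick a random one instead).
mdadm --assemble /dev/md1 --update=uuid \
      --uuid=b7475879:c95d9a47:c5043c02:0c5ae720 /dev/sda2 /dev/sdb2

# Verify the superblock now carries the requested UUID.
mdadm --detail /dev/md1 | grep UUID
```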
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: change UUID of RAID devices
2022-09-12 15:04 change UUID of RAID devices Reindl Harald
@ 2022-09-12 21:37 ` Wol
2022-09-13 10:28 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Wol @ 2022-09-12 21:37 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 12/09/2022 16:04, Reindl Harald wrote:
> the reason for that game is that the machines have been running for 10 years
> now and all the new desktop hardware can't hold 4x3.5" disks, so just
> putting them in a new one isn't possible
How many SATA ports does the mobo have? Can you --replace onto the new
drives (especially if it's raid-10!), then just fail the remaining two
drives?
Iirc raid-10 doesn't require the drives to be the same size, so provided
the two new drives are big enough, that should just work.
Then with just two drives you change the raid to raid-1.
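[Editorial note: a sketch of the --replace sequence suggested here, assuming /dev/sde2 is the matching partition on a new larger drive and /dev/sda2 an old member; mdadm >= 3.3 is needed for --replace --with:]

```shell
# Add the new drive's partition as a spare, then ask md to copy
# the old member onto it directly (full redundancy is kept meanwhile,
# unlike a plain fail-then-rebuild).
mdadm /dev/md1 --add /dev/sde2
mdadm /dev/md1 --replace /dev/sda2 --with /dev/sde2

# When the copy finishes the old member is marked faulty; drop it.
mdadm /dev/md1 --remove /dev/sda2
```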
Cheers,
Wol
* Re: change UUID of RAID devices
2022-09-12 21:37 ` Wol
@ 2022-09-13 10:28 ` Reindl Harald
2022-09-13 10:39 ` Pascal Hambourg
2022-09-13 15:37 ` Reindl Harald
0 siblings, 2 replies; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 10:28 UTC (permalink / raw)
To: Wol, Linux RAID Mailing List
On 12.09.22 at 23:37, Wol wrote:
> On 12/09/2022 16:04, Reindl Harald wrote:
>> the reason for that game is that the machines have been running for 10 years
>> now and all the new desktop hardware can't hold 4x3.5" disks, so
>> just putting them in a new one isn't possible
>
> How many SATA ports does the mobo have? Can you --replace onto the new
> drives (especially if it's raid-10!), then just fail the remaining two
> drives?
>
> Iirc raid-10 doesn't require the drives to be the same size, so provided
> the two new drives are big enough, that should just work.
>
> Then with just two drives you change the raid to raid-1
i had this idea too; the drives have 3 partitions in that order, two
machines with 4x1TB and two machines with 4x2TB
/boot is a RAID1, the other two RAIDs are RAID10
/dev/md0 ext4 482M 77M 401M 17% /boot
/dev/md1 ext4 29G 8,9G 20G 31% /
/dev/md2 ext4 3,6T 2,0T 1,7T 55% /data
/dev/md0 ext4 474M 45M 426M 10% /boot
/dev/md1 ext4 39G 22G 17G 58% /
/dev/md2 ext4 1,8T 1,1T 699G 61% /data
if i understand you correctly:
* replace two disks with double-sized SSDs
* partition / and /data double sized
* "mdadm /dev/md1 --add /dev/sdcX" the double-sized ones
* wait for the resync
* finally remove the two old half-sized disks
* reshape md1 and md2 to RAID1
how would the command look for "Then with just two drives you change the
raid to raid-1"?
----------
BTW: currently the machines are BIOS-boot - am i right that the 2 TB
limitation only requires that the parts needed for booting are on the
first 2 TB, and that i can use 4 TB SSDs on the two bigger machines?
in that case i think i would need GPT partitioning; does GRUB2
support booting from GPT-partitioned disks in BIOS mode?
* Re: change UUID of RAID devices
2022-09-13 10:28 ` Reindl Harald
@ 2022-09-13 10:39 ` Pascal Hambourg
2022-09-13 11:12 ` Reindl Harald
2022-09-13 15:37 ` Reindl Harald
1 sibling, 1 reply; 31+ messages in thread
From: Pascal Hambourg @ 2022-09-13 10:39 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 at 12:28, Reindl Harald wrote:
>
> BTW: currently the machines are BIOS-boot - am i right that the 2 TB
> limitation only requires that the parts which are needed for booting are
> on the first 2 TB and i can use 4 TB SSD's on the two bigger machines?
Which 2 TB limitation? EDD BIOS calls use 64-bit LBA and should not
have any practical limitation unless the BIOS implementation is flawed.
> in that case i think i would need GPT partitioning and does GRUB2
> support booting from GPT-partitioned disks in BIOS-mode?
Yes, but it requires a "BIOS boot" partition for the core image (usually
less than 100 kB, so 1 MB is plenty enough). Also some flawed BIOS
require that a legacy partition entry in the protective MBR has the
"boot" flag set.
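[Editorial note: a minimal sketch of such a layout, assuming a blank disk at /dev/sda and GNU sgdisk and parted at hand:]

```shell
# Create a 1 MiB "BIOS boot" partition (GPT type code EF02) to hold
# GRUB's core image on a GPT disk booted in legacy BIOS mode.
sgdisk --new=1:2048:+1M --typecode=1:EF02 /dev/sda

# Work around flawed BIOSes: set the boot flag on the protective MBR entry.
parted /dev/sda disk_set pmbr_boot on

# Install legacy-BIOS GRUB; the core image is embedded in the EF02 partition.
grub-install --target=i386-pc /dev/sda
```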
* Re: change UUID of RAID devices
2022-09-13 10:39 ` Pascal Hambourg
@ 2022-09-13 11:12 ` Reindl Harald
2022-09-13 11:17 ` Pascal Hambourg
2022-09-13 17:39 ` Wols Lists
0 siblings, 2 replies; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 11:12 UTC (permalink / raw)
To: Pascal Hambourg, Linux RAID Mailing List
On 13.09.22 at 12:39, Pascal Hambourg wrote:
> On 13/09/2022 at 12:28, Reindl Harald wrote:
>>
>> BTW: currently the machines are BIOS-boot - am i right that the 2 TB
>> limitation only requires that the parts which are needed for booting
>> are on the first 2 TB and i can use 4 TB SSD's on the two bigger
>> machines?
>
> Which 2 TB limitation ? EDD BIOS calls use 64-bit LBA and should not
> have any practical limitation unless the BIOS implementation is flawed.
>
>> in that case i think i would need GPT partitioning and does GRUB2
>> support booting from GPT-partitioned disks in BIOS-mode?
>
> Yes, but it requires a "BIOS boot" partition for the core image (usually
> less than 100 kB, so 1 MB is plenty enough). Also some flawed BIOS
> require that a legacy partition entry in the protective MBR has the
> "boot" flag set
https://www.cyberciti.biz/tips/fdisk-unable-to-create-partition-greater-2tb.html
"For example, you cannot create 3TB or 4TB partition size (RAID based)
using the fdisk command. It will not allow you to create a partition
that is greater than 2TB" makes me nervous
how to get a > 3 TB partition for /dev/md2
--------------------
and finally how would the command look for "Then with just two drives
you change the raid to raid-1"?
the first two drives are ordered so i can start with 1 out of 4 machines ASAP,
given that the machine in front of me has been running 365/24 since 2011/06...
* Re: change UUID of RAID devices
2022-09-13 11:12 ` Reindl Harald
@ 2022-09-13 11:17 ` Pascal Hambourg
2022-09-13 11:30 ` Reindl Harald
2022-09-13 17:39 ` Wols Lists
1 sibling, 1 reply; 31+ messages in thread
From: Pascal Hambourg @ 2022-09-13 11:17 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 at 13:12, Reindl Harald wrote:
>
> On 13.09.22 at 12:39, Pascal Hambourg wrote:
>> On 13/09/2022 at 12:28, Reindl Harald wrote:
>>>
>>> BTW: currently the machines are BIOS-boot - am i right that the 2 TB
>>> limitation only requires that the parts which are needed for booting
>>> are on the first 2 TB and i can use 4 TB SSD's on the two bigger
>>> machines?
>>
>> Which 2 TB limitation ? EDD BIOS calls use 64-bit LBA and should not
>> have any practical limitation unless the BIOS implementation is flawed.
(...)
> "For example, you cannot create 3TB or 4TB partition size (RAID based)
> using the fdisk command. It will not allow you to create a partition
> that is greater than 2TB" makes me nervous
This is a DOS/MBR partition scheme limitation, not a BIOS limitation,
and irrelevant with GPT partition scheme.
> how to get a > 3 TB partition for /dev/md2
Use GPT.
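[Editorial note: the 2 TB figure comes from the MBR format itself. Partition start and size are stored as 32-bit sector counts, so with 512-byte sectors the addressable maximum is 2^32 x 512 bytes. A quick check:]

```python
# MBR stores partition start/size as 32-bit LBA sector counts.
SECTOR = 512          # bytes per logical sector on these disks
MAX_SECTORS = 2**32   # largest value a 32-bit field can hold

limit_bytes = MAX_SECTORS * SECTOR
print(limit_bytes)              # 2199023255552
print(limit_bytes / 1024**4)    # 2.0 (TiB) -- hence "the 2 TB limit"
```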
* Re: change UUID of RAID devices
2022-09-13 11:17 ` Pascal Hambourg
@ 2022-09-13 11:30 ` Reindl Harald
2022-09-13 11:35 ` Pascal Hambourg
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 11:30 UTC (permalink / raw)
To: Pascal Hambourg, Linux RAID Mailing List
On 13.09.22 at 13:17, Pascal Hambourg wrote:
> On 13/09/2022 at 13:12, Reindl Harald wrote:
>>
>> On 13.09.22 at 12:39, Pascal Hambourg wrote:
>>> On 13/09/2022 at 12:28, Reindl Harald wrote:
>>>>
>>>> BTW: currently the machines are BIOS-boot - am i right that the 2 TB
>>>> limitation only requires that the parts which are needed for booting
>>>> are on the first 2 TB and i can use 4 TB SSD's on the two bigger
>>>> machines?
>>>
>>> Which 2 TB limitation ? EDD BIOS calls use 64-bit LBA and should not
>>> have any practical limitation unless the BIOS implementation is flawed.
> (...)
>> "For example, you cannot create 3TB or 4TB partition size (RAID based)
>> using the fdisk command. It will not allow you to create a partition
>> that is greater than 2TB" makes me nervous
>
> This is a DOS/MBR partition scheme limitation, not a BIOS limitation,
> and irrelevant with GPT partition scheme.
>
>> how to get a > 3 TB partition for /dev/md2
>
> Use GPT
yeah but the goal is to convert an existing RAID1/RAID10/RAID10 setup
with 4x2 TB drives to RAID1/RAID1/RAID1 with 2x4 TB drives and so my
/boot won't work with GPT :-)
the two smaller machines are easier because they end up with 2 TB drives - i
am seriously thinking about a USB stick for /boot with regular dd-images,
leaving the existing RAID1 /boot ignored
[root@srv-rhsoft:~]$ df
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 ext4 482M 77M 401M 17% /boot
/dev/md1 ext4 29G 8.9G 20G 31% /
/dev/md2 ext4 3.6T 2.0T 1.7T 55% /data
* Re: change UUID of RAID devices
2022-09-13 11:30 ` Reindl Harald
@ 2022-09-13 11:35 ` Pascal Hambourg
2022-09-13 11:39 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Pascal Hambourg @ 2022-09-13 11:35 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 at 13:30, Reindl Harald wrote:
> On 13.09.22 at 13:17, Pascal Hambourg wrote:
>> On 13/09/2022 at 13:12, Reindl Harald wrote:
>>> On 13.09.22 at 12:39, Pascal Hambourg wrote:
>>>> On 13/09/2022 at 12:28, Reindl Harald wrote:
>>>>>
>>>>> BTW: currently the machines are BIOS-boot - am i right that the 2
>>>>> TB limitation only requires that the parts which are needed for
>>>>> booting are on the first 2 TB and i can use 4 TB SSD's on the two
>>>>> bigger machines?
>>>>
>>>> Which 2 TB limitation ? EDD BIOS calls use 64-bit LBA and should not
>>>> have any practical limitation unless the BIOS implementation is flawed.
>> (...)
>>> "For example, you cannot create 3TB or 4TB partition size (RAID
>>> based) using the fdisk command. It will not allow you to create a
>>> partition that is greater than 2TB" makes me nervous
>>
>> This is a DOS/MBR partition scheme limitation, not a BIOS limitation,
>> and irrelevant with GPT partition scheme.
>>
>>> how to get a > 3 TB partition for /dev/md2
>>
>> Use GPT
>
> yeah but the goal is to convert a existing RAID1/RAID10/RAID10 setup
> with 4x2 TB drives to RAID1/RAID1/RAID1 with 2x4 Tb drives and so my
> /boot won't work with GPT :-)
Why wouldn't your /boot work with GPT? It works for me.
* Re: change UUID of RAID devices
2022-09-13 11:35 ` Pascal Hambourg
@ 2022-09-13 11:39 ` Reindl Harald
2022-09-13 11:48 ` Pascal Hambourg
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 11:39 UTC (permalink / raw)
To: Pascal Hambourg, Linux RAID Mailing List
On 13.09.22 at 13:35, Pascal Hambourg wrote:
> On 13/09/2022 at 13:30, Reindl Harald wrote:
>> On 13.09.22 at 13:17, Pascal Hambourg wrote:
>>> On 13/09/2022 at 13:12, Reindl Harald wrote:
>>>> On 13.09.22 at 12:39, Pascal Hambourg wrote:
>>>>> On 13/09/2022 at 12:28, Reindl Harald wrote:
>>>>>>
>>>>>> BTW: currently the machines are BIOS-boot - am i right that the 2
>>>>>> TB limitation only requires that the parts which are needed for
>>>>>> booting are on the first 2 TB and i can use 4 TB SSD's on the two
>>>>>> bigger machines?
>>>>>
>>>>> Which 2 TB limitation ? EDD BIOS calls use 64-bit LBA and should
>>>>> not have any practical limitation unless the BIOS implementation is
>>>>> flawed.
>>> (...)
>>>> "For example, you cannot create 3TB or 4TB partition size (RAID
>>>> based) using the fdisk command. It will not allow you to create a
>>>> partition that is greater than 2TB" makes me nervous
>>>
>>> This is a DOS/MBR partition scheme limitation, not a BIOS limitation,
>>> and irrelevant with GPT partition scheme.
>>>
>>>> how to get a > 3 TB partition for /dev/md2
>>>
>>> Use GPT
>>
>> yeah but the goal is to convert a existing RAID1/RAID10/RAID10 setup
>> with 4x2 TB drives to RAID1/RAID1/RAID1 with 2x4 Tb drives and so my
>> /boot won't work with GPT :-)
>
> Why wouldn't your /boot work with GPT ? It works for me
because you said so?
[root@srv-rhsoft:~]$ fdisk -l /dev/sda
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 860
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000d9ef2
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1026047 1024000 500M fd Linux raid autodetect
/dev/sda2 1026048 31746047 30720000 14.6G fd Linux raid autodetect
/dev/sda3 31746048 3906971647 3875225600 1.8T fd Linux raid autodetect
-------- Forwarded Message --------
Subject: Re: change UUID of RAID devices
Date: Tue, 13 Sep 2022 12:39:50 +0200
From: Pascal Hambourg <pascal@plouf.fr.eu.org>
To: Reindl Harald <h.reindl@thelounge.net>, Linux RAID Mailing List
<linux-raid@vger.kernel.org>
Yes, but it requires a "BIOS boot" partition for the core image (usually
less than 100 kB, so 1 MB is plenty enough). Also some flawed BIOS
require that a legacy partition entry in the protective MBR has the
"boot" flag set.
* Re: change UUID of RAID devices
2022-09-13 11:39 ` Reindl Harald
@ 2022-09-13 11:48 ` Pascal Hambourg
2022-09-13 11:50 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Pascal Hambourg @ 2022-09-13 11:48 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 at 13:39, Reindl Harald wrote:
> On 13.09.22 at 13:35, Pascal Hambourg wrote:
>> On 13/09/2022 at 13:30, Reindl Harald wrote:
>>> On 13.09.22 at 13:17, Pascal Hambourg wrote:
>>
>> Why wouldn't your /boot work with GPT ? It works for me
>
> because you said so?
No, I didn't say so.
> -------- Forwarded Message --------
> Subject: Re: change UUID of RAID devices
> Date: Tue, 13 Sep 2022 12:39:50 +0200
> From: Pascal Hambourg <pascal@plouf.fr.eu.org>
> To: Reindl Harald <h.reindl@thelounge.net>, Linux RAID Mailing List
> <linux-raid@vger.kernel.org>
>
> Yes, but it requires a "BIOS boot" partition for the core image (usually
> less than 100 kB, so 1 MB is plenty enough). Also some flawed BIOS
> require that a legacy partition entry in the protective MBR has the
> "boot" flag set.
Legacy boot on GPT has some requirements, but it works.
* Re: change UUID of RAID devices
2022-09-13 11:48 ` Pascal Hambourg
@ 2022-09-13 11:50 ` Reindl Harald
2022-09-13 12:03 ` Pascal Hambourg
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 11:50 UTC (permalink / raw)
To: Pascal Hambourg, Linux RAID Mailing List
On 13.09.22 at 13:48, Pascal Hambourg wrote:
> On 13/09/2022 at 13:39, Reindl Harald wrote:
>> On 13.09.22 at 13:35, Pascal Hambourg wrote:
>>> On 13/09/2022 at 13:30, Reindl Harald wrote:
>>>> On 13.09.22 at 13:17, Pascal Hambourg wrote:
>>>
>>> Why wouldn't your /boot work with GPT ? It works for me
>>
>> because you said so?
>
> No, I didn't say so.
>
>> -------- Forwarded Message --------
>> Subject: Re: change UUID of RAID devices
>> Date: Tue, 13 Sep 2022 12:39:50 +0200
>> From: Pascal Hambourg <pascal@plouf.fr.eu.org>
>> To: Reindl Harald <h.reindl@thelounge.net>, Linux RAID Mailing List
>> <linux-raid@vger.kernel.org>
>>
>> Yes, but it requires a "BIOS boot" partition for the core image
>> (usually less than 100 kB, so 1 MB is plenty enough). Also some flawed
>> BIOS require that a legacy partition entry in the protective MBR has
>> the "boot" flag set.
>
> Legacy boot on GPT has some requirements, but it works
but we are talking about a LIVE-migration/reshape of existing disks with
no place left for another partition
* Re: change UUID of RAID devices
2022-09-13 11:50 ` Reindl Harald
@ 2022-09-13 12:03 ` Pascal Hambourg
2022-09-13 12:21 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Pascal Hambourg @ 2022-09-13 12:03 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 at 13:50, Reindl Harald wrote:
>
> On 13.09.22 at 13:48, Pascal Hambourg wrote:
>>
>> Legacy boot on GPT has some requirements, but it works
>
> but we are talking about a LIVE-migration/reshape of existing disks with
> no place left for another partition
So what? Aren't you going to create a GPT partition table on your 4-TB
drives? Otherwise you won't be able to use the space beyond 2 TiB. (*)
A GPT partition table supports up to 128 partitions by default.
(*) In the DOS/MBR partition scheme, Linux supports a partition that
ends beyond 2 TiB (up to 4 TiB), but this is a non-standard trick and
probably not supported by usual partitioning tools.
* Re: change UUID of RAID devices
2022-09-13 12:03 ` Pascal Hambourg
@ 2022-09-13 12:21 ` Reindl Harald
2022-09-13 12:47 ` Pascal Hambourg
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 12:21 UTC (permalink / raw)
To: Pascal Hambourg, Linux RAID Mailing List
On 13.09.22 at 14:03, Pascal Hambourg wrote:
> On 13/09/2022 at 13:50, Reindl Harald wrote:
>>
>> On 13.09.22 at 13:48, Pascal Hambourg wrote:
>>>
>>> Legacy boot on GPT has some requirements, but it works
>>
>> but we are talking about a LIVE-migration/reshape of existing disks
>> with no place left for another partition
>
> So what ? Aren't you going to create a GPT partition table on your 4-TB
> drives ? Else you won't be able to use the space beyond 2 TiB. (*)
> A GPT partition table supports up to 128 partitions by default.
it looks like i won't have a choice, so the easiest option would
be to migrate /boot completely to a USB stick and simply ignore the current
/boot RAID1 which is just 482M small
since the new machines in the next step will only support UEFI and
the EFI system partition can't live on a RAID, it would end up there over
time anyway
the 4 machines are two pairs (home office and office for two people)
cloned by removing two drives and rebuilding the array on both, so the
boot part is always identical and the stick from one can boot the other
so for now my last remaining question is "how would the command look for
"Then with just two drives you change the raid to raid-1"
> (*) In the DOS/MBR partition scheme, Linux supports that a partition
> ends beyond 2 TiB up to 4 TiB but this is a non standard trick and
> probably not supported by usual partitioning tools
meh
* Re: change UUID of RAID devices
2022-09-13 12:21 ` Reindl Harald
@ 2022-09-13 12:47 ` Pascal Hambourg
2022-09-13 13:02 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Pascal Hambourg @ 2022-09-13 12:47 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 at 14:21, Reindl Harald wrote:
> On 13.09.22 at 14:03, Pascal Hambourg wrote:
>> On 13/09/2022 at 13:50, Reindl Harald wrote:
>>> On 13.09.22 at 13:48, Pascal Hambourg wrote:
>>>>
>>>> Legacy boot on GPT has some requirements, but it works
>>>
>>> but we are talking about a LIVE-migration/reshape of existing disks
>>> with no place left for another partition
>>
>> So what ? Aren't you going to create a GPT partition table on your
>> 4-TB drives ? Else you won't be able to use the space beyond 2 TiB. (*)
>> A GPT partition table supports up to 128 partitions by default.
>
> i won't have a choice as it looks like and so the easiest choice would
> be migrate /boot completly to a USB-stick and simply ignore the current
> /boot RAID1 which is just 482M small
I don't see how it is easier. Also, USB sticks are not reliable.
However you are right that you can get rid of the current /boot array; I
don't see the need for a separate /boot, its contents could be included
in the root filesystem.
> since finally the new machines in the next step only support UEFI and
> the uefi-system partition can't live on a RAID it would end there over
> time anyways
Software is not natively supported by EFI boot but there are a few
tricks to set up a redundant EFI boot: create independent EFI partitions
on each disk, or create a RAID 1 array with metadata 1.0 (at the end of
the partition) so that the UEFI firmware can see each RAID partition as
a normal EFI partition with a FAT filesystem.
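[Editorial note: the metadata-1.0 trick might look like this sketch; partition names are assumptions. With the superblock at the end of the members, the firmware sees each one as a plain FAT-formatted EFI partition:]

```shell
# RAID1 with metadata 1.0: superblock lives at the END of each member,
# so the start of the partition looks like an ordinary FAT filesystem.
mdadm --create /dev/md3 --level=1 --metadata=1.0 \
      --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.vfat -F32 /dev/md3

# Caveat: anything the firmware writes to a single member bypasses md
# and desynchronizes the mirror, so treat the ESP as mostly read-only.
```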
> so for now my last remaining question is "how would the command look for
> "Then with just two drives you change the raid to raid-1"
I would not convert existing arrays. Rather create new arrays on the new
disks and copy the data.
* Re: change UUID of RAID devices
2022-09-13 12:47 ` Pascal Hambourg
@ 2022-09-13 13:02 ` Reindl Harald
2022-09-13 14:12 ` Pascal Hambourg
2022-09-13 19:32 ` Roman Mamedov
0 siblings, 2 replies; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 13:02 UTC (permalink / raw)
To: Pascal Hambourg, Linux RAID Mailing List
On 13.09.22 at 14:47, Pascal Hambourg wrote:
> On 13/09/2022 at 14:21, Reindl Harald wrote:
>> On 13.09.22 at 14:03, Pascal Hambourg wrote:
>>> On 13/09/2022 at 13:50, Reindl Harald wrote:
>>>> On 13.09.22 at 13:48, Pascal Hambourg wrote:
>>>>>
>>>>> Legacy boot on GPT has some requirements, but it works
>>>>
>>>> but we are talking about a LIVE-migration/reshape of existing disks
>>>> with no place left for another partition
>>>
>>> So what ? Aren't you going to create a GPT partition table on your
>>> 4-TB drives ? Else you won't be able to use the space beyond 2 TiB. (*)
>>> A GPT partition table supports up to 128 partitions by default.
>>
>> i won't have a choice as it looks like and so the easiest choice would
>> be migrate /boot completly to a USB-stick and simply ignore the
>> current /boot RAID1 which is just 482M small
>
> I don't see how it is easier. Also, USB sticks are not reliable
reliable enough for /boot, and i've been running a HP microserver with the
whole OS on a USB stick since 2016, keeping only the RAID10 data on the 4
drives...
> However you are right that you can get rid of the current /boot array; I
> don't see the need for a separate /boot, its contents could be included
> in the root filesystem.
and the initrd lives where?
chicken / egg
>> since finally the new machines in the next step only support UEFI and
>> the uefi-system partition can't live on a RAID it would end there over
>> time anyways
>
> Software is not natively supported by EFI boot but there are a few
> tricks to set up a redundant EFI boot: create independent EFI partitions
> on each disk, or create a RAID 1 array with metadata 1.0 (at the end of
> the partition) so that the UEFI firmware can see each RAID partition as
> a normal EFI partition with a FAT filesystem.
sounds all not appealing
>> so for now my last remaining question is "how would the command look
>> for "Then with just two drives you change the raid to raid-1"
>
> I would not convert existing arrays. Rather create new arrays on the new
> disks and copy the data
i want my identical machines to stay as they are with all their UUIDs,
which is the main topic here
it's not funny when you are used to rsyncing your /etc/fstab over 11 years
and doing so would suddenly lead to an unbootable system on the other side
in a perfect world new hardware would still support 4 SATA drives,
UEFI would be able to boot from a RAID1 like BIOS boot does no matter
which of the 4 drives are present, and the hardware replacement would be:
insert the 4 old disks and power on
all that new crap sucks completely
* Re: change UUID of RAID devices
2022-09-13 13:02 ` Reindl Harald
@ 2022-09-13 14:12 ` Pascal Hambourg
2022-09-13 19:32 ` Roman Mamedov
1 sibling, 0 replies; 31+ messages in thread
From: Pascal Hambourg @ 2022-09-13 14:12 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 at 15:02, Reindl Harald wrote:
> On 13.09.22 at 14:47, Pascal Hambourg wrote:
>
>> However you are right that you can get rid of the current /boot array;
>> I don't see the need for a separate /boot, its contents could be
>> included in the root filesystem.
>
> and the initrd lives where?
> chicken / egg
In the /boot directory, beside the kernel image. What's different with
the initrd? I don't see any chicken & egg problem. If GRUB can boot the
kernel image and initrd from a separate /boot RAID array, it can do the
same from a root RAID array.
>>> since finally the new machines in the next step only support UEFI and
>>> the uefi-system partition can't live on a RAID it would end there
>>> over time anyways
>>
>> Software is not natively supported by EFI boot but there are a few
^^^
Oops ! I meant "software RAID".
>> tricks to set up a redundant EFI boot: create independent EFI
>> partitions on each disk, or create a RAID 1 array with metadata 1.0
>> (at the end of the partition) so that the UEFI firmware can see each
>> RAID partition as a normal EFI partition with a FAT filesystem.
>
> sounds all not appealing
Yes, but I know no cleaner way to achieve UEFI boot redundancy, which is
desirable.
* Re: change UUID of RAID devices
2022-09-13 10:28 ` Reindl Harald
2022-09-13 10:39 ` Pascal Hambourg
@ 2022-09-13 15:37 ` Reindl Harald
1 sibling, 0 replies; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 15:37 UTC (permalink / raw)
To: Wol, Linux RAID Mailing List
On 13.09.22 at 12:28, Reindl Harald wrote:
> On 12.09.22 at 23:37, Wol wrote:
>> On 12/09/2022 16:04, Reindl Harald wrote:
>>> the reason for that game is that the machines have been running for 10
>>> years now and all the new desktop hardware can't hold 4x3.5" disks,
>>> so just putting them in a new one isn't possible
>>
>> How many SATA ports does the mobo have? Can you --replace onto the new
>> drives (especially if it's raid-10!), then just fail the remaining two
>> drives?
>>
>> Iirc raid-10 doesn't require the drives to be the same size, so
>> provided the two new drives are big enough, that should just work.
>>
>> Then with just two drives you change the raid to raid-1
[root@testserver:~]$ mdadm /dev/md0 --grow --level=1
mdadm: RAID10 can only be changed to RAID0
tested in a virtual machine with two drives replaced by double-sized ones
so we are back at the dd/ssh game; how do i set the UUIDs identical?
* Re: change UUID of RAID devices
2022-09-13 11:12 ` Reindl Harald
2022-09-13 11:17 ` Pascal Hambourg
@ 2022-09-13 17:39 ` Wols Lists
2022-09-13 18:03 ` Reindl Harald
1 sibling, 1 reply; 31+ messages in thread
From: Wols Lists @ 2022-09-13 17:39 UTC (permalink / raw)
To: Reindl Harald, Pascal Hambourg, Linux RAID Mailing List
On 13/09/2022 12:12, Reindl Harald wrote:
>
> "For example, you cannot create 3TB or 4TB partition size (RAID based)
> using the fdisk command. It will not allow you to create a partition
> that is greater than 2TB" makes me nervous
>
> how to get a > 3 TB partition for /dev/md2
>
> --------------------
>
> and finally how would the command look for "Then with just two drives
> you change the raid to raid-1"?
>
> the first two drives are ordered to start with 1 out of 4 machines ASAP
> given that the machine in front of me is running since 2011/06 365/24......
Dare I suggest you read the raid wiki site?
In particular
https://raid.wiki.kernel.org/index.php/Setting_up_a_(new)_system
https://raid.wiki.kernel.org/index.php/Converting_an_existing_system
Also a very good read ...
https://raid.wiki.kernel.org/index.php/System2020
Which is the system I'm typing this on.
These pages don't all jibe with what I remember writing, but read them
carefully, make sure you understand what is going on, and more
importantly WHY, and you should be good to go.
And when you're wondering how to go from the 4-drive raid-10 to the
2-drive raid-1, you should be able to just fail/remove the two small
drives and everything will migrate to your two new big drives, and then
it's just whatever the command is to convert between raid levels. The
drives will already be in a raid-1 layout, so converting from 10 to 1
will just be a change of metadata.
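[Editorial note: spelled out, that sequence might look like the following sketch, with /dev/sda3 and /dev/sdb3 standing in for the two small members of /dev/md2. Note that elsewhere in the thread mdadm rejects the final level change for RAID10 ("RAID10 can only be changed to RAID0"), so that last step remains the open question:]

```shell
# Fail and remove the two remaining small drives from the (near-2) RAID10;
# the data has already migrated to the two big drives via --replace.
mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3
mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3

# Intended level change from the degraded RAID10 to a 2-drive RAID1:
mdadm --grow /dev/md2 --level=1
```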
Cheers,
Wol
* Re: change UUID of RAID devices
2022-09-13 17:39 ` Wols Lists
@ 2022-09-13 18:03 ` Reindl Harald
2022-09-13 19:44 ` Wol
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 18:03 UTC (permalink / raw)
To: Wols Lists, Pascal Hambourg, Linux RAID Mailing List
On 13.09.22 at 19:39, Wols Lists wrote:
> On 13/09/2022 12:12, Reindl Harald wrote:
>>
>> "For example, you cannot create 3TB or 4TB partition size (RAID based)
>> using the fdisk command. It will not allow you to create a partition
>> that is greater than 2TB" makes me nervous
>>
>> how to get a > 3 TB partition for /dev/md2
>>
>> --------------------
>>
>> and finally how would the command look for "Then with just two drives
>> you change the raid to raid-1"?
>>
>> the first two drives are ordered to start with 1 out of 4 machines
>> ASAP given that the machine in front of me is running since 2011/06
>> 365/24......
>
> Dare I suggest you read the raid wiki site?
>
> In particular
> https://raid.wiki.kernel.org/index.php/Setting_up_a_(new)_system
> https://raid.wiki.kernel.org/index.php/Converting_an_existing_system
>
> Also a very good read ...
> https://raid.wiki.kernel.org/index.php/System2020
> Which is the system I'm typing this on.
>
> These pages don't all jibe with what I remember writing, but read them
> carefully, make sure you understand what is going on, and more
> importantly WHY, and you should be good to go.
>
> And when you're wondering how to go from the 4-drive raid-10 to the
> 2-drive raid-1, you should be able to just fail/remove the two small
> drives and everything will migrate to your two new big drives, and then
> it's just whatever the command is to convert between raid levels. The
> drives will already be in a raid-1 layout, so converting from 10 to 1
> will just be a change of metadata
if it's that easy, why isn't mdadm doing it?
[root@testserver:~]$ mdadm /dev/md0 --grow --level=1
mdadm: RAID10 can only be changed to RAID0
virtual machine with two drives replaced by double-sized ones
* Re: change UUID of RAID devices
2022-09-13 13:02 ` Reindl Harald
2022-09-13 14:12 ` Pascal Hambourg
@ 2022-09-13 19:32 ` Roman Mamedov
2022-09-13 19:54 ` Reindl Harald
1 sibling, 1 reply; 31+ messages in thread
From: Roman Mamedov @ 2022-09-13 19:32 UTC (permalink / raw)
To: Reindl Harald; +Cc: Pascal Hambourg, Linux RAID Mailing List
On Tue, 13 Sep 2022 15:02:41 +0200
Reindl Harald <h.reindl@thelounge.net> wrote:
> > I would not convert existing arrays. Rather create new arrays on the new
> > disks and copy the data
> i want my identical machines to stay as they are with all their UUIDs
> which is the main topic here
>
it's not funny when you have been used to rsyncing your /etc/fstab for 11
years that doing so would lead to an unbootable system on the other side
For this I'd suggest using LABEL=rootfs (and so on) in fstab, instead of
UUIDs.
Or with LVM, /dev/vgname/lvname.
It's kind of the point of UUIDs that they are supposed to be (even globally)
unique, and there should not be the same UUID on two different machines.
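For illustration, an fstab along those lines might look like the fragment below (the label names are made-up examples; for ext4 they could be set with e2label):

```
# /etc/fstab -- identical on every machine; each machine resolves the
# labels to its own local filesystems
LABEL=boot    /boot   ext4  defaults   1 2
LABEL=rootfs  /       ext4  defaults   1 1
LABEL=data    /data   ext4  defaults   1 2
```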
--
With respect,
Roman
* Re: change UUID of RAID devices
2022-09-13 18:03 ` Reindl Harald
@ 2022-09-13 19:44 ` Wol
2022-09-13 19:53 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Wol @ 2022-09-13 19:44 UTC (permalink / raw)
To: Reindl Harald, Linux RAID Mailing List
On 13/09/2022 19:03, Reindl Harald wrote:
>
>
> Am 13.09.22 um 19:39 schrieb Wols Lists:
>> On 13/09/2022 12:12, Reindl Harald wrote:
>>>
>>> "For example, you cannot create 3TB or 4TB partition size (RAID
>>> based) using the fdisk command. It will not allow you to create a
>>> partition that is greater than 2TB" makes me nervous
>>>
>>> how to get a > 3 TB partition for /dev/md2
>>>
>>> --------------------
>>>
>>> and finally how would the command look for "Then with just two drives
>>> you change the raid to raid-1"?
>>>
>>> the first two drives are ordered to start with 1 out of 4 machines
>>> ASAP given that the machine in front of me is running since 2011/06
>>> 365/24......
>>
>> Dare I suggest you read the raid wiki site?
>>
>> In particular
>> https://raid.wiki.kernel.org/index.php/Setting_up_a_(new)_system
>> https://raid.wiki.kernel.org/index.php/Converting_an_existing_system
>>
>> Also a very good read ...
>> https://raid.wiki.kernel.org/index.php/System2020
>> Which is the system I'm typing this on.
>>
>> These pages don't all jibe with what I remember writing, but read them
>> carefully, make sure you understand what is going on, and more
>> importantly WHY, and you should be good to go.
>>
>> And when you're wondering how to go from the 4-drive raid-10 to the
>> 2-drive raid-1, you should be able to just fail/remove the two small
>> drives and everything will migrate to your two new big drives, and
>> then it's just whatever the command is to convert between raid levels.
>> The drives will already be in a raid-1 layout, so converting from 10
>> to 1 will just be a change of metadata
>
> if it's that easy, why isn't mdadm doing it?
>
> [root@testserver:~]$ mdadm /dev/md0 --grow --level=1
> mdadm: RAID10 can only be changed to RAID0
>
> virtual machine with two drives replaced by double-sized ones
Hmm...
I don't know. I'll have to defer to the experts, but a raid-10 across
two drives has to be a plain mirror in order to provide redundancy.
So I don't know why it doesn't just change the array definition, because
the on-disk layout *should* be the same ...
Cheers,
Wol
* Re: change UUID of RAID devices
2022-09-13 19:44 ` Wol
@ 2022-09-13 19:53 ` Reindl Harald
2022-11-27 20:03 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 19:53 UTC (permalink / raw)
To: Wol, Linux RAID Mailing List
Am 13.09.22 um 21:44 schrieb Wol:
> On 13/09/2022 19:03, Reindl Harald wrote:
>>
>>
>> Am 13.09.22 um 19:39 schrieb Wols Lists:
>>> On 13/09/2022 12:12, Reindl Harald wrote:
>>>>
>>>> "For example, you cannot create 3TB or 4TB partition size (RAID
>>>> based) using the fdisk command. It will not allow you to create a
>>>> partition that is greater than 2TB" makes me nervous
>>>>
>>>> how to get a > 3 TB partition for /dev/md2
>>>>
>>>> --------------------
>>>>
>>>> and finally how would the command look for "Then with just two
>>>> drives you change the raid to raid-1"?
>>>>
>>>> the first two drives are ordered to start with 1 out of 4 machines
>>>> ASAP given that the machine in front of me is running since 2011/06
>>>> 365/24......
>>>
>>> Dare I suggest you read the raid wiki site?
>>>
>>> In particular
>>> https://raid.wiki.kernel.org/index.php/Setting_up_a_(new)_system
>>> https://raid.wiki.kernel.org/index.php/Converting_an_existing_system
>>>
>>> Also a very good read ...
>>> https://raid.wiki.kernel.org/index.php/System2020
>>> Which is the system I'm typing this on.
>>>
>>> These pages don't all jibe with what I remember writing, but read
>>> them carefully, make sure you understand what is going on, and more
>>> importantly WHY, and you should be good to go.
>>>
>>> And when you're wondering how to go from the 4-drive raid-10 to the
>>> 2-drive raid-1, you should be able to just fail/remove the two small
>>> drives and everything will migrate to your two new big drives, and
>>> then it's just whatever the command is to convert between raid
>>> levels. The drives will already be in a raid-1 layout, so converting
>>> from 10 to 1 will just be a change of metadata
>>
>> if it's that easy, why isn't mdadm doing it?
>>
>> [root@testserver:~]$ mdadm /dev/md0 --grow --level=1
>> mdadm: RAID10 can only be changed to RAID0
>>
>> virtual machine with two drives replaced by double-sized ones
>
> Hmm...
>
> I don't know. I'll have to defer to the experts, but a raid-10 across
> two drives has to be a plain mirror in order to provide redundancy.
>
> So I don't know why it doesn't just change the array definition, because
> the on-disk layout *should* be the same ...
not really - RAID 10 has stripes, and it likely ends up with the data at
different places on the two disks
* Re: change UUID of RAID devices
2022-09-13 19:32 ` Roman Mamedov
@ 2022-09-13 19:54 ` Reindl Harald
2022-09-13 20:28 ` Roman Mamedov
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 19:54 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Pascal Hambourg, Linux RAID Mailing List
Am 13.09.22 um 21:32 schrieb Roman Mamedov:
> On Tue, 13 Sep 2022 15:02:41 +0200
> Reindl Harald <h.reindl@thelounge.net> wrote:
>
>>> I would not convert existing arrays. Rather create new arrays on the new
>>> disks and copy the data
>> i want my identical machines to stay as they are with all their UUIDs
>> which is the main topic here
>>
>> it's not funny when you have been used to rsyncing your /etc/fstab for 11
>> years that doing so would lead to an unbootable system on the other side
>
> For this I'd suggest to use LABEL=rootfs (and so on) in fstab, instead of
> UUIDs.
>
> It's kind of the point of UUIDs that they are supposed to be (even globally)
> unique, and there should not be the same UUID on two different machines
that's already been the case here for 15 years
but there is also mdadm.conf and sadly a copy in the initrd
* Re: change UUID of RAID devices
2022-09-13 19:54 ` Reindl Harald
@ 2022-09-13 20:28 ` Roman Mamedov
2022-09-13 20:46 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Roman Mamedov @ 2022-09-13 20:28 UTC (permalink / raw)
To: Reindl Harald; +Cc: Pascal Hambourg, Linux RAID Mailing List
On Tue, 13 Sep 2022 21:54:21 +0200
Reindl Harald <h.reindl@thelounge.net> wrote:
> >> it's not funny when you have been used to rsyncing your /etc/fstab for 11
> >> years that doing so would lead to an unbootable system on the other side
> >
> > For this I'd suggest to use LABEL=rootfs (and so on) in fstab, instead of
> > UUIDs.
> >
> > It's kind of the point of UUIDs that they are supposed to be (even globally)
> > unique, and there should not be the same UUID on two different machines
>
> that's already the case for 15 years here
>
> but there is also mdadm.conf and sadly a copy in the initrd
It has never occurred to me to check, but you could also specify arrays by
"name=" there, instead of UUID. See "man mdadm.conf".
And it is possible to rename arrays:
https://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array
Having same-name arrays on different hosts seems much more reasonable than
same UUIDs.
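As a rough sketch of what that mdadm.conf rewrite could look like in practice (a hypothetical helper, assuming the one-line `ARRAY ... UUID=...` format shown earlier in this thread; not a tested tool):

```python
import re

def use_names(conf: str, names: dict) -> str:
    """Replace UUID= tags in ARRAY lines with name= tags.

    `names` maps the old UUID to the array name as reported by
    `mdadm --detail`, e.g. {"b7475879:...": "myhost:1"}.
    Lines without a known UUID pass through unchanged.
    """
    out = []
    for line in conf.splitlines():
        m = re.match(r"(ARRAY\s+\S+.*?)UUID=([0-9a-f:]+)", line)
        if m and m.group(2) in names:
            line = m.group(1) + "name=" + names[m.group(2)]
        out.append(line)
    return "\n".join(out)

conf = "ARRAY /dev/md1 UUID=b7475879:c95d9a47:c5043c02:0c5ae720"
print(use_names(conf, {"b7475879:c95d9a47:c5043c02:0c5ae720": "myhost:1"}))
# -> ARRAY /dev/md1 name=myhost:1
```

After rewriting the file, the initrd copy would still need to be regenerated for the change to take effect at boot.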
--
With respect,
Roman
* Re: change UUID of RAID devices
2022-09-13 20:28 ` Roman Mamedov
@ 2022-09-13 20:46 ` Reindl Harald
2022-09-13 20:48 ` Roman Mamedov
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 20:46 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Pascal Hambourg, Linux RAID Mailing List
Am 13.09.22 um 22:28 schrieb Roman Mamedov:
> On Tue, 13 Sep 2022 21:54:21 +0200
> Reindl Harald <h.reindl@thelounge.net> wrote:
>
>>>> it's not funny when you have been used to rsyncing your /etc/fstab for 11
>>>> years that doing so would lead to an unbootable system on the other side
>>>
>>> For this I'd suggest to use LABEL=rootfs (and so on) in fstab, instead of
>>> UUIDs.
>>>
>>> It's kind of the point of UUIDs that they are supposed to be (even globally)
>>> unique, and there should not be the same UUID on two different machines
>>
>> that's already the case for 15 years here
>>
>> but there is also mdadm.conf and sadly a copy in the initrd
>
> It has never occurred to me to check, but you could also specify arrays by
> "name=" there, instead of UUID. See "man mdadm.conf".
and the name is *what*
ARRAY /dev/md1 UUID=b7475879:c95d9a47:c5043c02:0c5ae720
ARRAY /dev/md2 UUID=ea253255:cb915401:f32794ad:ce0fe396
> And it is possible to rename arrays:
> https://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array
>
> Having same-name arrays on different hosts seems much more reasonable than
> same UUIDs
i get sick and tired after "Then with just two drives you change the
raid to raid-1" followed by "mdadm: RAID10 can only be changed to RAID0" -
don't get me wrong, but people coming with "i think" and "it may"
shouldn't say anything unless they have done things in *reality*
i will find a way to get all this crap booting as RAID1 without
reinstalling the OS, and nothing here is really helpful
* Re: change UUID of RAID devices
2022-09-13 20:46 ` Reindl Harald
@ 2022-09-13 20:48 ` Roman Mamedov
2022-09-13 20:56 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Roman Mamedov @ 2022-09-13 20:48 UTC (permalink / raw)
To: Reindl Harald; +Cc: Pascal Hambourg, Linux RAID Mailing List
On Tue, 13 Sep 2022 22:46:23 +0200
Reindl Harald <h.reindl@thelounge.net> wrote:
> > It has never occurred to me to check, but you could also specify arrays by
> > "name=" there, instead of UUID. See "man mdadm.conf".
>
> and the name is *what*
The name is what is shown by: mdadm --detail /dev/md1 | grep Name
> ARRAY /dev/md1 UUID=b7475879:c95d9a47:c5043c02:0c5ae720
> ARRAY /dev/md2 UUID=ea253255:cb915401:f32794ad:ce0fe396
--
With respect,
Roman
* Re: change UUID of RAID devices
2022-09-13 20:48 ` Roman Mamedov
@ 2022-09-13 20:56 ` Reindl Harald
2022-09-13 21:03 ` Roman Mamedov
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 20:56 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Pascal Hambourg, Linux RAID Mailing List
Am 13.09.22 um 22:48 schrieb Roman Mamedov:
> On Tue, 13 Sep 2022 22:46:23 +0200
> Reindl Harald <h.reindl@thelounge.net> wrote:
>
>>> It has never occurred to me to check, but you could also specify arrays by
>>> "name=" there, instead of UUID. See "man mdadm.conf".
>>
>> and the name is *what*
>
> Name is, see mdadm --detail /dev/md1 | grep Name
[root@srv-rhsoft:/var/lib/mpd/playlists]$ mdadm --detail /dev/md1 | grep
Name
Name : localhost.localdomain:1 (local to host
localhost.localdomain)
a nice joke when we talk about creating new arrays, ensuring they have the
same identifiers, dd'ing the data inside the raid and removing the old RAID
* Re: change UUID of RAID devices
2022-09-13 20:56 ` Reindl Harald
@ 2022-09-13 21:03 ` Roman Mamedov
2022-09-13 21:11 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Roman Mamedov @ 2022-09-13 21:03 UTC (permalink / raw)
To: Reindl Harald; +Cc: Pascal Hambourg, Linux RAID Mailing List
On Tue, 13 Sep 2022 22:56:00 +0200
Reindl Harald <h.reindl@thelounge.net> wrote:
> [root@srv-rhsoft:/var/lib/mpd/playlists]$ mdadm --detail /dev/md1 | grep
> Name
> Name : localhost.localdomain:1 (local to host
> localhost.localdomain)
I have serverhostname.mydomain.net:1 there, perhaps because the arrays were
created not at install time, but long after the OS had already been installed
and set up properly with all the networking, domains and hostnames.
In any case, you can change that name to your liking, and then replace
"UUID=..." with "name=..." in mdadm.conf, if that helps anything with your
intended configuration.
--
With respect,
Roman
* Re: change UUID of RAID devices
2022-09-13 21:03 ` Roman Mamedov
@ 2022-09-13 21:11 ` Reindl Harald
2022-09-13 21:13 ` Reindl Harald
0 siblings, 1 reply; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 21:11 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Pascal Hambourg, Linux RAID Mailing List
Am 13.09.22 um 23:03 schrieb Roman Mamedov:
> On Tue, 13 Sep 2022 22:56:00 +0200
> Reindl Harald <h.reindl@thelounge.net> wrote:
>
>> [root@srv-rhsoft:/var/lib/mpd/playlists]$ mdadm --detail /dev/md1 | grep
>> Name
>> Name : localhost.localdomain:1 (local to host
>> localhost.localdomain)
>
> I have serverhostname.mydomain.net:1 there, perhaps due to creating arrays
> not on install time, but long after the OS has been already installed and set
> up properly with all the networking, domains and hostnames.
>
> In any case, you can change that name to your liking, and then replace
> "UUID=..." with "name=..." in mdadm.conf, if that helps anything with your
> intended configuration
not really, and that is the point: i am talking about a system where
boot+system itself is on top of the array
there is no real solution and nobody has *real* experience here -
trial&error i can do on my own, given that half of the existing RAID10
will lie on a desk and, whatever happens, a resync and a start from
scratch are possible at any point in time
* Re: change UUID of RAID devices
2022-09-13 21:11 ` Reindl Harald
@ 2022-09-13 21:13 ` Reindl Harald
0 siblings, 0 replies; 31+ messages in thread
From: Reindl Harald @ 2022-09-13 21:13 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Pascal Hambourg, Linux RAID Mailing List
Am 13.09.22 um 23:11 schrieb Reindl Harald:
>
>
> Am 13.09.22 um 23:03 schrieb Roman Mamedov:
>> On Tue, 13 Sep 2022 22:56:00 +0200
>> Reindl Harald <h.reindl@thelounge.net> wrote:
>>
>>> [root@srv-rhsoft:/var/lib/mpd/playlists]$ mdadm --detail /dev/md1 | grep
>>> Name
>>> Name : localhost.localdomain:1 (local to host
>>> localhost.localdomain)
>>
>> I have serverhostname.mydomain.net:1 there, perhaps due to creating
>> arrays
>> not on install time, but long after the OS has been already installed
>> and set
>> up properly with all the networking, domains and hostnames.
>>
>> In any case, you can change that name to your liking, and then replace
>> "UUID=..." with "name=..." in mdadm.conf, if that helps anything with
>> your
>> intended configuration
>
> not really, and that is the point: i am talking about a system where
> boot+system itself is on top of the array
>
> there is no real solution and nobody has *real* experience here -
> trial&error i can do on my own, given that half of the existing RAID10
> will lie on a desk and, whatever happens, a resync and a start from
> scratch are possible at any point in time
just for fun: that "name shit" was even different on two machines cloned
by taking out 2 of the 4 disks and rebuilding both machines
thanks to the UUIDs it didn't matter
* Re: change UUID of RAID devices
2022-09-13 19:53 ` Reindl Harald
@ 2022-11-27 20:03 ` Reindl Harald
0 siblings, 0 replies; 31+ messages in thread
From: Reindl Harald @ 2022-11-27 20:03 UTC (permalink / raw)
To: Wol, Linux RAID Mailing List
Am 13.09.22 um 21:53 schrieb Reindl Harald:
> Am 13.09.22 um 21:44 schrieb Wol:
>> I don't know. I'll have to defer to the experts
then let the experts talk
>> but a raid-10 across
>> two drives has to be a plain mirror in order to provide redundancy.
a degraded RAID10 with two out of 4 drives present is not a mirror but a
stripe - it's more or less RAID0
> So I don't know why it doesn't just change the array definition,
> because the on-disk layout *should* be the same
the on-disk layout of a degraded RAID10 is exactly the opposite of a
RAID1 - it has two stripes with not a single bit mirrored at all
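To see why, here is a toy model of md's default raid10 "near=2" chunk placement, where the two copies of each chunk go to adjacent devices, round-robin across the array (an illustrative sketch of the layout as commonly described, not an mdadm API; the helper name is made up):

```python
def near2_devices(chunk: int, ndev: int) -> set:
    """Devices holding the two copies of a chunk in a raid10 near=2 layout."""
    return {(2 * chunk) % ndev, (2 * chunk + 1) % ndev}

# 4-drive array, one drive of each mirror pair pulled (drives 1 and 3 gone)
survivors = {0, 2}
for c in range(8):
    left = near2_devices(c, 4) & survivors
    print(f"chunk {c}: surviving copies on devices {sorted(left)}")
# Every chunk survives exactly once: the remaining pair is a stripe
# (RAID0-like), not a RAID1 mirror. Pulling both drives of one mirror
# pair instead (survivors {0, 1}) would lose every odd chunk entirely.
# By contrast, a non-degraded 2-device near=2 raid10 puts both copies
# of every chunk on devices {0, 1} - i.e. a plain mirror.
```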
honestly: in the 8 years that i have been present on this list you are
always one of the first responders but don't know anything
for guessing nobody needs a mailing-list, because trial&error can be done
alone, without the response-delays of a list
end of thread, other threads:[~2022-11-27 20:04 UTC | newest]
Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-12 15:04 change UUID of RAID devices Reindl Harald
2022-09-12 21:37 ` Wol
2022-09-13 10:28 ` Reindl Harald
2022-09-13 10:39 ` Pascal Hambourg
2022-09-13 11:12 ` Reindl Harald
2022-09-13 11:17 ` Pascal Hambourg
2022-09-13 11:30 ` Reindl Harald
2022-09-13 11:35 ` Pascal Hambourg
2022-09-13 11:39 ` Reindl Harald
2022-09-13 11:48 ` Pascal Hambourg
2022-09-13 11:50 ` Reindl Harald
2022-09-13 12:03 ` Pascal Hambourg
2022-09-13 12:21 ` Reindl Harald
2022-09-13 12:47 ` Pascal Hambourg
2022-09-13 13:02 ` Reindl Harald
2022-09-13 14:12 ` Pascal Hambourg
2022-09-13 19:32 ` Roman Mamedov
2022-09-13 19:54 ` Reindl Harald
2022-09-13 20:28 ` Roman Mamedov
2022-09-13 20:46 ` Reindl Harald
2022-09-13 20:48 ` Roman Mamedov
2022-09-13 20:56 ` Reindl Harald
2022-09-13 21:03 ` Roman Mamedov
2022-09-13 21:11 ` Reindl Harald
2022-09-13 21:13 ` Reindl Harald
2022-09-13 17:39 ` Wols Lists
2022-09-13 18:03 ` Reindl Harald
2022-09-13 19:44 ` Wol
2022-09-13 19:53 ` Reindl Harald
2022-11-27 20:03 ` Reindl Harald
2022-09-13 15:37 ` Reindl Harald