* Debian Bullseye install btrfs raid1
@ 2022-05-04  9:23 richard lucassen
  2022-05-04  9:27 ` Nikolay Borisov
  2022-05-04 12:06 ` Hans van Kranenburg
  0 siblings, 2 replies; 15+ messages in thread
From: richard lucassen @ 2022-05-04  9:23 UTC (permalink / raw)
  To: linux-btrfs

Hello list,

Still new to btrfs, I am trying to set up a system that can boot even if
one of the two disks is removed or broken. The BIOS supports this.

As the Debian installer is not capable of installing btrfs raid1, I
installed Bullseye using /dev/md0 for /boot (ext2) and a btrfs / on /dev/sda3.
This works, of course. After the install I added /dev/sdb3 to the / fs: OK.
Reboot: works. Proof/pudding/eating: I stopped the system, removed one of the
disks and started again. It boots, but it refuses to mount the / fs, whether
sda or sdb is the one missing.

Question: is this newbie trying to set up an impossible config or have I
missed something crucial somewhere?

R.

Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... [    6.809309] Btrfs loaded, crc32c=crc32c-generic
Scanning for Btrfs filesystems
[    6.849966] random: fast init done
[    6.884290] BTRFS: device label data devid 1 transid 50 /dev/sda6 scanned by btrfs (171)
[    6.892822] BTRFS: device fsid 1739f989-05e0-48d8-b99a-67f91c18c892 devid 1 transid 23 /dev/sda5 scanned by btrfs (171)
[    6.903959] BTRFS: device fsid f9cf579f-d3d9-49b2-ab0d-ba258e9df3d8 devid 1 transid 3971 /dev/sda3 scanned by btrfs (171)
Begin: Waiting for suspend/resume device ... Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... [   27.015660] md/raid1:md0: active with 1 out of 2 mirrors
[   27.021181] md0: detected capacity change from 0 to 262078464
[   27.036555] md/raid1:md1: active with 1 out of 2 mirrors
[   27.042062] md1: detected capacity change from 0 to 4294901760
done.
done.
done.
Warning: fsck not present, so skipping root file system
[   27.235880] BTRFS info (device sda3): flagging fs with big metadata feature
[   27.242984] BTRFS info (device sda3): disk space caching is enabled
[   27.249314] BTRFS info (device sda3): has skinny extents
[   27.258259] BTRFS error (device sda3): devid 2 uuid 5b50e238-ae76-426f-bae3-deee5999adbc is missing
[   27.267448] BTRFS error (device sda3): failed to read the system array: -2
[   27.275696] BTRFS error (device sda3): open_ctree failed
mount: mounting /dev/sda3 on /root failed: Invalid argument
Failed to mount /dev/sda3 as root file system.


BusyBox v1.30.1 (Debian 1:1.30.1-6+b3) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)


-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04  9:23 Debian Bullseye install btrfs raid1 richard lucassen
@ 2022-05-04  9:27 ` Nikolay Borisov
  2022-05-04  9:30   ` richard lucassen
  2022-05-04 10:02   ` richard lucassen
  2022-05-04 12:06 ` Hans van Kranenburg
  1 sibling, 2 replies; 15+ messages in thread
From: Nikolay Borisov @ 2022-05-04  9:27 UTC (permalink / raw)
  To: linux-btrfs



On 4.05.22 12:23, richard lucassen wrote:
> Hello list,
> 
> Still new to btrfs, I am trying to set up a system that can boot even if
> one of the two disks is removed or broken. The BIOS supports this.
> 
> As the Debian installer is not capable of installing btrfs raid1, I
> installed Bullseye using /dev/md0 for /boot (ext2) and a btrfs / on /dev/sda3.
> This works, of course. After the install I added /dev/sdb3 to the / fs: OK.
> Reboot: works. Proof/pudding/eating: I stopped the system, removed one of the
> disks and started again. It boots, but it refuses to mount the / fs, whether
> sda or sdb is the one missing.
> 
> Question: is this newbie trying to set up an impossible config or have I
> missed something crucial somewhere?

That's the default behavior. The reasoning is that if you are missing one
device of a raid1, your data is at risk in case the 2nd device fails. You
can override this behavior by mounting in degraded mode, that is:

    mount -o degraded
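
From the initramfs shell you ended up in, that would look roughly like this
(an untested sketch; the device path is taken from your log):

    btrfs device scan                    # make sure all present members are registered
    mount -o degraded /dev/sda3 /root    # mount the root fs degraded where init expects it
    exit                                 # leave the shell, boot continues from /root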

> 
> R.
> 
> Begin: Running /scripts/init-premount ... done.
> Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
> Begin: Running /scripts/local-premount ... [    6.809309] Btrfs loaded, crc32c=crc32c-generic
> Scanning for Btrfs filesystems
> [    6.849966] random: fast init done
> [    6.884290] BTRFS: device label data devid 1 transid 50 /dev/sda6 scanned by btrfs (171)
> [    6.892822] BTRFS: device fsid 1739f989-05e0-48d8-b99a-67f91c18c892 devid 1 transid 23 /dev/sda5 scanned by btrfs (171)
> [    6.903959] BTRFS: device fsid f9cf579f-d3d9-49b2-ab0d-ba258e9df3d8 devid 1 transid 3971 /dev/sda3 scanned by btrfs (171)
> Begin: Waiting for suspend/resume device ... Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... [   27.015660] md/raid1:md0: active with 1 out of 2 mirrors
> [   27.021181] md0: detected capacity change from 0 to 262078464
> [   27.036555] md/raid1:md1: active with 1 out of 2 mirrors
> [   27.042062] md1: detected capacity change from 0 to 4294901760
> done.
> done.
> done.
> Warning: fsck not present, so skipping root file system
> [   27.235880] BTRFS info (device sda3): flagging fs with big metadata feature
> [   27.242984] BTRFS info (device sda3): disk space caching is enabled
> [   27.249314] BTRFS info (device sda3): has skinny extents
> [   27.258259] BTRFS error (device sda3): devid 2 uuid 5b50e238-ae76-426f-bae3-deee5999adbc is missing
> [   27.267448] BTRFS error (device sda3): failed to read the system array: -2
> [   27.275696] BTRFS error (device sda3): open_ctree failed
> mount: mounting /dev/sda3 on /root failed: Invalid argument
> Failed to mount /dev/sda3 as root file system.
> 
> 
> BusyBox v1.30.1 (Debian 1:1.30.1-6+b3) built-in shell (ash)
> Enter 'help' for a list of built-in commands.
> 
> (initramfs)
> 
> 


* Re: Debian Bullseye install btrfs raid1
  2022-05-04  9:27 ` Nikolay Borisov
@ 2022-05-04  9:30   ` richard lucassen
  2022-05-04 10:02   ` richard lucassen
  1 sibling, 0 replies; 15+ messages in thread
From: richard lucassen @ 2022-05-04  9:30 UTC (permalink / raw)
  To: linux-btrfs

On Wed, 4 May 2022 12:27:29 +0300
Nikolay Borisov <nborisov@suse.com> wrote:


> > Question: is this newbie trying to set up an impossible config or
> > have I missed something crucial somewhere?
> 
> That's the default behavior. The reasoning is that if you are missing one
> device of a raid1, your data is at risk in case the 2nd device fails.
> You can override this behavior by mounting in degraded mode, that is:
> 
>     mount -o degraded

Thnx! I'll have a look at that option.

R.

-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04  9:27 ` Nikolay Borisov
  2022-05-04  9:30   ` richard lucassen
@ 2022-05-04 10:02   ` richard lucassen
  2022-05-04 10:07     ` Nikolay Borisov
  1 sibling, 1 reply; 15+ messages in thread
From: richard lucassen @ 2022-05-04 10:02 UTC (permalink / raw)
  To: linux-btrfs

On Wed, 4 May 2022 12:27:29 +0300
Nikolay Borisov <nborisov@suse.com> wrote:

> > Question: is this newbie trying to set up an impossible config or
> > have I missed something crucial somewhere?
> 
> That's the default behavior. The reasoning is that if you are missing one
> device of a raid1, your data is at risk in case the 2nd device fails.
> You can override this behavior by mounting in degraded mode, that is:
> 
>     mount -o degraded

Another thing: sda3/sdb3 is the root fs, so I need to tell grub that
it's ok to mount a degraded array (one way or another, I don't know
if it's possible, I'm not a grub guru). Adding it to fstab makes
no sense as there is no fstab at that time.

OTOH, when using md devices, the / fs is mounted as Degraded Array and
the remaining device remembers that this had happened. If the second
disk is replaced I have to add it manually using mdadm. The first disk
is "master". Using md the system boots and mounts / and thus all tools
are available to repair the array.
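
(For md that manual re-add is something like "mdadm /dev/md1 --add /dev/sdb3";
device names hypothetical.)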

The wiki explains how to repair an array, but when the array is the
root fs you will have a problem.

So, what should I do when the / fs is degraded?

R.

-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 10:02   ` richard lucassen
@ 2022-05-04 10:07     ` Nikolay Borisov
  2022-05-04 10:14       ` richard lucassen
  0 siblings, 1 reply; 15+ messages in thread
From: Nikolay Borisov @ 2022-05-04 10:07 UTC (permalink / raw)
  To: linux-btrfs



On 4.05.22 13:02, richard lucassen wrote:
> On Wed, 4 May 2022 12:27:29 +0300
> Nikolay Borisov <nborisov@suse.com> wrote:
> 
>>> Question: is this newbie trying to set up an impossible config or
>>> have I missed something crucial somewhere?
>>
>> That's the default behavior. The reasoning is that if you are missing one
>> device of a raid1, your data is at risk in case the 2nd device fails.
>> You can override this behavior by mounting in degraded mode, that is:
>>
>>     mount -o degraded
> 
> Another thing: sda3/sdb3 is the root fs, so I need to tell grub that
> it's ok to mount a degraded array (one way or another, I don't know
> if it's possible, I'm not a grub guru). Adding it to fstab makes
> no sense as there is no fstab at that time.
> 
> OTOH, when using md devices, the / fs is mounted as a degraded array and
> the remaining device remembers that this happened. If the second
> disk is replaced, I have to add it manually using mdadm. The first disk
> is "master". Using md, the system boots and mounts /, and thus all tools
> are available to repair the array.
> 
> The wiki explains how to repair an array, but when the array is the
> root fs you will have a problem.
> 
> So, what should I do when the / fs is degraded?

In case of btrfs raid1, if you managed to mount the array degraded, it's
possible to add another device to the array and then run a balance
operation so that you end up with 2 copies of your data. I.e. I don't see
a problem? Have I misunderstood you?
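
As a sketch (paths illustrative, assuming the dead disk was replaced by a
new, already partitioned /dev/sdb):

    mount -o degraded /dev/sda3 /mnt
    btrfs device add /dev/sdb3 /mnt
    btrfs device remove missing /mnt    # drop the record of the dead disk
    # re-mirror anything written while degraded; 'soft' skips chunks
    # that are already raid1
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt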
> 
> R.
> 


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 10:07     ` Nikolay Borisov
@ 2022-05-04 10:14       ` richard lucassen
  2022-05-04 10:26         ` Andy Smith
  2022-05-04 18:15         ` Andrei Borzenkov
  0 siblings, 2 replies; 15+ messages in thread
From: richard lucassen @ 2022-05-04 10:14 UTC (permalink / raw)
  To: linux-btrfs

On Wed, 4 May 2022 13:07:14 +0300
Nikolay Borisov <nborisov@suse.com> wrote:

> > The wiki explains how to repair an array, but when the array is the
> > root fs you will have a problem.
> > 
> > So, what should I do when the / fs is degraded?
> 
> In case of btrfs raid1, if you managed to mount the array degraded,
> it's possible to add another device to the array and then run a
> balance operation so that you end up with 2 copies of your data.
> I.e. I don't see a problem? Have I misunderstood you?

I fear you did. I cannot mount it -o degraded, I have no working system!

I need physical access to the system to repair it, contrary to an md
system; the latter will simply start as a 'Degraded Array' even when I'm
abroad...

-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 10:14       ` richard lucassen
@ 2022-05-04 10:26         ` Andy Smith
  2022-05-04 11:16           ` richard lucassen
  2022-05-04 18:15         ` Andrei Borzenkov
  1 sibling, 1 reply; 15+ messages in thread
From: Andy Smith @ 2022-05-04 10:26 UTC (permalink / raw)
  To: linux-btrfs

Hi Richard,

On Wed, May 04, 2022 at 12:14:54PM +0200, richard lucassen wrote:
> I fear you did. I cannot mount it -o degraded, I have no working system!

You can pause at the grub menu and edit the current boot selection
to have the additional kernel command line parameter:

    rootflags=degraded

That has the same effect as "mount -o degraded …" or putting
"degraded" in the fstab options.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 10:26         ` Andy Smith
@ 2022-05-04 11:16           ` richard lucassen
  0 siblings, 0 replies; 15+ messages in thread
From: richard lucassen @ 2022-05-04 11:16 UTC (permalink / raw)
  To: linux-btrfs

On Wed, 4 May 2022 10:26:08 +0000
Andy Smith <andy@strugglers.net> wrote:

> You can pause at the grub menu and edit the current boot selection
> to have the additional kernel command line parameter:
> 
>     rootflags=degraded
> 
> That has the same effect as "mount -o degraded …" or putting
> "degraded" in the fstab options.

Yes, but apparently I fsck'd up the whole system, even with two disks.
I will first add a single-disk / filesystem as a rescue (never mind, this
is a test system):

[    5.622734]  sdb: sdb1 sdb2 sdb3 sdb4 < sdb5 sdb6 >
[    5.634754]  sda: sda1 sda2 sda3 sda4 < sda5 sda6 >
[    5.649152] sd 1:0:0:0: [sdb] Attached SCSI disk
[    5.652889] sd 0:0:0:0: [sda] Attached SCSI disk
[    5.724026] random: fast init done
[    5.821439] md/raid1:md0: active with 2 out of 2 mirrors
[    5.827536] md0: detected capacity change from 0 to 262078464
[    5.828427] md/raid1:md1: active with 2 out of 2 mirrors
[    5.839332] md1: detected capacity change from 0 to 4294901760

[..]

Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... [    6.826141] Btrfs loaded, crc32c=crc32c-generic
Scanning for Btrfs filesystems
[    6.868147] random: fast init done
[    6.901066] BTRFS: device label data devid 1 transid 50 /dev/sda6 scanned by btrfs (171)
[    6.909951] BTRFS: device fsid 1739f989-05e0-48d8-b99a-67f91c18c892 devid 1 transid 23 /dev/sda5 scanned by btrfs (171)
[    6.921421] BTRFS: device fsid f9cf579f-d3d9-49b2-ab0d-ba258e9df3d8 devid 1 transid 3994 /dev/sda3 scanned by btrfs (171)
Begin: Waiting for suspend/resume device ... Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... [   27.012381] md/raid1:md0: active with 1 out of 2 mirrors
[   27.017890] md0: detected capacity change from 0 to 262078464
[   27.033248] md/raid1:md1: active with 1 out of 2 mirrors
[   27.038793] md1: detected capacity change from 0 to 4294901760
done.
done.
done.
Warning: fsck not present, so skipping root file system
[   27.229282] BTRFS info (device sda3): flagging fs with big metadata feature
[   27.236375] BTRFS info (device sda3): allowing degraded mounts
[   27.242285] BTRFS info (device sda3): disk space caching is enabled
[   27.248579] BTRFS info (device sda3): has skinny extents
[   27.256813] BTRFS warning (device sda3): devid 2 uuid 5b50e238-ae76-426f-bae3-deee5999adbc is missing
[   27.266833] BTRFS warning (device sda3): devid 2 uuid 5b50e238-ae76-426f-bae3-deee5999adbc is missing
[   27.284235] BTRFS info (device sda3): enabling ssd optimizations
done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/init-bottom ... mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /dev on /root/dev failed: No such file or directory
done.
mount: mounting /run on /root/run failed: No such file or directory
run-init: can't execute '/sbin/init': No such file or directory
Target filesystem doesn't have requested /sbin/init.
run-init: can't execute '/sbin/init': No such file or directory
run-init: can't execute '/etc/init': No such file or directory
run-init: can't execute '/bin/init': No such file or directory
run-init: can't execute '/bin/sh': No such file or directory
run-init: can't execute '': No such file or directory
No init found. Try passing init= bootarg.


BusyBox v1.30.1 (Debian 1:1.30.1-6+b3) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)


-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04  9:23 Debian Bullseye install btrfs raid1 richard lucassen
  2022-05-04  9:27 ` Nikolay Borisov
@ 2022-05-04 12:06 ` Hans van Kranenburg
  2022-05-04 12:59   ` richard lucassen
  1 sibling, 1 reply; 15+ messages in thread
From: Hans van Kranenburg @ 2022-05-04 12:06 UTC (permalink / raw)
  To: linux-btrfs, richard lucassen

Hi Richard,

On 5/4/22 11:23, richard lucassen wrote:
> Hello list,
> 
> Still new to btrfs, I am trying to set up a system that can boot
> even if one of the two disks is removed or broken. The BIOS
> supports this.
> 
> As the Debian installer is not capable of installing btrfs raid1, I
> installed Bullseye using /dev/md0 for /boot (ext2) and a btrfs / on
> /dev/sda3. This works, of course. After the install I added /dev/sdb3 to
> the / fs: OK.

Did you 'just' add the disk to the filesystem, or did you also do the next
step of converting the existing data to the raid1 profile?

If you start out with 1 disk and simply add another, that tells btrfs
that it can continue writing just 1 (!) copy of your data wherever it
likes. And, in this case, the filesystem *always* wants (needs!) all
disks to be present to mount, of course.

disk 1  disk 2
A       C
B       E
D

If you want everything duplicated on both disks, you need to convert the 
existing data that you already had on the first disk to the raid1 
profile, and from then on, it will keep writing 2 copies of the data on 
any two disks in the filesystem (but you have exactly 2, so it's always 
on both of those two in that case).

disk 1  disk 2
A       D
B       B
D       C
C       A

If the previously installed system still works when you add the second
disk back, you can still do this (i.e., if you did not force any
destructive operations and just had it fail as seen below).

Can you share the output of the following command:

btrfs fi usage <mountpoint>

With the following command you let it convert all (d)ata and (m)etadata 
to the raid1 profile:

btrfs balance start -dconvert=raid1 -mconvert=raid1 /

Afterwards, you can check the result with the usage command. The data, 
metadata, and system lines in the output of the usage command should all 
say RAID1, and you should see that on both disks, a similar amount of 
data is present.
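
Roughly like this (an illustrative excerpt only, numbers made up):

    Data,RAID1: Size:2.00GiB, Used:1.15GiB
       /dev/sda3       2.00GiB
       /dev/sdb3       2.00GiB
    Metadata,RAID1: Size:256.00MiB, Used:112.00MiB
       /dev/sda3     256.00MiB
       /dev/sdb3     256.00MiB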

Hans

> Reboot: works. Proof/pudding/eating: I stopped the system, removed
> one of the disks and started again. It boots, but it refuses to mount
> the / fs, whether sda or sdb is the one missing.
> 
> Question: is this newbie trying to set up an impossible config or
> have I missed something crucial somewhere?
> 
> R.
> 
> Begin: Running /scripts/init-premount ... done.
> Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
> Begin: Running /scripts/local-premount ... [    6.809309] Btrfs loaded, crc32c=crc32c-generic
> Scanning for Btrfs filesystems
> [    6.849966] random: fast init done
> [    6.884290] BTRFS: device label data devid 1 transid 50 /dev/sda6 scanned by btrfs (171)
> [    6.892822] BTRFS: device fsid 1739f989-05e0-48d8-b99a-67f91c18c892 devid 1 transid 23 /dev/sda5 scanned by btrfs (171)
> [    6.903959] BTRFS: device fsid f9cf579f-d3d9-49b2-ab0d-ba258e9df3d8 devid 1 transid 3971 /dev/sda3 scanned by btrfs (171)
> Begin: Waiting for suspend/resume device ... Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... done.
> Begin: Running /scripts/local-block ... [   27.015660] md/raid1:md0: active with 1 out of 2 mirrors
> [   27.021181] md0: detected capacity change from 0 to 262078464
> [   27.036555] md/raid1:md1: active with 1 out of 2 mirrors
> [   27.042062] md1: detected capacity change from 0 to 4294901760
> done.
> done.
> done.
> Warning: fsck not present, so skipping root file system
> [   27.235880] BTRFS info (device sda3): flagging fs with big metadata feature
> [   27.242984] BTRFS info (device sda3): disk space caching is enabled
> [   27.249314] BTRFS info (device sda3): has skinny extents
> [   27.258259] BTRFS error (device sda3): devid 2 uuid 5b50e238-ae76-426f-bae3-deee5999adbc is missing
> [   27.267448] BTRFS error (device sda3): failed to read the system array: -2
> [   27.275696] BTRFS error (device sda3): open_ctree failed
> mount: mounting /dev/sda3 on /root failed: Invalid argument
> Failed to mount /dev/sda3 as root file system.
> 
> 
> BusyBox v1.30.1 (Debian 1:1.30.1-6+b3) built-in shell (ash)
> Enter 'help' for a list of built-in commands.
> 
> (initramfs)
> 
> 




* Re: Debian Bullseye install btrfs raid1
  2022-05-04 12:06 ` Hans van Kranenburg
@ 2022-05-04 12:59   ` richard lucassen
  0 siblings, 0 replies; 15+ messages in thread
From: richard lucassen @ 2022-05-04 12:59 UTC (permalink / raw)
  To: linux-btrfs

On Wed, 4 May 2022 14:06:31 +0200
Hans van Kranenburg <hans@knorrie.org> wrote:

> > Still new to btrfs, I am trying to set up a system that can boot
> > even if one of the two disks is removed or broken. The BIOS
> > supports this.
> > 
> > As the Debian installer is not capable of installing btrfs raid1, I
> > installed Bullseye using /dev/md0 for /boot (ext2) and a btrfs / on
> > /dev/sda3. This works, of course. After the install I added /dev/sdb3 to
> > the / fs: OK.
> 
> Did you 'just' add the disk to the filesystem, or did you also do a
> next step of converting the existing data to the raid1 profile?

AFAIK this is what I need to do to convert sda3 mounted on / to a
raid1 using sda3/sdb3:

btrfs device add /dev/sdb3 /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

> If you start out with 1 disk and simply add another, that tells btrfs
> that it can continue writing just 1 (!) copy of your data wherever it
> likes. And, in this case, the filesystem *always* wants (needs!) all
> disks to be present to mount, of course.
> 
> disk 1  disk 2
> A       C
> B       E
> D
> 
> If you want everything duplicated on both disks, you need to convert
> the existing data that you already had on the first disk to the raid1 
> profile, and from then on, it will keep writing 2 copies of the data
> on any two disks in the filesystem (but you have exactly 2, so it's
> always on both of those two in that case).
> 
> disk 1  disk 2
> A       D
> B       B
> D       C
> C       A
> 
> If the previously installed system still works when you add the second
> disk back, you can still do this (i.e., if you did not force any
> destructive operations and just had it fail as seen below).
> 
> Can you share the output of the following command:
> 
> btrfs fi usage <mountpoint>
> 
> With the following command you let it convert all (d)ata and
> (m)etadata to the raid1 profile:
> 
> btrfs balance start -dconvert=raid1 -mconvert=raid1 /

That's what I did.

> Afterwards, you can check the result with the usage command. The
> data, metadata, and system lines in the output of the usage command
> should all say RAID1, and you should see that on both disks, a
> similar amount of data is present.

I just chose the grub boot option "Debian 11 on sda6" (an older install);
this works, but in fact it seems to be the sda3/sdb3 raid1. I must have
messed up grub somewhere:

btrfs filesystem show
Label: none  uuid: f9cf579f-d3d9-49b2-ab0d-ba258e9df3d8
        Total devices 2 FS bytes used 1.15GiB
        devid    1 size 16.00GiB used 2.28GiB path /dev/sda3
        devid    2 size 16.00GiB used 2.28GiB path /dev/sdb3

Label: none  uuid: 1739f989-05e0-48d8-b99a-67f91c18c892
        Total devices 2 FS bytes used 448.00KiB
        devid    1 size 16.00GiB used 2.57GiB path /dev/sda5
        devid    2 size 16.00GiB used 2.56GiB path /dev/sdb5

Label: 'data'  uuid: 3173a224-830f-41d7-8870-3db0e8c986c9
        Total devices 2 FS bytes used 1020.38MiB
        devid    1 size 187.32GiB used 2.01GiB path /dev/sda6
        devid    2 size 187.32GiB used 2.01GiB path /dev/sdb6

I will first clean up all the mess I created.

"There are two types of people: those who have lost data and those who
will" :-)

R.

-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 10:14       ` richard lucassen
  2022-05-04 10:26         ` Andy Smith
@ 2022-05-04 18:15         ` Andrei Borzenkov
  2022-05-04 19:33           ` richard lucassen
  1 sibling, 1 reply; 15+ messages in thread
From: Andrei Borzenkov @ 2022-05-04 18:15 UTC (permalink / raw)
  To: linux-btrfs

On 04.05.2022 13:14, richard lucassen wrote:
> On Wed, 4 May 2022 13:07:14 +0300
> Nikolay Borisov <nborisov@suse.com> wrote:
> 
>>> The wiki explains how to repair an array, but when the array is the
>>> root fs you will have a problem.
>>>
>>> So, what should I do when the / fs is degraded?
>>
>> In case of btrfs raid1, if you managed to mount the array degraded,
>> it's possible to add another device to the array and then run a
>> balance operation so that you end up with 2 copies of your data.
>> I.e. I don't see a problem? Have I misunderstood you?
> 
> I fear you did. I cannot mount it -o degraded, I have no working system!
> 
> I need physical access to the system to repair it, contrary to an md
> system; the latter will simply start as a 'Degraded Array' even when I'm
> abroad...
> 

No, it will not. Some script(s), as part of the startup sequence, will
decide that the array can be started even though it is degraded, and force
it to be started. Nothing in principle prevents your distribution from
adding scripts to mount btrfs in degraded mode in this case. Those
scripts are not part of btrfs, so you should report it to your
distribution.
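
For illustration only, an untested sketch of such a hook for Debian's
initramfs-tools (local-block scripts are re-run while boot is waiting on
missing devices; ROOT and rootmnt are exported by the initramfs init):

    #!/bin/sh
    # hypothetical /etc/initramfs-tools/scripts/local-block/btrfs-degraded
    case "$1" in
        prereqs) echo ""; exit 0 ;;
    esac
    # count the retries; after enough of them, give up waiting for the
    # missing member and mount the root fs degraded
    tries=$(cat /run/btrfs-tries 2>/dev/null || echo 0)
    echo $((tries + 1)) > /run/btrfs-tries
    if [ "$tries" -ge 10 ]; then
        btrfs device scan
        mount -t btrfs -o degraded "$ROOT" "$rootmnt"
    fi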


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 18:15         ` Andrei Borzenkov
@ 2022-05-04 19:33           ` richard lucassen
  2022-05-05  8:27             ` Nikolay Borisov
  2022-05-09  6:50             ` Andrei Borzenkov
  0 siblings, 2 replies; 15+ messages in thread
From: richard lucassen @ 2022-05-04 19:33 UTC (permalink / raw)
  To: linux-btrfs

On Wed, 4 May 2022 21:15:50 +0300
Andrei Borzenkov <arvidjaar@gmail.com> wrote:

> No, it will not. Some script(s), as part of the startup sequence, will
> decide that the array can be started even though it is degraded, and force
> it to be started. Nothing in principle prevents your distribution from
> adding scripts to mount btrfs in degraded mode in this case. Those
> scripts are not part of btrfs, so you should report it to your
> distribution.

Ok thnx! Would it damage btrfs if I add a permanent "rootflags=degraded"
to the kernel command line?

-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 19:33           ` richard lucassen
@ 2022-05-05  8:27             ` Nikolay Borisov
  2022-05-05 20:30               ` richard lucassen
  2022-05-09  6:50             ` Andrei Borzenkov
  1 sibling, 1 reply; 15+ messages in thread
From: Nikolay Borisov @ 2022-05-05  8:27 UTC (permalink / raw)
  To: linux-btrfs



On 4.05.22 22:33, richard lucassen wrote:
> On Wed, 4 May 2022 21:15:50 +0300
> Andrei Borzenkov <arvidjaar@gmail.com> wrote:
> 
>> No, it will not. Some script(s), as part of the startup sequence, will
>> decide that the array can be started even though it is degraded, and force
>> it to be started. Nothing in principle prevents your distribution from
>> adding scripts to mount btrfs in degraded mode in this case. Those
>> scripts are not part of btrfs, so you should report it to your
>> distribution.
> 
> Ok thnx! Would it damage btrfs if I add a permanent "rootflags=degraded"
> to the kernel command line?

The flag itself won't have any repercussions on stability, but if your
only remaining disk crashes while you are in degraded mode (because your
other disk is already gone), you might lose data.

> 


* Re: Debian Bullseye install btrfs raid1
  2022-05-05  8:27             ` Nikolay Borisov
@ 2022-05-05 20:30               ` richard lucassen
  0 siblings, 0 replies; 15+ messages in thread
From: richard lucassen @ 2022-05-05 20:30 UTC (permalink / raw)
  To: linux-btrfs

On Thu, 5 May 2022 11:27:36 +0300
Nikolay Borisov <nborisov@suse.com> wrote:

> >> No, it will not. Some script(s), as part of the startup sequence, will
> >> decide that the array can be started even though it is degraded, and
> >> force it to be started. Nothing in principle prevents your
> >> distribution from adding scripts to mount btrfs in degraded mode
> >> in this case. Those scripts are not part of btrfs, so you should
> >> report it to your distribution.
> > 
> > Ok thnx! Would it damage btrfs if I add a permanent
> > "rootflags=degraded" to the kernel command line?
> 
> The flag itself won't have any repercussions on stability, but if your
> only remaining disk crashes while you are in degraded mode (because
> your other disk is already gone), you might lose data.

Ok, but that goes for md devices as well. Under md I can add a "spare"
disk; with btrfs I'd simply add a third disk.

I will do some more testing this week. Thnx for your support!

R.

-- 
richard lucassen
https://contact.xaq.nl/


* Re: Debian Bullseye install btrfs raid1
  2022-05-04 19:33           ` richard lucassen
  2022-05-05  8:27             ` Nikolay Borisov
@ 2022-05-09  6:50             ` Andrei Borzenkov
  1 sibling, 0 replies; 15+ messages in thread
From: Andrei Borzenkov @ 2022-05-09  6:50 UTC (permalink / raw)
  To: linux-btrfs

On 04.05.2022 22:33, richard lucassen wrote:
> On Wed, 4 May 2022 21:15:50 +0300
> Andrei Borzenkov <arvidjaar@gmail.com> wrote:
> 
>> No, it will not. Some script(s), as part of the startup sequence, will
>> decide that the array can be started even though it is degraded, and force
>> it to be started. Nothing in principle prevents your distribution from
>> adding scripts to mount btrfs in degraded mode in this case. Those
>> scripts are not part of btrfs, so you should report it to your
>> distribution.
> 
> Ok thnx! Would it damage btrfs if I add a permanent "rootflags=degraded"
> to the kernel command line?
> 

It will not damage anything under normal conditions, but btrfs was known
to start creating single chunks when mounted in degraded mode and not to
mirror them automatically after you restored redundancy. In that case,
failure to access the single chunks leads to data unavailability (and if a
single chunk holds metadata, it can mean complete loss of access to the
filesystem). I do not know if this was ever fixed.
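
A sketch of how to check for and clean up such leftovers after restoring
redundancy (the 'soft' filter converts only chunks that are not already in
the target profile):

    btrfs filesystem usage /    # look for stray Data,single / Metadata,single lines
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /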


Thread overview: 15+ messages
2022-05-04  9:23 Debian Bullseye install btrfs raid1 richard lucassen
2022-05-04  9:27 ` Nikolay Borisov
2022-05-04  9:30   ` richard lucassen
2022-05-04 10:02   ` richard lucassen
2022-05-04 10:07     ` Nikolay Borisov
2022-05-04 10:14       ` richard lucassen
2022-05-04 10:26         ` Andy Smith
2022-05-04 11:16           ` richard lucassen
2022-05-04 18:15         ` Andrei Borzenkov
2022-05-04 19:33           ` richard lucassen
2022-05-05  8:27             ` Nikolay Borisov
2022-05-05 20:30               ` richard lucassen
2022-05-09  6:50             ` Andrei Borzenkov
2022-05-04 12:06 ` Hans van Kranenburg
2022-05-04 12:59   ` richard lucassen
