From: Forza <forza@tnonline.net>
To: efkf <efkf@firemail.cc>, Nicholas D Steeves <nsteeves@gmail.com>,
	linux-btrfs@vger.kernel.org
Subject: Re: Tried to replace a drive in a raid 1 and all hell broke loose
Date: Mon, 30 May 2022 22:47:30 +0200	[thread overview]
Message-ID: <4bad94f3-7cf2-6224-6876-7a1e3fe5abcd@tnonline.net> (raw)
In-Reply-To: <6685a5e4-6d03-6108-1394-0f75f6433c9e@firemail.cc>



On 2022-05-29 22:48, efkf wrote:
> On 5/28/22 21:20, Nicholas D Steeves wrote:
>> Efkf, would you please confirm if the filesystem was created with Linux
>> and btrfs-progs 5.10.x? (please keep me in CC)
> It was created under Linux, and I'm 99% sure with kernel 5.10.0 and 
> btrfs-progs 5.10.1.
> It was certainly that configuration when I started messing with it.
> Now that I think about it, I had mounted degraded when I initially 
> created the filesystem, so maybe single metadata got created and has 
> been bitrotting away since.
> If that's the case, though, it didn't cause any problems before running 
> the first balance command, after which everything went downhill.
> 
> 
> On 5/27/22 22:37, Forza wrote:
>>> Anyway, is there a way to check the data is really redundant without 
>>> trusting the filesystem telling me it's so?
>>
>> Yes, you use 'btrfs scrub' to read all data and metadata blocks from 
>> all devices and compare the checksums. If there are problems, scrub 
>> will tell you.
>>
>> https://btrfs.readthedocs.io/en/latest/btrfs-scrub.html
>> https://wiki.tnonline.net/w/Btrfs/Scrub
>>
> 
> Yeah, but that relies on me having actually set up RAID1.
> The point I'm trying to make is that, as a beginner who learns as they 
> go, you don't know what you don't know, so maybe there is some detail 
> you aren't aware of that's making your data unsafe. (In this case, 
> scrubbing without checking whether the whole filesystem is RAID1; I 
> assumed that was set in stone from the filesystem's creation.)

Indeed. Btrfs supports multiple profiles, and a combination of profiles, 
as you discovered. Some Btrfs tools do show a warning when multiple 
profiles are detected.
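For anyone who wants to check this on their own system, the per-block-group profiles show up in 'btrfs filesystem df' (or 'btrfs filesystem usage'). A minimal sketch, parsing hypothetical example output so it is self-contained (the here-string stands in for the real command, and /mnt is an assumed mount point):

```shell
# On a real system, capture the output of:
#   btrfs filesystem df /mnt
# Here we use hypothetical example output showing a mixed
# (RAID1 + single) layout, as in this thread.
btrfs_df_output='Data, RAID1: total=100.00GiB, used=80.00GiB
Data, single: total=1.00GiB, used=0.50GiB
Metadata, RAID1: total=2.00GiB, used=1.20GiB
System, RAID1: total=32.00MiB, used=16.00KiB'

# Count how many distinct profiles the Data block groups use;
# more than one means some data may not actually be redundant.
profiles=$(printf '%s\n' "$btrfs_df_output" \
  | awk -F'[:,]' '/^Data/ {gsub(/ /,"",$2); print $2}' | sort -u)
count=$(printf '%s\n' "$profiles" | wc -l)
echo "Data profiles found: $count"
if [ "$count" -gt 1 ]; then
    echo "WARNING: mixed data profiles - some data may not be redundant"
fi
```

A scrub alone would not catch this: scrub verifies the copies that exist, while the profile listing tells you whether a second copy exists at all.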

> I should have read more about it, but I think there will be more new 
> users who try what I did to sanity-check their setup. So in my opinion 
> it is important that if you don't write to the FS, and especially if 
> you mount it read-only, it should be safe to mount degraded without 
> putting any data in jeopardy.
> 

I had a discussion with some Windows users, and they did exactly the 
same thing - yanked the mirror out and then inserted it again. Four 
times out of five it "worked", and they got upset when it didn't work 
the last time.

So, with that said, there is room to improve the documentation, man 
pages and guides to help users find the information they need to check 
their systems correctly.
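As a sketch of what such a routine check could report: run a scrub, then read the error counters out of 'btrfs scrub status'. The status text below is hypothetical example output standing in for the real command, so the snippet is self-contained:

```shell
# On a real system:
#   btrfs scrub start -B /mnt   # -B waits in the foreground
#   btrfs scrub status /mnt
# Hypothetical example of the status output:
scrub_status='Status:           finished
Total to scrub:   160.00GiB
Error summary:    csum=3
  Corrected:      3
  Uncorrectable:  0
  Unverified:     0'

# Pull the error counters out; a non-zero "Uncorrectable" count means
# scrub found damage it could not repair from another copy.
corrected=$(printf '%s\n' "$scrub_status" | awk '/Corrected:/ {print $2}')
uncorrectable=$(printf '%s\n' "$scrub_status" | awk '/Uncorrectable:/ {print $2}')
echo "corrected=$corrected uncorrectable=$uncorrectable"
```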

For now, mounting each mirror independently and then combining them 
again is not good for Btrfs. This use case seems to be unhandled.

> On 5/28/22 22:04, Forza wrote:
>> I believe this is a problem of having degraded mounts.
> So you think the single chunks from the degraded mount got corrupted 
> due to something unrelated to Btrfs, and that caused the problem I had?
> 

It is possible the errors are older but did not surface until you tried 
the full balance after adding the third drive. This could have caused 
the balance to fail, leading to all the subsequent errors.
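If stray single chunks are the culprit, the usual recovery step is a profile-converting balance. A sketch follows - /mnt is an example mount point, and the command is only printed here, not executed:

```shell
# Sketch only: build (and print, rather than run) the balance command
# that converts leftover 'single' chunks to RAID1 after adding a device.
mnt=/mnt   # example mount point - adjust for the real filesystem
cmd="btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft $mnt"
echo "would run: $cmd"
# The ',soft' filter skips chunks that already have the target profile,
# so only the stray single chunks get rewritten.
```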

> 
> Either way, does anyone want me to run something on the filesystem to 
> help with any possible debugging, or can I wipe it and move on? (I 
> kind of need the storage.)
> Thanks a lot again, by the way, to everyone who looked into it, and 
> especially for all the great help!

Thread overview: 16+ messages
2022-05-23 17:21 Tried to replace a drive in a raid 1 and all hell broke loose efkf
     [not found] ` <5fd50e9.def5d621.180f273d002@tnonline.net>
2022-05-23 20:00   ` efkf
2022-05-23 20:05     ` efkf
2022-05-24  6:51       ` efkf
2022-05-24 19:11         ` Chris Murphy
2022-05-27 15:13           ` efkf
2022-05-27 15:15             ` efkf
2022-05-27 15:25             ` Forza
2022-05-27 16:28               ` efkf
2022-05-27 21:37                 ` Forza
2022-05-28 20:20           ` Nicholas D Steeves
2022-05-28 21:04             ` Forza
2022-05-29 20:48             ` efkf
2022-05-30 20:47               ` Forza [this message]
2022-05-30 21:59                 ` Graham Cobb
2022-06-07 21:17                   ` Nicholas D Steeves
