From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: btrfsck does not fix
Date: Mon, 17 Feb 2014 03:20:58 +0000 (UTC) [thread overview]
Message-ID: <pan$475be$cb0c1165$a3d91acc$c206709@cox.net> (raw)
In-Reply-To: <21AD6EBC-FDDA-4BDE-B0B4-D6A8BBAD58F0@colorremedies.com>
Chris Murphy posted on Sun, 16 Feb 2014 12:54:44 -0700 as excerpted:
> On Feb 16, 2014, at 12:18 PM, Hendrik Friedel <hendrik@friedels.name>
> wrote:
[On balance converting to single from raidN:]
>> I think it didn't work.
>>
>> btrfs balance start -dconvert=single -mconvert=single -sconvert=single
>> --force /mnt/BTRFS/Video/
>> After >10h:
>> btrfs balance status /mnt/BTRFS/Video/
>> No balance found on '/mnt/BTRFS/Video/'
>> root@homeserver:~# btrfs fi df /mnt/BTRFS/Video/
>> Data, RAID0: total=4.00GB, used=4.00GB
>> Data: total=2.29TB, used=2.29TB
>> System: total=32.00MB, used=256.00KB
>> Metadata: total=4.00GB, used=2.57GB
>
> It looks like everything is single except for 4GB of data which is still
> raid0. Weird. There should be a bunch of messages in dmesg during a
> normal/successful balance, and either something mentioned or missing
> might provide a clue why some chunks weren't converted.
Agreed.
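One way to fish those kernel messages out after the fact (the journalctl variant assumes a systemd system with a persistent journal; adjust the boot offset as needed):

```shell
# Kernel messages from the balance attempt, most recent last:
dmesg | grep -i btrfs | tail -n 50

# On systemd systems the journal keeps kernel messages across boots;
# -b -1 looks at the previous boot if the box has been rebooted since:
journalctl -k -b -1 | grep -i btrfs
```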
> Unmounted, what do you get for btrfs check?
Agreed, but it's worth an explanation and explicit warning just in case...
btrfs check is read-only by default -- it'll tell you what it thinks is
wrong, but won't attempt to correct anything. Adding --repair tells it
to try to correct the errors it found, but the recommendation is to NOT
use --repair unless it's a last-ditch effort after other things have
failed, and preferably only after a btrfs dev says to, because sometimes
it can make things worse instead of better.
So running the (read-only) /check/ to see what it says is a good idea,
but do NOT try to run it with --repair just yet, no matter what errors it
thinks it sees.
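A sketch of that sequence (the device name here is hypothetical; substitute one of the filesystem's actual devices, and unmount first):

```shell
umount /mnt/BTRFS/Video/

# Read-only by default: reports what it thinks is wrong, changes nothing.
btrfs check /dev/sdX1

# Do NOT do this yet -- last resort only, preferably on dev advice:
# btrfs check --repair /dev/sdX1
```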
>> Do you have an idea what could be wrong?
>
> No. I'd say it's a bug. 3.14rc3 should be out today, and might be worth
> a shot. Or btrfs-next. If you try again, you only need to convert the
> data profile.
https://btrfs.wiki.kernel.org/index.php/Balance_Filters
Based on that, I'd suggest:
btrfs balance start -dconvert=single,soft /mnt/BTRFS/Video/
Given that there's only 4 GiB left to convert, it should go MUCH faster
than the 10 hours the multiple TiB took.
> Also, 10 hours to balance two disks at 2.3TB seems like a long time. I'm
> not sure if that's expected.
FWIW, I think you may not realize how big 2.3 TiB is, and/or how slow
spinning rust can be when dealing with TiBs of potentially fragmented
data...
2.3 TiB * 1024 GiB/TiB * 1024 MiB/GiB / (10 hr * 60 min/hr * 60 s/min)
= 66.99... MiB/sec, real close to 67 MiB/sec
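That back-of-the-envelope number is easy to check with a one-liner (the 2.3 TiB and 10 hours are the figures from the thread):

```shell
# Average balance throughput: data moved divided by elapsed time.
awk 'BEGIN {
    mib  = 2.3 * 1024 * 1024   # 2.3 TiB expressed in MiB
    secs = 10 * 60 * 60        # 10 hours in seconds
    printf "%.2f MiB/s\n", mib / secs
}'
# -> 66.99 MiB/s
```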
Since we're talking multiple TiB on only two devices, that's almost
certainly spinning rust, not SSD, and on spinning rust 67 MiB/sec really
isn't /that/ bad, especially if the filesystem wasn't new and had seen
reasonable use, thus likely had some fragmentation to deal with.
But the good news is that the 4 GiB remaining should be much faster; at
the ~67 MiB/sec average above, we're talking about a minute.
Throwing in that "soft" should tell it to ignore the previously converted
data, and only balance data chunks that aren't yet in the target single
profile, so it should only do the 4 GiB that's still raid0, not redo the
multiple TiB.
Though it will probably still have to check the profile on each chunk,
and if that remaining data is hugely fragmented or something, that could
take a bit longer -- two minutes or ten minutes instead of one -- but if
it's more than an hour, I'd definitely be wondering what's up!
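Putting that together, the rerun plus a quick verification might look like this (mount point from the thread; the shell's "time" just records how long the pass takes):

```shell
# Re-run the conversion; "soft" skips chunks already in the target
# single profile, so only the leftover raid0 chunks get rewritten.
time btrfs balance start -dconvert=single,soft /mnt/BTRFS/Video/

# Verify: the "Data, RAID0" line should be gone afterwards.
btrfs fi df /mnt/BTRFS/Video/
```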
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Thread overview: 36+ messages
2014-01-03 19:41 btrfsck does not fix Hendrik Friedel
2014-01-03 23:33 ` Chris Murphy
2014-01-04 21:21 ` Hendrik Friedel
2014-01-05 13:36 ` Hendrik Friedel
2014-01-05 16:55 ` Chris Murphy
2014-01-07 20:38 ` Hendrik Friedel
2014-01-10 23:53 ` Hendrik Friedel
2014-01-11 1:05 ` Chris Murphy
2014-01-12 22:31 ` Hendrik Friedel
2014-01-14 0:40 ` Chris Murphy
2014-01-14 6:03 ` Duncan
2014-01-14 7:49 ` Chris Murphy
2014-01-14 9:30 ` Duncan
2014-01-14 9:38 ` Hugo Mills
2014-01-14 17:17 ` Chris Murphy
2014-01-18 7:20 ` Chris Samuel
2014-01-14 8:16 ` Hugo Mills
2014-01-19 19:37 ` Martin Steigerwald
2014-01-21 20:00 ` Hendrik Friedel
2014-01-21 20:01 ` Hendrik Friedel
2014-02-08 22:01 ` Hendrik Friedel
2014-02-09 0:45 ` Chris Murphy
2014-02-09 8:36 ` Hendrik Friedel
2014-02-11 1:45 ` Chris Murphy
2014-02-11 2:23 ` Chris Murphy
2014-02-16 19:18 ` Hendrik Friedel
2014-02-16 19:54 ` Chris Murphy
2014-02-17 3:20 ` Duncan [this message]
2014-02-17 9:41 ` Goswin von Brederlow
2014-02-18 21:55 ` Hendrik Friedel
2014-02-18 22:12 ` Chris Murphy
2014-03-02 18:39 ` Hendrik Friedel
2014-03-03 22:35 ` Chris Murphy
2014-03-04 6:42 ` Hendrik Friedel
2014-03-04 17:02 ` Chris Murphy
2014-03-03 1:09 ` Russell Coker