From: Hans-Kristian Bakke <hkbakke@gmail.com>
To: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Blocket for more than 120 seconds
Date: Sun, 15 Dec 2013 03:35:53 +0100	[thread overview]
Message-ID: <CAD_cGvF8-pxXXvwpB+yNYnKRW09S4c0VuKUGiWa7=8C2uFkvqA@mail.gmail.com> (raw)
In-Reply-To: <840381F8-BDCA-43BF-A170-6E10C2908B8A@colorremedies.com>

I have done some more testing. I turned off everything using the disk
and only ran defrag. I have created a script that gives me a list of
the files with the most extents, and I started from the top to improve
the fragmentation of the worst files first. The most fragmented file
was about 32 GB with over 250 000 extents!
It seems that I can defrag two to three largish (15-30 GB) files with
~100 000 extents each just fine, but after a while the system locks up
(not a complete hard lock, but everything hangs and a restart is
necessary to get a fully working system again).
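
The script is nothing fancy; a stripped-down sketch of the idea (not
necessarily the exact script, and it assumes filefrag from e2fsprogs
plus GNU find/awk/sort) is basically:

#!/bin/sh
# List the most fragmented files under a path by extent count.
# Usage: ./worst-extents.sh /path/to/scan (defaults to the current dir).
# The extent count is the third-to-last field of each filefrag line,
# so filenames with spaces are still sorted correctly.
find "${1:-.}" -xdev -type f -exec filefrag {} + 2>/dev/null \
    | awk '{ print $(NF-2), $0 }' \
    | sort -rn \
    | head -n 25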

It seems like the defrag operations are triggering the issue, probably
in combination with the large and heavily fragmented files.

I have slowly managed to defragment the most fragmented files,
rebooting 4 times along the way, so one of the worst remaining files
is now this one:

# filefrag vide01.mkv
vide01.mkv: 77810 extents found
# lsattr vide01.mkv
---------------- vide01.mkv
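
(The per-file defrag itself is nothing special, just something along
the lines of the command below; -v prints progress, and -t can
optionally give a target extent size hint:)

# btrfs filesystem defragment -v vide01.mkv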

All the large fragmented files are ordinary mkv files (video). The
reason for the heavy fragmentation is that perhaps 50 to 100 files
were written at the same time over a period of several days, with
lots of other activity going on as well. That was no problem for the
system, as it was network limited most of the time.
Although defrag alone can trigger the blocking, so can a straight
rsync from another internal array (capable of 1000 MB/s continuous
reads) combined with some random activity. It seems that the cause is
simply heavy IO. Is it possible that even though I seemingly have
lots of free space in measured MBytes, it is all so fragmented that
btrfs can't allocate space efficiently enough? Or would that give
other errors?
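
For what it's worth, the chunk allocation versus the actual usage can
be checked with the usual commands (the mount point in the second one
is just a placeholder for the real one here):

# btrfs filesystem show
# btrfs filesystem df /mnt/array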

I actually downgraded from kernel 3.13-rc2 because I could not do
anything else while copying between the internal arrays without btrfs
hanging, although that hanging was seemingly just temporary and not
as bad as the defrag blocking.

I will try to free up some space before running more defrag too, just
to check if that is the issue.

Best regards

Hans-Kristian Bakke


On 15 December 2013 02:59, Chris Murphy <lists@colorremedies.com> wrote:
>
> On Dec 14, 2013, at 5:28 PM, Hans-Kristian Bakke <hkbakke@gmail.com> wrote:
>
>> When I look at the entire FS with df-like tools it is reported as
>> 89.4% used (26638.65 of 29808.2 GB). But this is shared amongst both
>> data and metadata I guess?
>
> Yes.
>
>>
>> I do know that ~90%+ seems full, but it is still around 3TB in my
>> case! Are the "percentage rules" of old times still valid with modern
>> disk sizes?
>
> Probably not. But you also reported rather significant fragmentation. And it's also still an experimental file system even when not ~ 90% full. I think it's fair to say that this level of fullness is a less tested use case.
>
>
>
>> It seems extremely inconvenient that a filesystem like
>> btrfs is starting to misbehave at "only" 3TB available space for
>> RAID10 mirroring and metadata, which probably amounts to a little
>> over 1TB of actual file storage counting everything in.
>
> I'm not suggesting the behavior is either desired or expected, but certainly blocking is better than an oops or a broken file system, and in the not too distant past such things have happened on full volumes. Given the level of fragmentation this behavior might be expected at the current state of development, for all I know.
>
> But if you care about this data, I'd take the blocking as a warning to back off on this usage pattern, unless of course you're intentionally trying to see at what point it breaks and why.
>
>>
>> I would normally expect that there is no difference between 1TB of
>> free space on a FS that is 2TB in total and 1TB of free space on a
>> filesystem that is 30TB in total, other than my sense of urgency,
>> and that you would probably expect data growth to be more rapid on
>> the 30TB FS as there is obviously a need to store a lot of stuff.
>
> Seems reasonable.
>
>
>> Is "free space needed" really a different concept dependning on the
>> size of your FS?
>
> Maybe it depends more on the size and fragmentation of the files being accessed, and on the remaining free space.
>
> Can you do an lsattr on these 25GB files that you say have ~ 100,000 extents? And what are these files?
>
>
>
> Chris Murphy


Thread overview: 23+ messages
2013-12-14 20:30 Blocket for more than 120 seconds Hans-Kristian Bakke
2013-12-14 21:35 ` Chris Murphy
2013-12-14 23:19   ` Hans-Kristian Bakke
2013-12-14 23:50     ` Chris Murphy
2013-12-15  0:28       ` Hans-Kristian Bakke
2013-12-15  1:59         ` Chris Murphy
2013-12-15  2:35           ` Hans-Kristian Bakke [this message]
2013-12-15 13:24             ` Duncan
2013-12-15 14:51               ` Hans-Kristian Bakke
2013-12-15 23:08                 ` Duncan
2013-12-16  0:06                   ` Hans-Kristian Bakke
2013-12-16 10:19                     ` Duncan
2013-12-16 10:55                       ` Hans-Kristian Bakke
2013-12-16 15:00                         ` Duncan
2013-12-16 15:18             ` Chris Mason
2013-12-16 16:32               ` Hans-Kristian Bakke
2013-12-16 18:16                 ` Chris Mason
2013-12-16 18:22                   ` Hans-Kristian Bakke
2013-12-16 18:33                     ` Chris Mason
2013-12-16 18:41                       ` Hans-Kristian Bakke
2013-12-15  3:47         ` George Mitchell
2013-12-15 23:39       ` Charles Cazabon
2013-12-16  0:16         ` Hans-Kristian Bakke
