* unclean shutdown and space cache rebuild
@ 2013-06-30 13:56 Shridhar Daithankar
  2013-06-30 17:53 ` Garry T. Williams
  0 siblings, 1 reply; 12+ messages in thread
From: Shridhar Daithankar @ 2013-06-30 13:56 UTC (permalink / raw)
  To: linux-btrfs

Hello,

I have 3 partitions with btrfs (/, /home and /data). All of them have the 
following mount options:

noatime,space_cache,inode_cache,compress=lzo,defaults
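
For reference, a representative fstab line for one of these (the device 
name here is illustrative):

    /dev/sda2  /home  btrfs  noatime,space_cache,inode_cache,compress=lzo,defaults  0  0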

Whenever there is an unclean shutdown (which happens a lot in my case), 
on the next reboot the system comes up at roughly the usual speed, but 
as systemd starts up daemons, the disk grinds continuously (and for an 
unusually long time).

This causes random delays and failures in various daemons, such as 
postgresql failing to start or kdm timing out waiting for the xorg 
server, and I have to reboot after the dust settles to bring the system 
back to normal.  At one point even the keyboard/mouse stopped responding 
because some dbus service timed out.

I think it is rebuilding the space cache, because I saw similarly long 
disk activity when I first activated it.

How can I confirm that it is the space cache rebuild that's taking the time?

If the space cache rebuild is the reason, is there any way to improve it?

I am running an archlinux/systemd/kde setup with two 7200 RPM Seagate SATA 
disks (no RAID, one 80 GB for / and /home, the other 500 GB for /data). The 
kernel is 3.9.8 x86_64.

Thanks.

-- 
Regards
 Shridhar


* Re: unclean shutdown and space cache rebuild
  2013-06-30 13:56 unclean shutdown and space cache rebuild Shridhar Daithankar
@ 2013-06-30 17:53 ` Garry T. Williams
  2013-06-30 19:58   ` Pete
                     ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Garry T. Williams @ 2013-06-30 17:53 UTC (permalink / raw)
  To: linux-btrfs

On 6-30-13 19:26:16 Shridhar Daithankar wrote:
> Whenever there is an unclean shutdown (which happens a lot in my
> case), on the next reboot the system comes up at roughly the usual
> speed, but as systemd starts up daemons, the disk grinds continuously
> (and for an unusually long time).

[snip]

> How can I confirm that it is the space cache rebuild that's taking
> the time?
> 
> If the space cache rebuild is the reason, is there any way to improve
> it?
> 
> I am running archlinux/systemd/kde

I suspect this is, at least in part, related to severe fragmentation
in /home.

There are large files in these directories that are updated frequently
by various components of KDE and the Chrome browser.  (Firefox has its
own databases that are frequently updated, too.)

    ~/.local/share/akonadi
    ~/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend
    ~/.cache/chromium/Default/Cache
    ~/.cache/chromium/Default/Media\ Cache

I improved performance dramatically (orders of magnitude) by copying
the database files into an empty file that was modified with:

    chattr +C

and renaming to make the files no COW.  (Note that this is the only
way to change an existing file to no COW.)  I also set the same
attribute on the owning directories so that all new files inherit the
no COW attribute.
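
In shell terms the conversion looks roughly like this (an untested
sketch; db.sqlite is a stand-in name, and the application using the
file should be stopped first):

    $ touch db.sqlite.new          # create an empty file
    $ chattr +C db.sqlite.new      # mark it no COW while still empty
    $ cp db.sqlite db.sqlite.new   # copy the data into the no-COW file
    $ mv db.sqlite.new db.sqlite   # rename over the original
    $ chattr +C .                  # new files here inherit the attribute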

I suspect there are other files that fragment badly since I see
periods of high disk activity coming back slowly over a few weeks of
use after making the modifications above.  I intend to track them down
and do the same.

Also, see these:

    https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#Defragmenting_a_directory_doesn.27t_work 
    https://btrfs.wiki.kernel.org/index.php/UseCases#How_do_I_defragment_many_files.3F 

    $ uname -r
    3.9.6-200.fc18.x86_64
    $

-- 
Garry T. Williams



* Re: unclean shutdown and space cache rebuild
  2013-06-30 17:53 ` Garry T. Williams
@ 2013-06-30 19:58   ` Pete
  2013-06-30 20:10   ` Clemens Eisserer
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Pete @ 2013-06-30 19:58 UTC (permalink / raw)
  To: linux-btrfs

On 06/30/2013 06:53 PM, Garry T. Williams wrote:
> On 6-30-13 19:26:16 Shridhar Daithankar wrote:
>> Whenever there is an unclean shutdown (which happens a lot in my
>> case), on the next reboot the system comes up at roughly the usual
>> speed, but as systemd starts up daemons, the disk grinds continuously
>> (and for an unusually long time).
>

>> I am running archlinux/systemd/kde
>
> I suspect this is, at least in part, related to severe fragmentation
> in /home.
>

I'm wondering if this is affecting me.  I have a big issue with my 
data drive slowing down, and there being long periods of high disk 
IO that prevent me from doing anything else.  I've noticed in iotop 
various btrfs processes hogging the IO for long periods, e.g. 
btrfs-transacti... & btrfs-submit.

I've been running kde, which has become unusable (not just after a 
reboot, but in general).  xfce is less hampered, but IO still seems 
like an issue at times; of course, xfce hits different files.  I've 
been using this file system for a couple of months and had never 
defragged it before.  I started defragging the various subvolumes a 
week or two ago, but I did not realise until this weekend that defrag 
is not recursive.  I've now got a python script running defrag on 
various files and folders, so I can better track what it is 
defragging.  But it is _slow_: many, many minutes for a rarely 
accessed folder with little content.  Is this normal?
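
(For comparison, the shell equivalent I know of is something like this
untested sketch; -xdev keeps find from crossing into nested subvolumes,
which show up as separate devices:

    # find /mnt/data -xdev -type f -exec btrfs filesystem defragment {} +

where /mnt/data stands for whichever subvolume is being defragged.)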

I too had an issue with unclean shutdowns.  I get lockups relatively 
infrequently, but I had a spate of them last week which I have yet to 
resolve.  I wonder if that is related.

I also wonder: if I defrag everything on, say, a weekly basis, will 
these performance issues go away?  Running a 3.9.3 kernel.

Pete



> There are large files in these directories that are updated frequently
> by various components of KDE and the Chrome browser.  (Firefox has its
> own databases that are frequently updated, too.)
>
>      ~/.local/share/akonadi
>      ~/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend
>      ~/.cache/chromium/Default/Cache
>      ~/.cache/chromium/Default/Media\ Cache
>
> I improved performance dramatically (orders of magnitude) by copying
> the database files into an empty file that was modified with:
>
>      chattr +C
>
> and renaming to make the files no COW.  (Note that this is the only
> way to change an existing file to no COW.)  I also set the same
> attribute on the owning directories so that all new files inherit the
> no COW attribute.
>
> I suspect there are other files that fragment badly since I see
> periods of high disk activity coming back slowly over a few weeks of
> use after making the modifications above.  I intend to track them down
> and do the same.
>
> Also, see these:
>
>      https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#Defragmenting_a_directory_doesn.27t_work
>      https://btrfs.wiki.kernel.org/index.php/UseCases#How_do_I_defragment_many_files.3F
>
>      $ uname -r
>      3.9.6-200.fc18.x86_64
>      $
>



* Re: unclean shutdown and space cache rebuild
  2013-06-30 17:53 ` Garry T. Williams
  2013-06-30 19:58   ` Pete
@ 2013-06-30 20:10   ` Clemens Eisserer
  2013-06-30 21:20   ` Duncan
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Clemens Eisserer @ 2013-06-30 20:10 UTC (permalink / raw)
  To: linux-btrfs

Hi,

> I suspect this is, at least in part, related to severe fragmentation
> in /home.

In his case those issues are only present after an unclean shutdown,
whereas fragmentation would show its effect after every reboot.

Regards, Clemens


* Re: unclean shutdown and space cache rebuild
  2013-06-30 17:53 ` Garry T. Williams
  2013-06-30 19:58   ` Pete
  2013-06-30 20:10   ` Clemens Eisserer
@ 2013-06-30 21:20   ` Duncan
  2013-06-30 23:12   ` Roger Binns
  2013-07-01  2:50   ` Shridhar Daithankar
  4 siblings, 0 replies; 12+ messages in thread
From: Duncan @ 2013-06-30 21:20 UTC (permalink / raw)
  To: linux-btrfs

Garry T. Williams posted on Sun, 30 Jun 2013 13:53:48 -0400 as excerpted:

> I suspect this is, at least in part, related to severe fragmentation in
> /home.
> 
> There are large files in these directories that are updated frequently
> by various components of KDE and the Chrome browser.  (Firefox has its
> own databases that are frequently updated, too.)
> 
>     ~/.local/share/akonadi
>     ~/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend
>     ~/.cache/chromium/Default/Cache
>     ~/.cache/chromium/Default/Media\ Cache
> 
> I improved performance dramatically (orders of magnitude) by copying the
> database files into an empty file that was modified with:
> 
>     chattr +C
> 
> and renaming to make the files no COW.  (Note that this is the only way
> to change an existing file to no COW.)  I also set the same attribute on
> the owning directories so that all new files inherit the no COW
> attribute.
> 
> I suspect there are other files that fragment badly since I see periods
> of high disk activity coming back slowly over a few weeks of use after
> making the modifications above.  I intend to track them down and do the
> same.

This definitely won't be practical for everyone, but...

1) I run kde here, but switched away from kmail, akregator, basically 
anything kdepim related, when that akonadified.  I had been using kmail 
for nearly a decade, and it had converted MSOE mail in it from before the 
turn of the century (!!), but one day when akonadi simply lost an email 
for the Nth time in so many days, the question occurred to me, why do I 
put up with this when there's so many sane alternatives?  Yes, I could 
have probably recovered that mail as I had others by redoing the akonadi 
resources or whatever, but the question was, why should I *HAVE* to, 
again, when there's all sorts of saner alternatives that, like kmail 
before the akonadi insanity, didn't lose mail in the first place?

So I switched, choosing claws-mail to replace both kmail and akregator 
here, FWIW, but there's other alternatives for those who don't like claws-
mail.

And when I switched that off, I began wondering about semantic-desktop at 
all, even tho it was run-time switched off.  So being a gentooer who had 
the option, I set USE=-semantic-desktop and a few other flags and rebuilt 
the affected bits of kde.  Now no more semantic-desktop AT ALL! =:^)  
(Unfortunately, the gentoo/kde folks decided to kill the option and hard-
enable semantic-desktop for the coming 4.11, which I'm running the betas 
of ATM, but using the diffs between the 4.10 ebuilds with the option and 
the 4.11 builds without, I was able to patch out the support, so I now 
run with semantic-desktop build-time hard-disabled, instead of the normal 
gentoo 4.11 hard-enabled, here.)

So no gigabytes of nepomuk and akonadi files doing nothing but create 
problems for me, here!  I do run firefox, but haven't seen a problem with 
it, either, possibly due to #2...

2) Throw hardware at the problem.  About a month ago I finally bit the 
financial bullet and upgraded to (mostly) SSD.  My media partition and 
backups are still on spinning rust (on reiserfs since btrfs is still 
experimental), but the main system and /home are now on dual (fairly 
fast, Corsair Neutron) SSD, on btrfs in raid1 mode (both data/metadata).

That's actually why I'm running btrfs here, as my old standby, reiserfs, 
while highly reliable on spinning rust (yes, I know the rumors, but after 
a decade on reiserfs on spinning rust, it has been /extremely/ reliable 
for me, at least it has been since the data=ordered switch in kernel 
2.6.16 IIRC, even thru various hardware issues!), isn't particularly 
appropriate for SSD.

So I run btrfs, which detects my SSDs and activates SSD mode 
automatically, here.  I use dual SSDs in btrfs raid1 mode, in order to 
take advantage of btrfs data integrity with the checksumming.

And with the SSDs, there's no mechanical seek latency and the IOPS are 
high enough that, at least with the btrfs autodefrag mount option 
activated from the beginning, I've seen no noticeable fragmentation 
slowdowns either.  (It may also help that I'm close to 100% 
overprovisioned on the SSDs as my effectively write-once-read-many media 
and backups remain on reiserfs on the spinning rust, so even without the 
trim option, the SSDs have plenty of room to do their write-management.)

What I've noticed most is that the small, often-written files that 
caused fragmentation issues on reiserfs on spinning rust generally 
aren't an issue at all on btrfs on SSD.  This includes my main gentoo 
package tree and overlays, my git kernel checkout, and pan's nntp message 
cache (by default 10 MB with two-week header expiry, but I'm running 
nearly a gig of text messages with no expiry, with messages on some gmane 
mailing-list groups I follow going back to 2002).  All these are an order 
of magnitude at least faster on btrfs on the SSDs than they were on 
reiserfs on spinning rust, without the slowdowns I saw due to 
fragmentation on spinning rust, either.

So that has been my answer to the fragmentation issue.  The space-cache 
issue of the OP may be different.  I had some problems with that and 
unclean shutdown when I first experimented with btrfs a bit over a year 
ago (still on spinning rust at the time), but I decided it wasn't ready 
for me then and waited a year, and I haven't had problems this go-round.  
I've only had a couple of unclean shutdowns, however, and the system did 
seem to come right back up afterward.  The SSDs may be playing some role 
in that too, tho.  But I've really not had enough unclean shutdowns to be 
sure, yet.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: unclean shutdown and space cache rebuild
  2013-06-30 17:53 ` Garry T. Williams
                     ` (2 preceding siblings ...)
  2013-06-30 21:20   ` Duncan
@ 2013-06-30 23:12   ` Roger Binns
  2013-07-01  2:50   ` Shridhar Daithankar
  4 siblings, 0 replies; 12+ messages in thread
From: Roger Binns @ 2013-06-30 23:12 UTC (permalink / raw)
  To: linux-btrfs

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 30/06/13 10:53, Garry T. Williams wrote:
> ~/.cache/chromium/Default/Cache ~/.cache/chromium/Default/Media\ Cache

I've taken to making ~/.cache a tmpfs, and all the apps have been fine
with that.  It also meant I didn't have to worry about my btrfs snapshots
being full of transient web junk.
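
Something like this fstab entry, where the user name and size are
placeholders:

    tmpfs  /home/roger/.cache  tmpfs  noatime,nosuid,nodev,size=1g,uid=roger,gid=roger,mode=0700  0  0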

Roger
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEARECAAYFAlHQu0wACgkQmOOfHg372QRTMACg1YQx1B6liiLnVpOZLxnoHC+W
5ewAn1Z40V/52dongHBpg6OUdprUVqwo
=601F
-----END PGP SIGNATURE-----



* Re: unclean shutdown and space cache rebuild
  2013-06-30 17:53 ` Garry T. Williams
                     ` (3 preceding siblings ...)
  2013-06-30 23:12   ` Roger Binns
@ 2013-07-01  2:50   ` Shridhar Daithankar
  2013-07-01  9:10     ` Duncan
  4 siblings, 1 reply; 12+ messages in thread
From: Shridhar Daithankar @ 2013-07-01  2:50 UTC (permalink / raw)
  To: Garry T. Williams; +Cc: linux-btrfs

On Sunday, June 30, 2013 01:53:48 PM Garry T. Williams wrote:
> I suspect this is, at least in part, related to severe fragmentation
> in /home.

I don't think so.  The problem I have described occurs only before anybody 
logs in to the system, and since /home is a separate partition, it is not 
the problem in this case.
> 
> There are large files in these directories that are updated frequently
> by various components of KDE and the Chrome browser.  (Firefox has its
> own databases that are frequently updated, too.)
> 
>     ~/.local/share/akonadi

That's 3.9MB in my case, since I point the akonadi db at a systemwide 
postgresql instance.  Of course, that will just shift the fragmentation there.

>     ~/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend

damn!

# filefrag soprano-virtuoso.db
soprano-virtuoso.db: 10518 extents found

# btrfs fi defrag soprano-virtuoso.db

# filefrag soprano-virtuoso.db
soprano-virtuoso.db: 957 extents found

How big is an extent anyway?  Is it a page, or 256M?


>     ~/.cache/chromium/Default/Cache
>     ~/.cache/chromium/Default/Media\ Cache

I don't use chromium, but I get the idea.

But in general, how does one find the most fragmented files and folders? 
Mounting with autodefrag is a serious degradation..
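
The best I can come up with is ranking filefrag output, along these
lines (an untested sketch; the 100-extent cutoff is arbitrary, and it
mis-splits any filename containing ": "):

    # find /home -xdev -type f -exec filefrag {} + 2>/dev/null |
        awk -F': ' '$2+0 > 100 { print $2+0, $1 }' | sort -rn | head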

> I improved performance dramatically (orders of magnitude) by copying
> the database files into an empty file that was modified with:
> 
>     chattr +C
> 
> and renaming to make the files no COW.  (Note that this is the only
> way to change an existing file to no COW.)  I also set the same
> attribute on the owning directories so that all new files inherit the
> no COW attribute.
> 
> I suspect there are other files that fragment badly since I see
> periods of high disk activity coming back slowly over a few weeks of
> use after making the modifications above.  I intend to track them down
> and do the same.

Hmm.. a trick to find the most badly fragmented files/directories and 
defragment just those should do too, I think.


-- 
Regards
 Shridhar


* Re: unclean shutdown and space cache rebuild
  2013-07-01  2:50   ` Shridhar Daithankar
@ 2013-07-01  9:10     ` Duncan
  2013-07-01 16:19       ` Shridhar Daithankar
  0 siblings, 1 reply; 12+ messages in thread
From: Duncan @ 2013-07-01  9:10 UTC (permalink / raw)
  To: linux-btrfs

Shridhar Daithankar posted on Mon, 01 Jul 2013 08:20:19 +0530 as
excerpted:

> On Sunday, June 30, 2013 01:53:48 PM Garry T. Williams wrote:

[discussing fragmentation]
> 
>> ~/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend
> 
> damn!
> 
> # filefrag soprano-virtuoso.db
> soprano-virtuoso.db: 10518 extents found
> 
> # btrfs fi defrag soprano-virtuoso.db
> 
> # filefrag soprano-virtuoso.db
> soprano-virtuoso.db: 957 extents found

While you evidently had quite some fragmentation, as the number of 
extents dropped considerably, if you're running btrfs compression it's 
worth noting that (based on earlier posts here) filefrag always counts 
compressed files as fragmented, even if they're not.  So a sufficiently 
large file will almost certainly show fragmentation via filefrag if it's 
compressed, and all you can do is use filefrag as a hint in that case; 
defrag may well not do anything if the file is not actually fragmented.

> How big is an extent anyway?  Is it a page, or 256M?

I don't know...
 
> But in general, how does one find the most fragmented files and folders?
> Mounting with autodefrag is a serious degradation..

It is?  AFAIK, all the autodefrag mount option does is scan files for 
fragmentation as they are written and queue any file in which 
fragmentation is detected for background defrag by the defrag thread.
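
It's also cheap to toggle for a test, since it's just a mount option;
something like:

    # mount -o remount,autodefrag /home

and then remount without it to compare.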

I had expected, particularly on spinning rust, the benefits of 
autodefrag to far exceed the costs, so your performance drag claim is 
interesting to me indeed.  If my expectation is wrong, which it could be, 
I'd love to know why, and to see some numbers.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: unclean shutdown and space cache rebuild
  2013-07-01  9:10     ` Duncan
@ 2013-07-01 16:19       ` Shridhar Daithankar
  2013-07-02 13:00         ` Duncan
  0 siblings, 1 reply; 12+ messages in thread
From: Shridhar Daithankar @ 2013-07-01 16:19 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

On Monday, July 01, 2013 09:10:41 AM Duncan wrote:
> > But in general, how does one find the most fragmented files and folders?
> > Mounting with autodefrag is a serious degradation..
> 
> It is?  AFAIK, all the autodefrag mount option does is scan files for
> fragmentation as they are written and queue any file in which
> fragmentation is detected for background defrag by the defrag thread.
> 
> I had expected, particularly on spinning rust, the benefits of
> autodefrag to far exceed the costs, so your performance drag claim is
> interesting to me indeed.  If my expectation is wrong, which it could be,
> I'd love to know why, and to see some numbers.

While I don't have numbers, I enabled autodefrag on all the partitions 
and rebooted (twice, just to confirm), and it's slow..

Everything has a 10-second tail of disk activity and quite some visible 
latency.  Moving the mouse, switching windows, starting new programs: 
everything has visible latency that's unusable.

It seems autodefrag is being too aggressive for its own good..

I am sticking with defragging folders individually.  /var, /home and a 
1GB squid cache are what I have narrowed it down to, and things are 
reasonably fast.
-- 
Regards
 Shridhar


* Re: unclean shutdown and space cache rebuild
  2013-07-01 16:19       ` Shridhar Daithankar
@ 2013-07-02 13:00         ` Duncan
  2013-07-02 15:49           ` Shridhar Daithankar
  0 siblings, 1 reply; 12+ messages in thread
From: Duncan @ 2013-07-02 13:00 UTC (permalink / raw)
  To: linux-btrfs

Shridhar Daithankar posted on Mon, 01 Jul 2013 21:49:16 +0530 as
excerpted:

> On Monday, July 01, 2013 09:10:41 AM Duncan wrote:
>> 
>>> Mounting with autodefrag is a serious degradation..
>> 
>> It is?  AFAIK, all the autodefrag mount option does is scan files for
>> fragmentation as they are written and queue any file in which
>> fragmentation is detected for background defrag by the defrag thread.
>> 
>> I had expected, particularly on spinning rust, the benefits of
>> autodefrag to far exceed the costs, so your performance drag claim is
>> interesting to me indeed.  If my expectation is wrong, which it could
>> be, I'd love to know why, and to see some numbers.
> 
> While I don't have numbers, I enabled autodefrag on all the partitions
> and rebooted (twice, just to confirm), and it's slow..
> 
> Everything has a 10-second tail of disk activity and quite some visible
> latency.  Moving the mouse, switching windows, starting new programs:
> everything has visible latency that's unusable.
> 
> It seems autodefrag is being too aggressive for its own good..

Just to be clear, your system, your call.  I'd never /dream/ of 
interfering with that due to the implications for my own system (which is 
certainly highly customized even matched against a peer-group of other 
gentoo installs =:^).  That said...

I'm guessing that what you experienced with the autodefrag mount option 
was because you were not in a stable state yet.  The original btrfs 
filesystem setup and fill was very likely done without the flag on[2], so 
there's quite a lot of existing fragmentation that would have to be 
worked thru before the filesystem gets defragged and you reach a stable 
state, at which point I'd expect the autodefrag mount option to have 
little overhead.

Tho if what you're saying is correct[1] then it may be that the 
background defrag thread isn't (io-)niced as I would have expected it to 
be.

But I'd still expect there to be a better-performing steady state 
after a few mounts get the basic filesystem defragged.  Tho if the 
filesystem is heavily fragmented[2], in practice it may well be easier to 
back up the filesystem content, do a clean mkfs, mount with autodefrag, 
and restore from backup, thus ensuring autodefrag is on while filling the 
filesystem in the first place, than to wait for autodefrag to reach a 
stable system state in normal operation over many mounts.

All that stated, you've definitely demonstrated that I hadn't put enough 
thought into my initial general-case assumptions, which now come with far 
more qualifiers than they did before this subthread.  Thanks. =:^)

[1] I simply don't know from personal experience, as I (1) ensured I 
enabled autodefrag on the empty filesystem before I started filling it, 
and (2) am on fast ssd, an entirely different world from spinning rust.

[2] Various comments I've read seem to hint that, surprisingly, certain 
distro installers leave a brand new install in a heavily fragmented 
state, as they apparently install without autodefrag on and also 
apparently do heavy rewriting into existing files, thereby triggering 
heavy fragmentation during the install.  No, I've not bothered to track 
which distros; I simply ensured autodefrag was on, here, before filling 
my filesystems in the first place.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: unclean shutdown and space cache rebuild
  2013-07-02 13:00         ` Duncan
@ 2013-07-02 15:49           ` Shridhar Daithankar
  2013-07-05  3:45             ` Shridhar Daithankar
  0 siblings, 1 reply; 12+ messages in thread
From: Shridhar Daithankar @ 2013-07-02 15:49 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

On Tuesday, July 02, 2013 01:00:29 PM Duncan wrote:
> Just to be clear, your system, your call.  I'd never /dream/ of
> interfering with that due to the implications for my own system (which is
> certainly highly customized even matched against a peer-group of other
> gentoo installs =:^).  That said...
> 
> I'm guessing that what you experienced with the autodefrag mount option
> was because you were not in a stable state yet.  The original btrfs
> filesystem setup and fill was very likely done without the flag on[2], so
> there's quite a lot of existing fragmentation that would have to be
> worked thru before the filesystem gets defragged and you reach a stable
> state, at which point I'd expect the autodefrag mount option to have
> little overhead.

Yes, I suspect as much.  One of the data partitions I have is over a year 
old and has never been defragged; /home is about 3 months old.

And I can see the defragmentation working when run by hand.  The initial 
run on /var (directories only, to defrag the metadata) took 6 min.  Now 
that I am running it daily by hand, the entire /var (including files; that 
covers a 1.8G pacman cache, a few tiny postgresql databases, and /var/tmp, 
where kde generates tons of IO) takes about 6-7 min.
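
(The directories-only pass was essentially this, on my understanding
that defragmenting a directory itself defrags its metadata:

    # find /var -xdev -type d -exec btrfs fi defrag {} +

with the path adjusted per filesystem.)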

Despite my experience with autodefrag, I want to use it, because that's 
the best solution (kind of like autovacuum in postgresql): it keeps things 
clean, all the time.

> 
> Tho if what you're saying is correct[1] then it may be that the
> background defrag thread isn't (io-)niced as I would have expected it to
> be.
> 
> But I'd still expect there to be a better-performing steady state
> after a few mounts get the basic filesystem defragged.  Tho if the
> filesystem is heavily fragmented[2], in practice it may well be easier to
> back up the filesystem content, do a clean mkfs, mount with autodefrag,
> and restore from backup, thus ensuring autodefrag is on while filling the
> filesystem in the first place, than to wait for autodefrag to reach a
> stable system state in normal operation over many mounts.

Well, I think I will bite the bullet, defrag the entire / overnight, and 
repeat the autodefrag mount option.  That should work too.

Thanks

-- 
Regards
 Shridhar


* Re: unclean shutdown and space cache rebuild
  2013-07-02 15:49           ` Shridhar Daithankar
@ 2013-07-05  3:45             ` Shridhar Daithankar
  0 siblings, 0 replies; 12+ messages in thread
From: Shridhar Daithankar @ 2013-07-05  3:45 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

On Tuesday, July 02, 2013 09:19:07 PM Shridhar Daithankar wrote:
> On Tuesday, July 02, 2013 01:00:29 PM Duncan wrote:
> > But I'd still expect there to be a better-performing steady state
> > after a few mounts get the basic filesystem defragged.  Tho if the
> > filesystem is heavily fragmented[2], in practice it may well be easier to
> > back up the filesystem content, do a clean mkfs, mount with autodefrag,
> > and restore from backup, thus ensuring autodefrag is on while filling the
> > filesystem in the first place, than to wait for autodefrag to reach a
> > stable system state in normal operation over many mounts.
> 
> Well, I think I will bite the bullet, defrag the entire / overnight, and
> repeat the autodefrag mount option.  That should work too.

And that worked: I defragged all the mount points, including files and 
dirs, enabled autodefrag, and rebooted.  It took about 2 hours to defrag 
the existing files.

But the filesystem is now extremely smooth; faster than ext4, I might 
say.  Sure, there are occasional stalls, but they are more noticeable 
than annoying, and that's pretty much compensated for by the significant 
improvement in latency in everything.

Fun fact: pg_test_fsync now reports 44 fsyncs per second instead of the 
earlier 20.  I don't know if that is down to the defragmentation or to 
compression.
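
For reference, I measured by running pg_test_fsync from a directory on
the filesystem under test, since it writes its test file into the
current directory; roughly:

    # cd /data && pg_test_fsync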

On the other, bigger 500GB disk, the score is around 24 fsyncs per 
second, so I suspect it has to do with tree size.

Anyway, good things overall.

Thanks for the help and suggestions.


-- 
Regards
 Shridhar

