From: Dave Chinner <david@fromorbit.com>
To: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
Cc: stan@hardwarefreak.com, "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: suddenly slow writes on XFS Filesystem
Date: Mon, 7 May 2012 17:17:13 +1000
Message-ID: <20120507071713.GZ5091@dastard>
In-Reply-To: <4FA76E11.1070708@profihost.ag>

On Mon, May 07, 2012 at 08:39:13AM +0200, Stefan Priebe - Profihost AG wrote:
> Hi,
> 
> After deleting 400GB it was faster. Now there are still 300GB free,
> but it is slow as hell again ;-(
> 
> Am 07.05.2012 03:34, schrieb Dave Chinner:
> > On Sun, May 06, 2012 at 11:01:14AM +0200, Stefan Priebe wrote:
> >> Hi,
> >>
> >> For a few days I've been experiencing a really slow fs on one of
> >> our backup systems.
> >>
> >> I'm not sure whether this is XFS-related or related to the
> >> controller / disks.
> >>
> >> It is a RAID 10 of 20 SATA disks, and I can only write to them at
> >> about 700kB/s while doing random I/O.
> > 
> > What sort of random IO? size, read, write, direct or buffered, data
> > or metadata, etc?
> There are 4 rsync processes running, doing backups of other servers.
> 
> > iostat -x -d -m 5 and vmstat 5 traces would be
> > useful to see if it is your array that is slow.....
> 
> ~ # iostat -x -d -m 5
> Linux 2.6.40.28intel (server844-han)    05/07/2012      _x86_64_    (8 CPU)
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s  avgrq-sz avgqu-sz   await  svctm  %util
> sdb               0,00     0,00  254,80   25,40     1,72     0,16  13,71     0,86    3,08   2,39  67,06
> sda               0,00     0,20    0,00    1,20     0,00     0,00  6,50     0,00    0,00   0,00   0,00
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s  avgrq-sz avgqu-sz   await  svctm  %util
> sdb               0,00     0,00  187,40   24,20     1,26     0,19  14,05     0,75    3,56   3,33  70,50
> sda               0,00     0,00    0,00    0,40     0,00     0,00  4,50     0,00    0,00   0,00   0,00
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s  avgrq-sz avgqu-sz   await  svctm  %util
> sdb               0,00    11,20  242,40   92,00     1,56     0,89  15,00     4,70   14,06   1,58  52,68
> sda               0,00     0,20    0,00    2,60     0,00     0,02  12,00     0,00    0,00   0,00   0,00
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s  avgrq-sz avgqu-sz   await  svctm  %util
> sdb               0,00     0,00  166,20   24,00     0,99     0,17  12,51     0,57    3,02   2,40  45,56
> sda               0,00     0,00    0,00    0,00     0,00     0,00  0,00     0,00    0,00   0,00   0,00
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
> sdb               0,00     0,00  188,00   25,40     1,22     0,16  13,23     0,44    2,04   1,78  38,02
> sda               0,00     0,00    0,00    0,00     0,00     0,00  0,00     0,00    0,00   0,00   0,00


> # vmstat

"vmstat 5", not vmstat 5 times....  :/

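To be clear, the argument is a sampling interval, not a repeat count -
a minimal illustration of standard vmstat usage:

  vmstat 5        # a new sample every 5 seconds until interrupted
  vmstat 5 12     # optional count: 5-second samples, stop after 12
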
> >> I tried vanilla kernels 3.0.30
> >> and 3.3.4 - no difference. Writing to another partition on another
> >> XFS array works fine.
> >>
> >> Details:
> >> #~ df -h
> >> /dev/sdb1             4,6T  4,4T  207G  96% /mnt
> > 
> > Your filesystem is near full - the allocation algorithms definitely
> > slow down as you approach ENOSPC, and IO efficiency goes to hell
> > because of a lack of contiguous free space to allocate from.
> I'm now at 94% used, but it is still slow. It seems it only got fast
> again with more than 450GB of free space.
> 
> /dev/sdb1             4,6T  4,3T  310G  94% /mnt

Well, you've probably badly fragmented the free space you have. What
does the 'xfs_db -r -c freesp <dev>' command tell you?
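
For example (-r opens the device read-only, so this is safe to run;
the histogram below is purely illustrative, not output from this
system):

  # print a histogram of free space extent sizes, in filesystem blocks
  xfs_db -r -c freesp /dev/sdb1
  #    from      to extents  blocks    pct
  #       1       1  123456  123456   8.42
  #      64     127    2100  190000  13.01
  #     ...

If most of the free blocks land in the small buckets, the free space
is badly fragmented and there are no large contiguous regions left to
allocate from.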

> >> #~ df -i
> >> /dev/sdb1            4875737052 4659318044 216419008  96% /mnt
> > You have 4.6 *billion* inodes in your filesystem?
> Yes - it backs up around 100 servers with a lot of files.

So you have what - lots of symlinks? I mean, 4.6 billion inodes alone
require 1.2TB of space, but if I read the fragmentation report
correctly you only have 82 million files with data extents. The only
things that would otherwise use inodes are directories and symlinks....
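
(Back of the envelope, assuming the default 256 byte inode size -
'xfs_info /mnt' will show the actual isize:

  # 4,659,318,044 allocated inodes x 256 bytes each
  echo $((4659318044 * 256))                      # 1192785419264 bytes
  echo $((4659318044 * 256 / 1024 / 1024 / 1024)) # ~1110GiB, i.e. ~1.2TB
)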

Still, I can't see how you'd have only 82 million data inodes and 4.5
billion directory inodes - where are all the inodes being consumed? A
massive symlink farm?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

