* I/O hang, possibly XFS, possibly general
@ 2011-06-02 14:42 Paul Anderson
From: Paul Anderson @ 2011-06-02 14:42 UTC
  To: xfs-oss

This morning I saw the symptoms of an I/O throughput problem in which
dirty pages appeared to be taking a long time to be written to disk.

The system is a large x64 Dell 810 server with 192GiB of RAM, running
2.6.38.5 from kernel.org.  The basic workload is data intensive:
concurrent large NFS traffic (high metadata rate, small files) and
rsync/lftp transfers (low metadata rate, large files), all working in
a 200TiB XFS volume on a software MD raid0 striped over 7 software MD
raid6 arrays of 18 drives each.  I had mounted the filesystem with
inode64,largeio,logbufs=8,noatime.
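
For reference, the fstab entry looks roughly like this (the device
and mount point below are placeholders, the options are the ones
actually in use):

  /dev/md0  /data  xfs  inode64,largeio,logbufs=8,noatime  0  0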

The specific symptoms were that 'sync' hung, a dpkg command hung
(presumably while trying to issue an fsync), and experimenting with
"killall -STOP" or "kill -STOP" on the workload jobs didn't let the
system drain I/O enough for the sync to finish.  I probably did not
wait long enough, however.
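
Next time it happens I plan to capture where the stuck processes are
actually blocked, along these lines (standard kernel interfaces,
nothing site-specific):

  echo w > /proc/sysrq-trigger      # dump blocked (D state) tasks to dmesg
  cat /proc/$(pidof dpkg)/stack     # kernel stack of the hung dpkg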

So here's what I did to diagnose it: with all workloads stopped,
there was still low-rate I/O from the kernel flush threads to the md
arrays.  There was no CPU starvation, but the I/O rate was low -
5-30MiB/second (the array can readily do >1000MiB/second for big
I/O).  Mind you, one "md5sum --check" job was able to run at
>200MiB/second without trouble - turning it off and on makes the
aggregate I/O load drop and shoot right back up along with it, so I'm
fairly confident in the underlying physical arrays as well as in XFS
large-file I/O.
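
A simple way to watch those rates, for anyone trying to reproduce
this, would be something like the following (plain sysstat/procps
tools, nothing exotic):

  iostat -xm 5      # per-device MB/s, queue size and utilization
  vmstat 5          # system-wide blocks out (bo) per interval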

I did "echo 3 > /proc/sys/vm/drop_caches" repeatedly and noticed that
according to top, the total amount of cached data would drop down
rapidly (first time had the big drop), but still be stuck at around
8-10Gigabytes.  While continuing to do this, I noticed finally that
the cached data value was in fact dropping slowly (at the rate of
5-30MiB/second), and in fact finally dropped down to approximately
60Megabytes at which point the stuck dpkg command finished, and I was
again able to issue sync commands that finished instantly.
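
Roughly what I was doing, written as a loop, plus a more direct view
of the writeback backlog than top's cache figure (the meminfo grep is
something I'd add next time; both are stock /proc interfaces):

  while true; do
      echo 3 > /proc/sys/vm/drop_caches
      grep -E '^(Cached|Dirty|Writeback):' /proc/meminfo
      sleep 10
  done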

My guess is that I've done something to fill the buffer pool with
slow-to-flush metadata - and prior to rebooting the machine a few
minutes ago, I removed the largeio option from /etc/fstab.
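
Related to that guess, next time I'll also record the writeback
tunables in effect, in case the dirty-page thresholds matter on a
192GiB box (this only reads the stock sysctls, it doesn't change
anything):

  sysctl vm.dirty_ratio vm.dirty_background_ratio \
         vm.dirty_expire_centisecs vm.dirty_writeback_centisecs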

I can't say this is an XFS bug specifically - it is more likely how I
am using it - but are there other tools I can use to better diagnose
what is going on?  I do know it will happen again, since we will soon
have 5 of these machines running at very high rates.  Also, any
suggestions for better metadata or log management are very welcome.
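
If it helps, I can post the filesystem geometry and the mount options
actually in effect; I'd gather them roughly like this (the mount
point below is a placeholder):

  xfs_info /mountpoint        # log size, sunit/swidth, agcount
  grep ' xfs ' /proc/mounts   # options currently in effect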

This particular machine is probably our worst case, since it has the
widest variation in offered file I/O load (tens of millions of small
files, thousands of >1GB files).  If this workload is pushing XFS too
hard, I can deploy new hardware to split the workload across
different filesystems.

Thanks very much for any thoughts or suggestions,

Paul Anderson
