* XFS Syncd
@ 2015-04-10 4:23 Shrinand Javadekar
2015-04-10 6:32 ` Dave Chinner
0 siblings, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-04-10 4:23 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/plain, Size: 1499 bytes --]
Hi,
I am using the XFS filesystem as the backend for Openstack Swift. On
my setup, I have a single server with 8 data disks; each of them is
one XFS volume.
I am running a workload which does many concurrent writes of 256K
files into the XFS volumes. Openstack Swift takes care of evenly
distributing the data across all 8 disks. It also uses extended
attributes for each of the files it writes, and it explicitly does an
fsync() at the end for each file.
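The per-file write path described above can be sketched roughly like this (a minimal illustration, not Swift's actual code; the function and attribute names are made up):

```python
import os

def write_object(path, data, xattrs):
    """Sketch of the per-file write path described above: write the
    data, attach extended attributes, then fsync before closing."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        for name, value in xattrs.items():
            try:
                os.setxattr(fd, name, value)  # per-file metadata, as Swift does
            except OSError:
                pass  # some filesystems lack xattr support; XFS has it
        os.fsync(fd)  # force the file's data to disk before close
    finally:
        os.close(fd)
```

Note that fsync() here guarantees the data is on disk, but the filesystem's metadata updates may still only be in the journal at this point, which is relevant to the discussion below.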
I am seeing a behavior where the system pretty much stalls for ~5
seconds after every 30 seconds. I see that the # of ios goes up but
the actual write bandwidth during this 5 second period is very low
(see attached images). After a fair bit of investigation, we've
narrowed down the problem to XFS's syncd (fs.xfs.xfssyncd_centisecs).
This runs at a default interval of 30 seconds.
I have a couple of questions:
1. If all file writes are done with an fsync() at the end, what is
xfssyncd doing for several seconds?
2. How does xfssyncd actually work across several disks? Currently, it
seems that when it runs, it pretty much stalls the entire system.
3. I see that fs.xfs.xfssyncd_centisecs is the parameter to tune the
interval. But that doesn't give us much. Increasing the interval
simply postpones the work. When xfssyncd runs, it takes more time. Are
there any other options I can try to make xfssyncd not stall the
system when it runs?
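For reference, the sysctl in question can be read straight out of procfs; a small helper (hypothetical, shown only to make the units concrete: the value is in centiseconds, so the default of 3000 means 30 seconds):

```python
def syncd_interval_seconds(procfile="/proc/sys/fs/xfs/xfssyncd_centisecs"):
    """Return the XFS periodic flush interval in seconds.

    The sysctl value is in centiseconds; the default of 3000
    corresponds to the 30-second cadence described above."""
    with open(procfile) as f:
        return int(f.read().strip()) / 100.0
```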
Thanks in advance.
-Shri
P.S. I'm not a member of this list. Direct replies appreciated.
[-- Attachment #2: write_throughput6.png --]
[-- Type: image/png, Size: 109367 bytes --]
[-- Attachment #3: read_write_requests_complete_rate6.png --]
[-- Type: image/png, Size: 93756 bytes --]
[-- Attachment #4: Type: text/plain, Size: 121 bytes --]
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: XFS Syncd
2015-04-10 4:23 XFS Syncd Shrinand Javadekar
@ 2015-04-10 6:32 ` Dave Chinner
2015-04-10 6:51 ` Shrinand Javadekar
0 siblings, 1 reply; 21+ messages in thread
From: Dave Chinner @ 2015-04-10 6:32 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Thu, Apr 09, 2015 at 09:23:55PM -0700, Shrinand Javadekar wrote:
> Hi,
>
> I am using the XFS filesystem as the backend for Openstack Swift. On
> my setup, I have a single server with 8 data disks; each of them is
> one XFS volume.
>
> I am running a workload which does many concurrent writes of 256K
> files into the XFS volumes. Openstack Swift takes care of evenly
> distributing the data across all the 8 disks. It also uses extended
> attributes for each of the files it writes. It also explicitly does a
> fsync() at the end for each file.
What's xfssyncd? :P
$ ps waux |grep [x]fs
root 192 0.0 0.0 0 0 ? S< Mar16 0:00 [xfsalloc]
root 193 0.0 0.0 0 0 ? S< Mar16 0:00 [xfs_mru_cache]
root 194 0.0 0.0 0 0 ? S< Mar16 0:00 [xfslogd]
root 196 0.0 0.0 0 0 ? S< Mar16 0:00 [xfs-data/md0]
root 197 0.0 0.0 0 0 ? S< Mar16 0:00 [xfs-conv/md0]
root 198 0.0 0.0 0 0 ? S< Mar16 0:00 [xfs-cil/md0]
root 199 0.1 0.0 0 0 ? S Mar16 40:27 [xfsaild/md0]
$
Oh, right, it's that workqueue we removed in late 2012 (in the 3.7
cycle) because it was redundant. The only remaining fragment of it
is the xfslogd. What kernel are you running?
> I am seeing a behavior where the system pretty much stalls for ~5
> seconds after every 30 seconds. I see that the # of ios goes up but
> the actual write bandwidth during this 5 second period is very low
> (see attached images). After a fair bit of investigation, we've
> narrowed down the problem to XFS's syncd (fs.xfs.xfssyncd_centisecs).
> This runs at a default interval of 30 seconds.
It's doing background inode reclaim which, under some circumstances,
involves truncating speculative allocation beyond EOF before reclaim
occurs, which results in transactions and inode writeback. It was
highly inefficient, which is why we replaced it.
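Speculative preallocation past EOF is visible from userspace as blocks allocated in excess of the file's logical size; a rough way to observe it (st_blocks is in 512-byte units; on non-XFS filesystems, or for small files, the number mostly reflects ordinary block-size rounding rather than preallocation):

```python
import os

def allocated_beyond_size(path):
    """Bytes allocated to a file beyond its logical size. On XFS,
    speculative preallocation past EOF shows up here until it is
    truncated away by background reclaim."""
    st = os.stat(path)
    return max(0, st.st_blocks * 512 - st.st_size)
```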
> I have a couple of questions:
>
> 1. If all file writes are done with an fsync() at the end, what is
> xfssyncd doing for several seconds?
> 2. How does xfssyncd actually work across several disks? Currently, it
> seems that when it runs, it pretty much stalls the entire system.
xfssyncd was actually a workqueue, so it services multiple
filesystems at once. Before that, there was a kernel thread per
filesystem for it. Anyway, it's doing lots of random write IO and
saturating your disks, which will stall any system that is dependent
on IO throughput to function.
> 3. I see that fs.xfs.xfssyncd_centisecs is the parameter to tune the
> interval. But that doesn't give us much. Increasing the interval
> simply postpones the work. When xfssyncd runs, it takes more time. Are
> there any other options I can try to make xfssyncd not stall the
> system when it runs?
Upgrade your kernel to something more recent, and the problem should
go away.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-04-10 6:32 ` Dave Chinner
@ 2015-04-10 6:51 ` Shrinand Javadekar
2015-04-10 7:21 ` Dave Chinner
0 siblings, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-04-10 6:51 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
Thanks for the reply Dave!
>
> Oh, right, it's that workqueue we removed in late 2012 (in the 3.7
> cycle) because it was redundant. The only remaining fragment of it
> is the xfslogd. What kernel are you running?
I am running 3.13.0-39-generic on Ubuntu 14.04.
# uname -a
Linux tf-hippo-1 3.13.0-39-generic #66-Ubuntu SMP Tue Oct 28 13:30:27
UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
>> I am seeing a behavior where the system pretty much stalls for ~5
>> seconds after every 30 seconds. I see that the # of ios goes up but
>> the actual write bandwidth during this 5 second period is very low
>> (see attached images). After a fair bit of investigation, we've
>> narrowed down the problem to XFS's syncd (fs.xfs.xfssyncd_centisecs).
>> This runs at a default interval of 30 seconds.
>
> It's doing background inode reclaim which, under some circumstances,
> involves truncating speculative allocation beyond EOF before reclaim
> occurs, which results in transactions and inode writeback. It was
> highly inefficient, which is why we replaced it.
Oh, I see. So this isn't even actual filesystem metadata. Is there
no option to turn speculative allocation on/off?
What's the downside of not doing the truncation of the speculative
allocation? Does that result in wasted disk space? If so, how much?
>
>> I have a couple of questions:
>>
>> 1. If all file writes are done with an fsync() at the end, what is
>> xfssyncd doing for several seconds?
>> 2. How does xfssyncd actually work across several disks? Currently, it
>> seems that when it runs, it pretty much stalls the entire system.
>
> xfssyncd was actually a workqueue, so it services multiple
> filesystems at once. Before that, there was a kernel thread per
> filesystem for it. Anyway, it's doing lots of random write IO and
> saturating your disks, which will stall any system that is dependent
> on IO throughput to function.
>
>> 3. I see that fs.xfs.xfssyncd_centisecs is the parameter to tune the
>> interval. But that doesn't give us much. Increasing the interval
>> simply postpones the work. When xfssyncd runs, it takes more time. Are
>> there any other options I can try to make xfssyncd not stall the
>> system when it runs?
>
> Upgrade your kernel to something more recent, and the problem should
> go away.
We have several other dependencies on the OS. Not sure if upgrading
above Ubuntu 14.04 and kernel 3.13.0-39-generic is an option. Any
other options to try out?
-Shri
* Re: XFS Syncd
2015-04-10 6:51 ` Shrinand Javadekar
@ 2015-04-10 7:21 ` Dave Chinner
2015-04-10 7:29 ` Shrinand Javadekar
0 siblings, 1 reply; 21+ messages in thread
From: Dave Chinner @ 2015-04-10 7:21 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Thu, Apr 09, 2015 at 11:51:17PM -0700, Shrinand Javadekar wrote:
> Thanks for the reply Dave!
>
> >
> > Oh, right, it's that workqueue we removed in late 2012 (in the 3.7
> > cycle) because it was redundant. The only remaining fragment of it
> > is the xfslogd. What kernel are you running?
>
> I am running 3.13.0-39-generic on Ubuntu 14.04.
You can't be running that kernel if you are seeing a process called
xfssyncd in your traces.
$ gl -n 1 5889608
commit 5889608df35783590251cfd440fa5d48f1855179
Author: Dave Chinner <dchinner@redhat.com>
Date: Mon Oct 8 21:56:05 2012 +1100
xfs: syncd workqueue is no more
With the syncd functions moved to the log and/or removed, the syncd
workqueue is the only remaining bit left. It is used by the log
covering/ail pushing work, as well as by the inode reclaim work.
Given how cheap workqueues are these days, give the log and inode
reclaim work their own work queues and kill the syncd work queue.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
$ git describe --contains 5889608
for-linus-v3.8-rc1~71
$
As you can see from the patch, the xfssyncd workqueue was
removed and its work was separated into the xfs-reclaim/<dev> and
xfs-log/<dev> work queues.
So, what exactly are you calling "xfssyncd"? Can you please post
copies of the output you are seeing that has led you to think this
kernel thread/workqueue exists in your kernel?
> >> I am seeing a behavior where the system pretty much stalls for ~5
> >> seconds after every 30 seconds. I see that the # of ios goes up but
> >> the actual write bandwidth during this 5 second period is very low
> >> (see attached images). After a fair bit of investigation, we've
> >> narrowed down the problem to XFS's syncd (fs.xfs.xfssyncd_centisecs).
> >> This runs at a default interval of 30 seconds.
> >
> > It's doing background inode reclaim which, under some circumstances,
> > involves truncating speculative allocation beyond EOF before reclaim
> > occurs, which results in transactions and inode writeback. It was
> > highly inefficient, which is why we replaced it.
>
> Oh.. I see. So, this isn't even actual filesystem metadata. And there
> is no option to turn the speculative allocation on/off?
You can turn it off, but now you're jumping to conclusions that this
is the cause of your problems. Perhaps you should do some
tracing/profiling when the system goes through these stalls to see
what is actually happening? "perf top" and trace-cmd are very useful
for this sort of investigation...
> What's the downside of not doing the truncation of the speculative
> allocation? Does that result in wasted disk space? If so, how much?
Start at:
http://xfs.org/index.php/XFS_FAQ#Q:_Why_do_files_on_XFS_use_more_data_blocks_than_expected.3F
and read the next 4 FAQs...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-04-10 7:21 ` Dave Chinner
@ 2015-04-10 7:29 ` Shrinand Javadekar
2015-04-10 13:12 ` Dave Chinner
0 siblings, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-04-10 7:29 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
On Fri, Apr 10, 2015 at 12:21 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Thu, Apr 09, 2015 at 11:51:17PM -0700, Shrinand Javadekar wrote:
>> Thanks for the reply Dave!
>>
>> >
>> > Oh, right, it's that workqueue we removed in late 2012 (in the 3.7
>> > cycle) because it was redundant. The only remaining fragment of it
>> > is the xfslogd. What kernel are you running?
>>
>> I am running 3.13.0-39-generic on Ubuntu 14.04.
>
> You can't be running that kernel if you are seeing a process called
> xfssyncd in your traces.
I don't see a process called xfssyncd. I started investigating the 30
second pauses but looking for xfs config options in sysctl. We found
the option "fs.xfs.xfssyncd_centisecs" whose documentation[1] says it
is the interval in which the "filesystem flushes metadata out to disk
and runs internal cache cleanup routines".
I tweaked this setting and saw corresponding changes in
performance: bumping the value up made the pauses occur at longer
intervals, and decreasing it made them occur more frequently.
>
> $ gl -n 1 5889608
> commit 5889608df35783590251cfd440fa5d48f1855179
> Author: Dave Chinner <dchinner@redhat.com>
> Date: Mon Oct 8 21:56:05 2012 +1100
>
> xfs: syncd workqueue is no more
>
> With the syncd functions moved to the log and/or removed, the syncd
> workqueue is the only remaining bit left. It is used by the log
> covering/ail pushing work, as well as by the inode reclaim work.
>
> Given how cheap workqueues are these days, give the log and inode
> reclaim work their own work queues and kill the syncd work queue.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> Reviewed-by: Mark Tinguely <tinguely@sgi.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ben Myers <bpm@sgi.com>
>
> $ git describe --contains 5889608
> for-linus-v3.8-rc1~71
> $
>
> Which, as you can see from the patch, the xfssyncd workqueue was
> removed and they were separated into xfs-reclaim/<dev> and
> xfs-log/<dev> work queues.
>
> So, what exactly are you calling "xfssyncd"? Can you please post
> copies of the output you are seeing that has led you to think this
> kernel thread/workqueue exists in your kernel?
>
>> >> I am seeing a behavior where the system pretty much stalls for ~5
>> >> seconds after every 30 seconds. I see that the # of ios goes up but
>> >> the actual write bandwidth during this 5 second period is very low
>> >> (see attached images). After a fair bit of investigation, we've
>> >> narrowed down the problem to XFS's syncd (fs.xfs.xfssyncd_centisecs).
>> >> This runs at a default interval of 30 seconds.
>> >
>> > It's doing background inode reclaim which, under some circumstances,
>> > involves truncating speculative allocation beyond EOF before reclaim
>> > occurs, which results in transactions and inode writeback. It was
>> > highly inefficient, which is why we replaced it.
>>
>> Oh.. I see. So, this isn't even actual filesystem metadata. And there
>> is no option to turn the speculative allocation on/off?
>
> You can turn it off, but now you're jumping to conclusions that this
> is the cause of your problems. Perhaps you should do some
> tracing/profiling when the system goes through these stalls to see
> what is actually happening? "perf top" and trace-cmd are very useful
> for this sort of investigation...
Let me dig deeper here using "perf top" and see what's running during
these stalls.
>
>> What's the downside of not doing the truncation of the speculative
>> allocation? Does that result in wasted disk space? If so, how much?
>
> Start at:
>
> http://xfs.org/index.php/XFS_FAQ#Q:_Why_do_files_on_XFS_use_more_data_blocks_than_expected.3F
>
> and read the next 4 FAQs...
Thanks!
-Shri
[1] http://www.mjmwired.net/kernel/Documentation/filesystems/xfs.txt#265
* Re: XFS Syncd
2015-04-10 7:29 ` Shrinand Javadekar
@ 2015-04-10 13:12 ` Dave Chinner
2015-06-02 18:43 ` Shrinand Javadekar
0 siblings, 1 reply; 21+ messages in thread
From: Dave Chinner @ 2015-04-10 13:12 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Fri, Apr 10, 2015 at 12:29:34AM -0700, Shrinand Javadekar wrote:
> On Fri, Apr 10, 2015 at 12:21 AM, Dave Chinner <david@fromorbit.com> wrote:
> > On Thu, Apr 09, 2015 at 11:51:17PM -0700, Shrinand Javadekar wrote:
> >> Thanks for the reply Dave!
> >>
> >> >
> >> > Oh, right, it's that workqueue we removed in late 2012 (in the 3.7
> >> > cycle) because it was redundant. The only remaining fragment of it
> >> > is the xfslogd. What kernel are you running?
> >>
> >> I am running 3.13.0-39-generic on Ubuntu 14.04.
> >
> > You can't be running that kernel if you are seeing a process called
> > xfssyncd in your traces.
>
> I don't see a process called xfssyncd. I started investigating the 30
> second pauses but looking for xfs config options in sysctl. We found
> the option "fs.xfs.xfssyncd_centisecs" whose documentation[1] says it
> is the interval in which the "filesystem flushes metadata out to disk
> and runs internal cache cleanup routines".
Right, that's what it does, but even though xfssyncd has been
removed, we can't remove or rename the sysctl because it's part
of the userspace ABI.
> I tweaked this setting and saw the corresponding changes in the
> performance. Bumping this value up saw pauses at longer interval,
> decreasing this interval saw pauses more frequently.
Ok, so it's not speculative preallocation that is the problem,
it's metadata writeback that is causing the stalls. I forgot the log
worker is also triggered off that sysctl, and so...
> >> >> I am seeing a behavior where the system pretty much stalls for ~5
> >> >> seconds after every 30 seconds. I see that the # of ios goes up but
> >> >> the actual write bandwidth during this 5 second period is very low
> >> >> (see attached images). After a fair bit of investigation, we've
> >> >> narrowed down the problem to XFS's syncd (fs.xfs.xfssyncd_centisecs).
> >> >> This runs at a default interval of 30 seconds.
> >> >
> >> > It's doing background inode reclaim which, under some circumstances,
> >> > involves truncating speculative allocation beyond EOF before reclaim
> >> > occurs, which results in transactions and inode writeback. It was
> >> > highly inefficient, which is why we replaced it.
> >>
> >> Oh.. I see. So, this isn't even actual filesystem metadata. And there
> >> is no option to turn the speculative allocation on/off?
> >
> > You can turn it off, but now you're jumping to conclusions that this
> > is the cause of your problems. Perhaps you should do some
> > tracing/profiling when the system goes through these stalls to see
> > what is actually happening? "perf top" and trace-cmd are very useful
> > for this sort of investigation...
>
> Let me dig deeper here using "perf top" and see what's running during
> these stalls.
... it's much more likely that filesystem metadata writeback is
being run every 30s, and that's what is causing the issue. i.e. you
should see the xfsaild issuing lots of IO very quickly.
See, fsync() doesn't cause metadata writeback; only data writeback.
The metadata is written to the log, not its final place on disk,
during fsync. So some time later it's got to be written back because
it is still dirty in memory, and that's most likely what is
happening.
My guess is you have RAID5 or RAID6 and the partial stripe writes
are causing it to do RMW cycles and hence it's really, really slow
when metadata gets written...
Probably too late now as I've basically asked for all this info,
but:
http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-04-10 13:12 ` Dave Chinner
@ 2015-06-02 18:43 ` Shrinand Javadekar
2015-06-03 3:57 ` Dave Chinner
0 siblings, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-06-02 18:43 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1: Type: text/plain, Size: 5032 bytes --]
Sorry, I dropped the ball on this one. We found some other problems
and I was busy fixing them.
So, the xfsaild thread/s that kick in every 30 seconds are hitting us
pretty badly. Here's a graph with the latest tests I ran. We get great
throughput for ~18 seconds but then the world pretty much stops for
the next ~12 seconds or so making the final numbers look pretty bad.
This particular graph was plotted when the disk had ~150GB of data
(total capacity of 3TB).
I am using a 3.16.0-38-generic kernel (upgraded since the time I wrote
the first email on this thread).
I know fs.xfs.xfssyncd_centisecs controls this interval of 30 seconds.
What other options can I tune for making this work better?
We have 8 disks. And unfortunately, all 8 disks are brought to a halt
every 30 seconds. Does XFS have options to only work on a subset of
disks at a time?
Also, what exactly does XFS do every 30 seconds? If I understand it
right, metadata can be in 3 locations:
1. Memory
2. Log buffer on disk
3. Final location on disk.
Every 30 seconds, from where to where is this metadata being copied?
Are there ways to just disable this to avoid the stop-of-the-world
pauses (at the cost of lower but sustained performance)?
Thanks in advance.
-Shri
On Fri, Apr 10, 2015 at 6:12 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Fri, Apr 10, 2015 at 12:29:34AM -0700, Shrinand Javadekar wrote:
>> On Fri, Apr 10, 2015 at 12:21 AM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Thu, Apr 09, 2015 at 11:51:17PM -0700, Shrinand Javadekar wrote:
>> >> Thanks for the reply Dave!
>> >>
>> >> >
>> >> > Oh, right, it's that workqueue we removed in late 2012 (in the 3.7
>> >> > cycle) because it was redundant. The only remaining fragment of it
>> >> > is the xfslogd. What kernel are you running?
>> >>
>> >> I am running 3.13.0-39-generic on Ubuntu 14.04.
>> >
>> > You can't be running that kernel if you are seeing a process called
>> > xfssyncd in your traces.
>>
>> I don't see a process called xfssyncd. I started investigating the 30
>> second pauses but looking for xfs config options in sysctl. We found
>> the option "fs.xfs.xfssyncd_centisecs" whose documentation[1] says it
>> is the interval in which the "filesystem flushes metadata out to disk
>> and runs internal cache cleanup routines".
>
> Right, that's what it does, but even though xfssyncd has been
> removed, we can't remove or rename the sysctl because it's part
> of the userspace ABI.
>
>> I tweaked this setting and saw the corresponding changes in the
>> performance. Bumping this value up saw pauses at longer interval,
>> decreasing this interval saw pauses more frequently.
>
> Ok, so it's not speculative preallocation that is the problem,
> it's metadata writeback that is causing the stalls. I forgot the log
> worker is also triggered off that sysctl, and so...
>
>> >> >> I am seeing a behavior where the system pretty much stalls for ~5
>> >> >> seconds after every 30 seconds. I see that the # of ios goes up but
>> >> >> the actual write bandwidth during this 5 second period is very low
>> >> >> (see attached images). After a fair bit of investigation, we've
>> >> >> narrowed down the problem to XFS's syncd (fs.xfs.xfssyncd_centisecs).
>> >> >> This runs at a default interval of 30 seconds.
>> >> >
>> >> > It's doing background inode reclaim which, under some circumstances,
>> >> > involves truncating speculative allocation beyond EOF before reclaim
>> >> > occurs, which results in transactions and inode writeback. It was
>> >> > highly inefficient, which is why we replaced it.
>> >>
>> >> Oh.. I see. So, this isn't even actual filesystem metadata. And there
>> >> is no option to turn the speculative allocation on/off?
>> >
>> > You can turn it off, but now you're jumping to conclusions that this
>> > is the cause of your problems. Perhaps you should do some
>> > tracing/profiling when the system goes through these stalls to see
>> > what is actually happening? "perf top" and trace-cmd are very useful
>> > for this sort of investigation...
>>
>> Let me dig deeper here using "perf top" and see what's running during
>> these stalls.
>
> ... it's much more likely that filesystem metadata writeback is
> being run every 30s, and that's what is causing the issue. i.e. you
> should see the xfsaild issuing lots of IO very quickly.
>
> See, fsync() doesn't cause metadata writeback; only data writeback.
> The metadata is written to the log, not its final place on disk,
> during fsync. So some time later it's got to be written back because
> it is still dirty in memory, and that's most likely what is
> happening.
>
> My guess is you have RAID5 or RAID6 and the partial stripe writes
> are causing it to do RMW cycles and hence it's really, really slow
> when metadata gets written...
>
> Probably too late now as I've now basically asked for all this info,
> but:
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
[-- Attachment #2: xfs_pauses.png --]
[-- Type: image/png, Size: 88094 bytes --]
[-- Attachment #3: Type: text/plain, Size: 121 bytes --]
* Re: XFS Syncd
2015-06-02 18:43 ` Shrinand Javadekar
@ 2015-06-03 3:57 ` Dave Chinner
2015-06-03 23:18 ` Shrinand Javadekar
0 siblings, 1 reply; 21+ messages in thread
From: Dave Chinner @ 2015-06-03 3:57 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Tue, Jun 02, 2015 at 11:43:30AM -0700, Shrinand Javadekar wrote:
> Sorry, I dropped the ball on this one. We found some other problems
> and I was busy fixing them.
>
> So, the xfsaild thread/s that kick in every 30 seconds are hitting us
> pretty badly. Here's a graph with the latest tests I ran. We get great
> throughput for ~18 seconds but then the world pretty much stops for
> the next ~12 seconds or so making the final numbers look pretty bad.
> This particular graph was plotted when the disk had ~150GB of data
> (total capacity of 3TB).
>
> I am using a 3.16.0-38-generic kernel (upgraded since the time I wrote
> the first email on this thread).
>
> I know fs.xfs.xfssyncd_centisecs controls this interval of 30 seconds.
> What other options can I tune for making this work better?
>
> We have 8 disks. And unfortunately, all 8 disks are brought to a halt
> every 30 seconds. Does XFS have options to only work on a subset of
> disks at a time?
>
> Also, what does XFS exactly do every 30 seconds? If I understand it
> right, metadata can be 3 locations:
>
> 1. Memory
> 2. Log buffer on disk
> 3. Final location on disk.
>
> Every 30 seconds, from where to where is this metadata being copied?
> Are there ways to just disable this to avoid the stop-of-the-world
> pauses (at the cost of lower but sustained performance)?
I can't use this information to help you as you haven't presented
any of the data I've asked for. We need to restart here and base
everything on data and observation. i.e. first principles.
Can you provide all of the information here:
http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
and most especially the iostat and vmstat outputs while the problem
is occurring. The workload description is not what is going wrong
or what you think is happening, but a description of the application
you are running that causes the problem.
This will give me a baseline of your hardware, the software, the
behaviour and the application you are running, and hence give me
something to start with.
I'd also like to see the output from perf top while the problem is
occurring, so we might be able to see what is generating the IO...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-06-03 3:57 ` Dave Chinner
@ 2015-06-03 23:18 ` Shrinand Javadekar
2015-06-04 0:35 ` Dave Chinner
0 siblings, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-06-03 23:18 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1: Type: text/plain, Size: 7449 bytes --]
Here you go!
- Kernel version
Linux my-host 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8
09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
- xfsprogs version (xfs_repair -V)
xfs_repair version 3.1.9
- number of CPUs
16
- contents of /proc/meminfo
(attached).
- contents of /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=32965720k,nr_inodes=8241430,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=6595420k,mode=755 0 0
/dev/mapper/troll_root_vg-troll_root_lv / ext4 rw,relatime,data=ordered 0 0
none /sys/fs/cgroup tmpfs rw,relatime,size=4k,mode=755 0 0
none /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
none /sys/fs/pstore pstore rw,relatime 0 0
/dev/mapper/troll_root_vg-troll_iso_lv /mnt/factory_reset ext4
rw,relatime,data=ordered 0 0
/dev/mapper/TrollGroup-TrollVolume /lvm ext4 rw,relatime,data=ordered 0 0
/dev/mapper/troll_root_vg-troll_log_lv /var/log ext4
rw,relatime,data=ordered 0 0
systemd /sys/fs/cgroup/systemd cgroup
rw,nosuid,nodev,noexec,relatime,name=systemd 0 0
/dev/mapper/35000c50062e6a12b-part2 /srv/node/r1 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
/dev/mapper/35000c50062e6a7eb-part2 /srv/node/r2 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
/dev/mapper/35000c50062e6a567-part2 /srv/node/r3 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
/dev/mapper/35000c50062ea068f-part2 /srv/node/r4 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
/dev/mapper/35000c50062ea066b-part2 /srv/node/r5 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
/dev/mapper/35000c50062e69ecf-part2 /srv/node/r6 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
/dev/mapper/35000c50062ea067b-part2 /srv/node/r7 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
/dev/mapper/35000c50062e6a493-part2 /srv/node/r8 xfs
rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
0 0
- contents of /proc/partitions
(attached)
RAID layout (hardware and/or software)
- No RAID
- LVM configuration
No LVM
- type of disks you are using
Rotational disks
- write cache status of drives
Disabled
- size of BBWC and mode it is running in
No BBWC
- xfs_info output on the filesystem in question
The following is the info on one of the disks. Other 7 disks are identical.
meta-data=/dev/mapper/35000c50062e6a7eb-part2 isize=256 agcount=64,
agsize=11446344 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=732566016, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=357698, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
- dmesg output showing all error messages and stack traces
No errors/stack traces.
- Workload causing the problem:
OpenStack Swift. This is what it does:
1. A path like /srv/node/r1/objects/1024/eef/tmp already exists.
/srv/node/r1 is the mount point.
2. Creates a tmp file, say tmpfoo, in the path above. Path:
/srv/node/r1/objects/1024/eef/tmp/tmpfoo.
3. Issues a 256KB write into this file.
4. Issues an fsync on the file.
5. Closes this file.
6. Creates another directory named "deadbeef" inside "eef" if it
doesn't exist. Path /srv/node/r1/objects/1024/eef/deadbeef.
7. Moves file tmpfoo into the deadbeef directory using rename().
/srv/node/r1/objects/1024/eef/tmp/tmpfoo -->
/srv/node/r1/objects/1024/eef/deadbeef/foo.data
8. Does a readdir on /srv/node/r1/objects/1024/eef/deadbeef/
9. Iterates over all files obtained in #8 above. Usually #8 gives only one file.
There are 8 mounts for 8 disks: /srv/node/r1 through /srv/node/r8. The
above steps happen concurrently for all 8 disks.
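The steps above can be sketched as the following Python fragment. This is an
illustrative reconstruction of the write path described here, not Swift's actual
code; the function and file names (store_object, tmpfoo, deadbeef) are taken
from the example paths above.

```python
import os

def store_object(mount, partition, suffix, name, data):
    # Step 1/2: create a tmp file under <mount>/objects/<partition>/<suffix>/tmp
    tmp_dir = os.path.join(mount, "objects", partition, suffix, "tmp")
    os.makedirs(tmp_dir, exist_ok=True)
    tmp_path = os.path.join(tmp_dir, "tmpfoo")
    fd = os.open(tmp_path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)   # step 3: the 256KB write
        os.fsync(fd)         # step 4: force data (and inode) to disk
    finally:
        os.close(fd)         # step 5
    # Step 6: create the final "deadbeef" directory if it doesn't exist
    final_dir = os.path.join(mount, "objects", partition, suffix, "deadbeef")
    os.makedirs(final_dir, exist_ok=True)
    # Step 7: atomic rename into place
    os.rename(tmp_path, os.path.join(final_dir, name))
    # Step 8: readdir on the final directory
    return os.listdir(final_dir)
```

Note that the fsync() covers the file data, but the directory entry created by
the rename() is a separate metadata update that the log/xfsaild still has to
push out later, which is the work in question here.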
- IOStat and vmstat output
(attached)
- Trace cmd report
Too big to attach. Here's a link:
https://www.dropbox.com/s/3xxe2chsv4fsrv8/trace_report.txt.zip?dl=0
- Perf top output.
Unfortunately, I couldn't run perf top. I keep getting the following error:
WARNING: perf not found for kernel 3.16.0-38
You may need to install the following packages for this specific kernel:
linux-tools-3.16.0-38-generic
linux-cloud-tools-3.16.0-38-generic
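For what it's worth, the warning itself names the missing packages; on Ubuntu
the kernel-matched perf packages can usually be derived from `uname -r` along
these lines (a sketch, assuming the stock Ubuntu packaging scheme):

```shell
# Derive the perf package names matching the running kernel
# (these are the same names the warning above suggests).
kver="$(uname -r)"
pkgs="linux-tools-${kver} linux-cloud-tools-${kver}"
echo "$pkgs"
# Then: sudo apt-get install $pkgs
```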
On Tue, Jun 2, 2015 at 8:57 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Tue, Jun 02, 2015 at 11:43:30AM -0700, Shrinand Javadekar wrote:
>> Sorry, I dropped the ball on this one. We found some other problems
>> and I was busy fixing them.
>>
>> So, the xfsaild thread/s that kick in every 30 seconds are hitting us
>> pretty badly. Here's a graph with the latest tests I ran. We get great
>> throughput for ~18 seconds but then the world pretty much stops for
>> the next ~12 seconds or so making the final numbers look pretty bad.
>> This particular graph was plotted when the disk had ~150GB of data
>> (total capacity of 3TB).
>>
>> I am using a 3.16.0-38-generic kernel (upgraded since the time I wrote
>> the first email on this thread).
>>
>> I know fs.xfs.xfssyncd_centisecs controls this interval of 30 seconds.
>> What other options can I tune for making this work better?
>>
>> We have 8 disks. And unfortunately, all 8 disks are brought to a halt
>> every 30 seconds. Does XFS have options to only work on a subset of
>> disks at a time?
>>
>> Also, what exactly does XFS do every 30 seconds? If I understand it
>> right, metadata can be in 3 locations:
>>
>> 1. Memory
>> 2. Log buffer on disk
>> 3. Final location on disk.
>>
>> Every 30 seconds, from where to where is this metadata being copied?
>> Are there ways to just disable this to avoid the stop-of-the-world
>> pauses (at the cost of lower but sustained performance)?
>
> I can't use this information to help you as you haven't presented
> any of the data I've asked for. We need to restart here and base
> everything on data and observation. i.e. first principles.
>
> Can you provide all of the information here:
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>
> and most especially the iostat and vmstat outputs while the problem
> is occurring. The workload description should be not what is going
> wrong or what you think is happening, but a description of the
> application you are running that causes the problem.
>
> This will give me a baseline of your hardware, the software, the
> behaviour and the application you are running, and hence give me
> something to start with.
>
> I'd also like to see the output from perf top while the problem is
> occurring, so we might be able to see what is generating the IO...
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
[-- Attachment #2: mem_info.txt --]
[-- Type: text/plain, Size: 1226 bytes --]
MemTotal: 65954164 kB
MemFree: 13959108 kB
MemAvailable: 32757820 kB
Buffers: 176636 kB
Cached: 6429784 kB
SwapCached: 103432 kB
Active: 27430416 kB
Inactive: 6313768 kB
Active(anon): 24825928 kB
Inactive(anon): 2326792 kB
Active(file): 2604488 kB
Inactive(file): 3986976 kB
Unevictable: 14108 kB
Mlocked: 14108 kB
SwapTotal: 16777212 kB
SwapFree: 16346352 kB
Dirty: 3992 kB
Writeback: 0 kB
AnonPages: 27093116 kB
Mapped: 80260 kB
Shmem: 9484 kB
Slab: 14808144 kB
SReclaimable: 12460664 kB
SUnreclaim: 2347480 kB
KernelStack: 27696 kB
PageTables: 96588 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 49754292 kB
Committed_AS: 41952748 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 543104 kB
VmallocChunk: 34359013376 kB
HardwareCorrupted: 0 kB
AnonHugePages: 22728704 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 1557220 kB
DirectMap2M: 59236352 kB
DirectMap1G: 8388608 kB
[-- Attachment #3: partitions.txt --]
[-- Type: text/plain, Size: 2076 bytes --]
major minor #blocks name
11 0 1048575 sr0
8 48 2930266584 sdd
8 49 1024 sdd1
8 50 2930264064 sdd2
8 32 2930266584 sdc
8 33 1024 sdc1
8 34 2930264064 sdc2
8 64 2930266584 sde
8 65 1024 sde1
8 66 2930264064 sde2
8 96 2930266584 sdg
8 97 1024 sdg1
8 98 2930264064 sdg2
8 80 2930266584 sdf
8 81 1024 sdf1
8 82 2930264064 sdf2
8 112 2930266584 sdh
8 113 1024 sdh1
8 114 2930264064 sdh2
8 128 2930266584 sdi
8 129 1024 sdi1
8 130 2930264064 sdi2
8 144 2930266584 sdj
8 145 1024 sdj1
8 146 2930264064 sdj2
8 160 2930266584 sdk
8 161 1024 sdk1
8 162 2930264064 sdk2
8 176 2930266584 sdl
8 177 1024 sdl1
8 178 2930264064 sdl2
8 192 2930266584 sdm
8 193 1024 sdm1
8 194 2930264064 sdm2
8 208 2930266584 sdn
8 209 1024 sdn1
8 210 2930264064 sdn2
9 127 2930132800 md127
9 126 2930132800 md126
252 0 1465065472 dm-0
252 1 52428800 dm-1
252 2 5242880 dm-2
252 3 16777216 dm-3
252 4 3145728 dm-4
252 6 2930266584 dm-6
252 5 2930266584 dm-5
252 7 2930266584 dm-7
252 8 2930266584 dm-8
252 9 1024 dm-9
252 10 1024 dm-10
252 11 1024 dm-11
252 12 2930264064 dm-12
252 13 1024 dm-13
252 14 2930264064 dm-14
252 15 2930264064 dm-15
252 16 2930264064 dm-16
252 17 2930266584 dm-17
252 18 2930266584 dm-18
252 19 1024 dm-19
252 20 1024 dm-20
252 21 2930264064 dm-21
252 22 2930266584 dm-22
252 24 1024 dm-24
252 25 2930264064 dm-25
252 23 2930264064 dm-23
252 26 2930266584 dm-26
252 27 1024 dm-27
252 28 2930264064 dm-28
[-- Attachment #4: vmstat.out --]
[-- Type: application/octet-stream, Size: 28243 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 81 430840 11774940 181572 8048792 0 1 8 1170 3 2 6 2 76 16 0
2 141 430840 11657636 181572 8118760 0 0 12 142085 11233 66753 47 9 1 42 0
8 80 430840 11587664 181572 8204052 0 0 12 138107 11123 79133 46 10 0 44 0
23 127 430840 11497308 181576 8293056 0 0 16 178857 11766 80215 59 12 0 29 0
33 132 430840 11426840 181576 8355628 0 0 12 166214 12651 76860 52 12 0 36 0
13 139 430840 11347752 181584 8457020 0 0 20 162803 13161 83450 56 12 0 32 0
37 159 430840 11270556 181596 8544224 0 0 28 177468 12830 86928 56 12 0 32 0
13 106 430840 11225544 181608 8576564 0 0 8 145426 10969 81220 44 10 0 46 0
5 129 430840 11076912 181624 8672268 0 0 24 165985 12993 76091 56 12 0 33 0
20 147 430840 11000204 181636 8762772 0 0 12 164560 12386 78853 54 11 0 35 0
4 145 430840 10882896 181644 8881848 0 0 12 194351 11334 87744 56 12 0 31 0
18 142 430840 10815152 181652 8971500 0 0 12 184373 12973 86380 56 12 0 32 0
0 145 430840 10699560 181652 9032636 0 0 20 182760 13675 84937 54 11 0 34 0
26 114 430840 10639944 181676 9111132 0 0 8 165500 10943 84461 49 11 0 40 0
7 145 430840 10603488 181692 9205220 0 0 12 153659 12772 80559 54 12 0 34 0
25 156 430840 10540588 181692 9270948 0 0 16 167579 11766 88772 54 12 0 34 0
4 147 430840 10403392 181696 9352060 0 0 20 189277 11858 84405 49 11 0 40 0
0 6 430840 10346588 181712 9348344 0 0 16 69504 7292 50131 29 6 28 37 0
3 9 430840 10345492 181716 9346532 0 0 4 18988 3512 11669 4 2 67 28 0
1 9 430840 10345580 181728 9347000 0 0 8 18346 4831 15912 5 1 67 27 0
0 9 430840 10345232 181728 9347008 0 0 0 12666 2634 8811 7 1 69 23 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 9 430840 10346556 181728 9347024 0 0 0 18617 2838 9473 2 1 72 25 0
0 9 430840 10346812 181728 9347040 0 0 8 19590 2373 8803 1 1 69 30 0
0 9 430840 10345700 181756 9347060 0 0 0 21247 2489 9173 1 1 69 30 0
0 9 430840 10345664 181756 9347060 0 0 0 16709 2267 8500 1 1 68 30 0
1 13 430840 10330840 181768 9353656 0 0 0 18096 2597 10217 2 1 63 34 0
2 10 430840 10326140 181776 9358208 0 0 0 14195 2449 9825 5 1 54 40 0
2 9 430840 10318944 181780 9369304 0 0 4 9759 2395 10677 4 1 72 23 0
6 169 430840 10320768 181784 9438808 0 0 16 58608 4061 32004 15 3 30 52 0
17 78 430840 10236052 181796 9487588 0 0 12 111005 6549 77658 27 6 1 66 0
1 67 430840 10185708 181796 9522020 0 0 8 151035 8287 74260 35 8 1 56 0
58 115 430840 10152872 181796 9597440 0 0 12 131884 10535 73960 49 11 1 39 0
0 91 430840 10012288 181796 9683248 0 0 16 172878 10104 87051 47 10 0 43 0
22 168 430840 9956108 181800 9789260 0 0 8 181761 12896 88912 58 12 0 30 0
3 60 430840 9844216 181800 9878776 0 0 24 231008 17683 128209 44 10 1 45 0
13 118 430840 9712044 181808 9977872 0 0 12 191476 13030 82508 55 13 0 31 0
52 149 430840 9607220 181828 10068992 0 0 24 172354 12474 79044 53 11 0 36 0
40 136 430836 9546236 181848 10178264 0 0 16 176817 11995 67351 56 11 0 33 0
14 108 430836 9445744 181860 10243344 0 0 16 169853 11594 81571 52 12 0 36 0
16 135 430836 9335812 181868 10314788 0 0 24 156292 12826 80775 55 10 0 34 0
22 135 430832 9285024 181880 10384712 4 0 20 145995 11808 85509 52 11 0 37 0
42 149 430832 9263232 181888 10447480 0 0 8 146207 11540 88523 46 11 0 43 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
39 91 430832 9219216 181888 10497768 0 0 12 169000 11753 81732 47 11 0 42 0
35 149 430832 9006304 181892 10648488 0 0 20 198358 14118 78675 63 12 0 24 0
4 87 430832 8991668 181892 10687788 0 0 16 159891 11603 83881 50 11 0 39 0
6 96 430832 8939668 181892 10775812 0 0 12 183032 12507 86485 51 11 0 38 0
1 17 430832 8907276 181892 10771488 0 0 12 37608 5659 24299 16 4 21 59 0
0 9 430832 8906156 181892 10769664 0 0 4 10412 2715 9971 3 1 67 29 0
2 8 430832 8911668 181900 10761916 0 0 0 10992 2550 9565 2 1 70 26 0
1 8 430832 8913652 181908 10761064 0 0 12 10443 2517 8576 6 1 74 19 0
2 10 430832 8913660 181916 10760972 0 0 0 16448 2668 9185 3 1 71 25 0
0 9 430832 8912368 181920 10761216 0 0 0 19600 2805 9409 1 1 64 34 0
0 10 430832 8912184 181932 10762560 0 0 0 21497 2708 10056 1 1 65 33 0
1 11 430832 8880176 181940 10772672 0 0 0 17626 2670 10452 4 1 56 40 0
0 16 430832 8879400 181940 10773616 0 0 0 12264 2137 9613 4 1 60 36 0
1 12 430832 8877388 181940 10775192 0 0 0 10907 2205 8791 5 1 65 30 0
0 6 430832 8863264 181948 10784336 0 0 0 8803 2266 9541 4 1 61 35 0
2 76 430832 8840196 181948 10820596 0 0 12 32790 3349 23402 10 3 34 54 0
38 96 430832 8778464 181956 10882716 0 0 4 132656 7828 70495 36 9 0 56 0
14 96 430832 8648912 181956 10948704 0 0 12 190094 10448 84937 44 10 0 46 0
28 55 430832 8694372 181956 11004440 0 0 20 150444 10170 75671 42 9 2 47 0
57 92 430832 8471096 181956 11126480 0 0 12 174309 13252 75828 59 12 0 28 0
31 144 430832 8406944 181960 11228676 0 0 16 196865 13793 78086 58 12 0 30 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
4 116 430832 8331976 181980 11323896 0 0 16 176413 12177 84025 57 12 0 31 0
49 87 430832 8260936 181988 11376764 0 0 20 170800 11914 86297 49 12 0 39 0
8 155 430832 8090556 181988 11512328 0 0 20 207992 14674 74145 60 13 0 28 0
25 178 430832 8025260 181988 11597748 0 0 12 191349 14284 76466 61 13 0 27 0
42 98 430832 7930892 181988 11651632 0 0 12 165709 11675 80743 49 11 0 39 0
20 149 430832 7858084 181992 11755800 0 0 12 192074 13251 83072 63 12 0 25 0
4 65 430832 7783952 181992 11809884 0 0 20 174105 10349 83165 45 11 0 44 0
2 90 430832 7681248 181992 11942988 0 0 28 190811 12488 66305 57 11 1 31 0
29 138 430832 7602920 181992 12014752 0 0 12 199183 13800 85697 59 12 0 29 0
4 134 430832 7478148 181992 12100736 0 0 32 179347 11920 82978 52 12 0 37 0
35 132 430832 7381244 182008 12180892 0 0 12 155177 12773 78983 53 11 0 35 0
11 152 430832 7339688 182012 12280948 0 0 20 183706 15253 90778 54 12 0 34 0
24 173 430832 7292052 182012 12332772 0 0 20 151734 12746 82916 49 11 0 40 0
3 9 430832 7228764 182012 12327060 0 0 12 33570 4709 25918 12 4 27 56 0
4 8 430832 7237628 182036 12318872 0 0 4 11534 2912 10521 7 1 65 27 0
3 12 430832 7236752 182040 12317548 0 0 4 12153 3051 10295 9 1 60 30 0
3 15 430832 7225060 182096 12317524 0 0 0 23207 3056 10277 4 1 40 55 0
4 10 430832 7156440 182104 12318404 0 0 0 15418 3684 10395 10 1 52 37 0
2 9 430832 7110308 182104 12318232 0 0 4 17087 2826 9682 9 1 58 32 0
2 9 430832 7064556 182120 12316444 0 0 4 21729 3175 9569 9 1 61 29 0
5 10 430832 7186772 182120 12317656 0 0 0 22880 3383 11807 9 1 57 33 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 10 430832 7120172 182144 12317680 0 0 0 22418 3104 9531 9 1 56 34 0
3 10 430832 7065944 182144 12317996 0 0 8 21784 2900 10156 10 1 55 35 0
2 36 430832 7221296 182160 12327684 0 0 0 27620 3384 12945 10 1 33 56 0
2 37 430832 7187596 182160 12346464 0 0 4 11365 2790 15446 7 1 31 61 0
37 107 430832 7152468 182160 12430648 0 0 0 113347 6996 62552 34 7 1 57 0
6 109 430832 7095944 182160 12500460 0 0 8 171843 12289 80990 45 11 1 43 0
25 106 430832 7018464 182168 12547420 0 0 12 152434 11468 81525 49 11 0 40 0
30 85 430832 6904544 182184 12655172 0 0 20 167764 12478 79809 53 11 0 36 0
8 119 430832 6805148 182204 12766288 0 0 12 219490 12935 82059 56 12 0 32 0
46 158 430832 6745564 182204 12835452 0 0 12 171273 12375 84811 59 12 0 29 0
16 121 430832 6638284 182220 12904236 0 0 16 158003 10421 88769 47 10 0 43 0
44 140 430832 6523892 182232 13005320 0 0 12 164752 13106 74528 54 12 0 35 0
34 145 430832 6455660 182248 13084188 0 0 16 168500 12615 79133 56 11 1 33 0
49 102 430832 6421512 182264 13138864 0 0 16 161159 11576 82037 48 11 0 41 0
6 163 430832 6285332 182272 13245248 0 0 24 196689 13435 84481 59 13 0 28 0
40 92 430832 6166676 182272 13344892 0 0 16 172684 11047 79840 54 11 0 35 0
39 103 430832 6082588 182272 13430756 0 0 12 205424 13226 82891 54 13 0 33 0
51 97 430832 6046312 182272 13498180 0 0 20 169285 12202 81242 54 13 0 32 0
37 63 430832 5928656 182276 13579156 0 0 20 181206 10705 82726 46 11 0 42 0
35 98 430832 5808700 182276 13688880 0 0 24 177746 13562 77105 59 12 0 30 0
28 202 430832 5640412 182284 13806160 0 0 4 199682 13244 83156 57 12 0 31 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
43 125 430832 5592036 182284 13859136 0 0 20 168065 11394 83517 46 11 0 43 0
5 20 430832 5577060 182284 13840620 0 0 12 48244 5397 30551 15 4 22 59 0
2 11 430832 5579668 182288 13838596 0 0 12 12218 2855 9820 2 1 46 50 0
1 8 430832 5587556 182292 13832900 0 0 0 12900 3086 11968 3 1 58 38 0
0 9 430832 5587884 182292 13832568 0 0 0 12966 2501 8733 6 1 57 36 0
0 9 430832 5587664 182308 13832568 0 0 0 21855 2787 9898 2 1 54 43 0
0 9 430832 5587620 182308 13833256 0 0 12 20376 2247 9171 1 1 65 34 0
0 13 430832 5584580 182324 13833568 0 0 0 18489 2664 10089 1 1 59 38 0
0 12 430832 5580732 182340 13844764 0 0 0 21184 2554 11092 4 1 57 39 0
0 9 430832 5578628 182340 13844972 0 0 4 9983 2199 8723 4 1 60 35 0
1 12 430832 5574260 182348 13846196 0 0 0 10833 2279 10745 5 1 55 39 0
0 9 430832 5568696 182348 13849636 0 0 0 12544 2449 10001 6 1 53 40 0
19 179 430832 5577028 182356 13924548 0 0 0 82009 4769 43711 19 4 14 63 0
2 91 430832 5519288 182356 13955704 0 0 12 124875 9508 90348 32 7 1 60 0
3 70 430832 5499360 182356 13998492 0 0 8 142365 10298 79852 41 9 1 49 0
3 180 430832 5301544 182364 14156156 0 0 8 169733 12106 62983 59 11 1 28 0
2 140 430832 5221368 182364 14207000 0 0 8 164527 11078 88587 50 11 0 39 0
26 123 430832 5183376 182372 14280796 0 0 20 167737 12672 79310 52 12 0 35 0
15 135 430832 5065704 182372 14362916 0 0 12 159556 13275 84934 56 12 0 32 0
12 153 430832 5013148 182376 14456168 0 0 28 187150 14015 81646 58 12 0 30 0
34 103 430832 4917480 182380 14520312 0 0 4 170750 13067 84530 52 12 0 35 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
18 167 430832 4808536 182384 14628588 0 0 20 173385 12763 85654 55 13 0 33 0
5 155 430832 4747012 182388 14700148 0 0 16 158022 11951 84344 49 12 0 40 0
71 101 430832 4781736 182400 14745304 0 0 8 56977 9082 46786 30 6 30 33 0
35 200 430832 4635292 182400 14822104 0 0 16 195590 13737 79284 55 11 0 34 0
1 65 430832 4547856 182400 14871956 0 0 8 141831 10801 78905 42 10 0 48 0
9 148 430832 4432860 182400 14988888 0 0 12 187576 14648 76966 63 13 0 24 0
26 141 430832 4357560 182408 15047352 0 0 28 178253 11865 93931 50 12 0 39 0
1 162 430832 4207760 182420 15163632 0 0 16 204029 12654 79205 58 12 1 30 0
4 209 430832 4131296 182440 15271344 0 0 12 186921 15679 76646 60 14 0 26 0
1 45 430832 4098208 182460 15284380 0 0 16 97978 8029 59292 34 7 0 59 0
2 12 430832 4101352 182464 15276624 0 0 0 15496 3203 11734 5 1 33 60 0
1 8 430832 4113100 182472 15264972 0 0 0 11394 2666 9735 4 1 57 37 0
0 17 430832 4111736 182476 15263100 0 0 0 15363 2758 10325 7 1 54 38 0
0 10 430832 4111728 182476 15263068 0 0 0 20446 3149 10091 2 1 53 44 0
0 11 430832 4111784 182476 15262120 0 0 4 18899 2929 9586 2 1 57 40 0
0 10 430832 4112144 182484 15261648 0 0 0 19696 2359 9707 1 1 53 45 0
1 9 430832 4110560 182504 15263016 0 0 0 18399 2383 9161 1 1 52 46 0
2 10 430832 4109196 182528 15262864 0 0 4 22758 2505 9837 2 1 56 41 0
0 9 430832 4107912 182540 15262420 0 0 4 23262 2317 9232 1 1 55 43 0
0 21 430832 4104448 182540 15264408 0 0 0 26286 2909 11609 2 1 39 58 0
2 27 430832 4080224 182540 15280360 0 0 0 29095 3508 16984 6 2 45 47 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 187 430832 4032536 182548 15358996 0 0 8 87817 4815 39357 19 4 0 77 0
20 72 430832 4027484 182556 15348684 0 0 8 96348 6134 66157 24 6 0 70 0
27 106 430832 3918064 182556 15441256 0 0 8 163189 12319 73545 47 10 1 42 0
54 152 430832 3792916 182556 15538668 0 0 16 170721 14768 77406 58 12 0 30 0
4 53 430832 3705984 182556 15611360 0 0 12 154054 11912 73289 46 11 0 43 0
25 119 430832 3595628 182580 15712648 0 0 16 183145 14543 80054 58 12 0 30 0
19 106 430832 3533264 182596 15776684 0 0 24 185833 13713 91116 51 12 0 37 0
20 81 430832 3462400 182616 15891508 0 0 12 174518 12380 74552 57 13 0 30 0
34 148 430832 3369144 182632 15958612 0 0 12 179627 13759 85650 57 13 0 30 0
0 177 430832 3231968 182632 16060176 0 0 12 174430 13119 83639 56 12 0 32 0
26 120 430832 3223152 182648 16115348 0 0 16 158368 13382 85568 50 12 0 38 0
45 106 430832 3118036 182648 16177148 0 0 8 169991 12321 77723 53 12 0 36 0
37 131 430832 3049760 182648 16272200 0 0 12 166147 12829 80270 54 12 0 34 0
55 68 430832 2981584 182656 16333852 0 0 12 151475 12992 81911 50 11 0 38 0
3 110 430832 2829528 182656 16465024 0 0 20 197492 13530 81724 57 12 0 31 0
1 127 430832 2749504 182656 16546728 0 0 24 186301 12986 81069 55 11 0 34 0
31 142 430832 2672636 182664 16618360 0 0 12 180145 13341 90794 57 13 0 31 0
2 162 430832 2555020 182684 16726176 0 0 24 187627 13346 85254 56 12 0 32 0
6 25 430832 2525616 182696 16747964 0 0 8 100901 9752 57511 35 8 4 53 0
2 9 430832 2537620 182708 16736012 0 0 12 16286 3078 12088 4 2 44 50 0
1 17 430832 2541232 182708 16731672 0 0 0 10914 2938 9183 8 1 43 47 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
5 11 430832 2544004 182728 16729676 0 0 8 22017 3012 9496 4 1 38 58 0
0 9 430832 2545324 182736 16729368 0 0 0 21576 2810 9820 2 1 58 39 0
0 9 430832 2545116 182736 16729444 0 0 0 18897 2548 9423 1 1 54 43 0
0 9 430832 2545076 182736 16729448 0 0 0 19571 2279 8870 1 1 53 46 0
0 9 430832 2545248 182736 16729448 0 0 0 19973 2244 9438 1 1 52 46 0
0 9 430832 2545356 182748 16730176 0 0 0 18652 2523 9070 1 1 49 49 0
2 12 430832 2544080 182748 16731836 0 0 12 20761 2424 10289 3 1 41 55 0
0 11 430832 2545028 182752 16731688 0 0 0 17075 2235 9011 1 1 52 46 0
1 32 430832 2539412 182752 16735996 0 0 0 27469 2577 12891 3 1 43 52 0
8 110 430832 2519564 182752 16805588 0 0 0 73009 4281 39222 16 3 7 74 0
1 105 430832 2441724 182768 16842800 0 0 8 140899 11582 74153 46 10 0 44 0
27 115 430832 2375164 182768 16926712 0 0 4 144975 13520 74639 48 11 0 41 0
34 69 430832 2324140 182768 16965344 0 0 12 127883 11410 75747 45 10 1 44 0
9 132 430832 2152868 182776 17111340 0 0 12 188683 14050 72602 59 13 0 28 0
7 163 430832 2163632 182792 17156332 0 0 16 172252 12196 87277 54 13 0 33 0
20 40 430832 2076112 182804 17189192 0 0 8 120106 10686 74399 41 10 1 48 0
1 85 430832 1924744 182820 17342348 0 0 12 159463 13397 67332 59 11 1 29 0
34 153 430832 1868456 182828 17390088 0 0 24 183372 14339 89236 55 12 0 33 0
26 170 430832 1828708 182828 17486124 0 0 20 182416 12573 79244 55 12 0 33 0
11 106 430832 1693648 182828 17555336 0 0 12 167428 11882 85383 49 11 0 39 0
2 65 430832 1597684 182836 17658604 0 0 24 170663 12278 70350 56 11 0 33 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
7 169 430832 1631772 182836 17742856 0 0 20 221064 14262 89363 63 12 0 25 0
3 83 430832 1466188 182836 17820396 0 0 16 163149 11726 77926 50 10 0 40 0
51 120 430832 1304428 182840 17916860 0 0 8 172262 12976 76891 56 12 0 32 0
27 153 430832 1193128 182840 18020932 0 0 20 186348 14004 81098 58 13 0 28 0
28 134 430832 1181268 182840 18069516 0 0 16 157109 11957 79511 50 12 0 38 0
6 139 430832 1068888 182856 18160588 0 0 16 172594 12572 82179 53 13 0 34 0
1 25 430832 1003848 182856 18182364 0 0 28 115430 8290 55645 32 7 1 60 0
1 8 430832 1012784 182856 18171048 0 0 8 15559 3442 12224 3 1 35 61 0
1 10 430832 1021236 182856 18163564 0 0 0 10502 3033 10048 9 1 52 39 0
2 9 430832 1024408 182856 18161248 0 0 0 16771 2590 9542 3 1 62 34 0
0 10 430832 1024824 182880 18161772 0 0 0 21187 2666 9747 2 1 57 40 0
0 10 430832 1025152 182888 18161780 0 0 0 17663 2620 9359 2 1 57 40 0
1 11 430832 1024996 182896 18161772 0 0 0 18770 2152 8765 1 1 60 39 0
1 13 430832 1002788 182896 18171208 0 0 0 22036 2955 11688 4 1 51 44 0
1 10 430832 1004196 182896 18177680 0 0 8 15530 2479 10650 4 1 63 32 0
1 9 430832 1003124 182900 18176756 0 0 4 10914 2031 8559 1 0 61 38 0
1 14 430832 1003532 182900 18175952 0 0 0 12138 2260 8640 7 0 40 52 0
6 13 430832 1000420 182900 18177596 0 0 4 18478 2264 9532 2 1 36 61 0
2 198 430832 946456 182900 18255060 0 0 4 82451 4629 39420 19 4 15 62 0
3 66 430832 898988 182908 18274052 0 0 8 104219 7377 64847 27 7 2 65 0
1 109 430832 791516 182908 18372952 0 0 12 147772 9655 67059 44 9 3 44 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 109 430832 731132 182916 18430284 0 0 16 157728 10811 76750 47 9 1 43 0
39 144 430832 646208 182916 18541404 0 0 12 183448 14205 74913 64 13 0 23 0
31 97 430824 569624 182928 18609336 0 0 24 165115 12842 82921 49 11 0 40 0
7 157 430816 429552 182932 18707260 0 0 24 196444 13382 83411 59 14 0 27 0
17 168 430816 384728 182936 18782688 0 0 12 174890 12994 80957 56 12 0 33 0
36 89 430808 385484 182944 18760340 0 0 20 158756 11935 82384 52 11 0 36 0
5 72 430804 407844 182944 18765852 0 0 28 171527 12945 85837 48 12 0 39 0
7 145 430804 383856 182944 18777612 0 0 12 198561 13099 73132 64 13 0 23 0
3 164 430804 372484 182944 18779448 0 0 20 165163 11670 86767 54 11 0 35 0
6 67 430804 369876 182944 18763424 0 0 12 145928 12015 84816 42 11 0 47 0
49 84 430804 362540 182960 18722528 0 0 16 175787 13562 72600 58 12 1 29 0
3 166 430796 357128 182960 18706376 0 0 16 192270 13351 80310 60 13 1 26 0
16 165 430796 348484 182960 18727420 0 0 12 167477 13190 82223 51 12 0 37 0
51 105 430788 388024 182960 18703624 0 0 12 164181 12882 79848 55 12 0 33 0
46 115 430788 384244 182960 18715664 0 0 20 172312 13639 79320 54 12 0 34 0
0 15 430784 403400 182960 18660664 0 0 4 101439 8324 53302 31 7 10 52 0
3 8 430784 406028 182976 18657228 0 0 8 17188 3128 10893 5 1 56 38 0
0 8 430784 407976 182976 18655184 0 0 4 10062 2634 9018 4 1 59 36 0
1 8 430784 409228 182976 18653168 0 0 4 9691 2461 7590 7 1 61 31 0
2 17 430784 409504 182976 18653256 0 0 0 20021 2681 9304 3 1 59 38 0
0 9 430784 409156 182976 18653480 0 0 0 18441 2738 9420 2 1 41 57 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 12 430784 409076 182984 18654084 0 0 8 23968 2505 9788 2 1 63 35 0
0 9 430784 408268 182984 18654344 0 0 0 26352 2411 9835 1 1 63 35 0
1 11 430784 407632 182984 18655692 0 0 0 25572 2495 9989 2 1 59 38 0
2 15 430784 413260 182984 18661304 0 0 0 22855 2588 11306 4 1 36 59 0
8 16 430784 383664 182984 18674532 0 0 0 12244 2826 11535 6 1 25 68 0
0 67 430784 352400 182992 18698808 0 0 4 22768 3257 19960 7 2 6 85 0
12 95 430784 369888 182992 18653920 0 0 16 98346 5364 60237 23 5 1 70 0
45 58 430780 374428 183000 18674616 0 0 12 79089 6236 57382 23 6 2 69 0
50 76 430768 388068 183000 18715480 0 0 8 137153 10369 64411 43 9 5 43 0
14 151 430768 392516 183000 18690316 0 0 12 223120 13612 75940 62 13 0 25 0
47 76 430768 399368 183008 18691376 0 0 12 155333 13291 84338 51 11 0 37 0
38 122 430768 370248 183008 18711480 0 0 16 179436 13975 75826 58 13 0 29 0
55 157 430760 376116 183008 18734916 0 0 20 169438 12794 81610 55 13 0 32 0
7 108 430760 363152 183008 18702464 0 0 16 164565 12782 81214 54 12 0 34 0
6 130 430760 367076 183016 18676408 0 0 16 187664 13958 84970 56 13 0 32 0
57 136 430760 378568 183024 18627816 0 0 8 162469 14710 76188 59 12 0 28 0
7 110 430760 386636 183024 18642952 0 0 24 206321 11646 86940 48 12 0 39 0
4 154 430760 391104 183024 18635520 0 0 8 185573 13988 78816 61 13 0 26 0
1 134 430760 363172 183024 18633084 0 0 16 157201 12162 84052 52 12 0 36 0
30 197 430760 366768 183024 18645804 0 0 20 170547 13654 85282 56 13 0 31 0
26 158 430760 377744 183024 18666712 0 0 16 155309 12877 83817 49 12 0 39 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 99 430760 362928 183028 18634604 0 0 24 218924 15695 109322 50 12 0 38 0
2 200 430760 357756 183028 18650376 0 0 12 183368 14874 75523 59 12 0 28 0
3 100 430760 349328 183036 18659112 0 0 16 145257 11148 85772 48 11 0 41 0
0 15 430760 390748 183036 18613312 0 0 12 98759 8320 56866 28 7 7 58 0
0 12 430760 395236 183036 18607208 0 0 8 16015 3481 12286 6 2 43 49 0
1 15 430760 427108 183040 18589320 0 0 12 11584 2975 9625 3 1 40 55 0
0 13 430760 426500 183044 18588740 0 0 0 13786 2863 9728 7 1 49 43 0
2 10 430760 426672 183044 18588944 0 0 0 21478 2836 10244 2 1 42 55 0
0 10 430760 425940 183048 18588740 0 0 0 20641 2577 9683 2 1 53 45 0
0 9 430760 422324 183048 18588252 0 0 0 22714 2483 10005 2 1 57 40 0
1 11 430760 419928 183052 18589812 0 0 0 21773 2576 10375 2 1 50 48 0
0 13 430760 418280 183052 18590356 0 0 0 18764 2712 9921 1 1 53 45 0
2 17 430760 415504 183052 18601296 0 0 0 20593 2593 11064 3 1 48 47 0
0 9 430760 405800 183052 18600864 0 0 12 16455 2407 10650 3 1 51 45 0
1 86 430760 384908 183052 18623728 0 0 0 21050 2632 14939 6 2 37 55 0
0 99 430760 374308 183064 18629356 0 0 4 98204 6887 56363 32 6 0 61 0
1 8 430760 389852 183064 18596248 0 0 8 105864 9786 75405 29 8 4 59 0
47 146 430760 416216 183064 18567844 0 0 12 153230 14397 67270 57 12 5 27 0
11 129 430760 379180 183064 18565504 0 0 12 174608 12458 86243 51 12 0 37 0
39 150 430760 439632 183064 18540912 0 0 12 178039 15192 75506 59 13 0 27 0
11 112 430760 407348 183072 18534556 0 0 20 165649 12497 82486 53 11 0 36 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
4 129 430760 362580 183080 18544040 0 0 8 171153 12459 85940 53 12 0 36 0
5 115 430760 386612 183080 18541576 0 0 16 168684 15806 80199 57 12 0 31 0
1 156 430760 367868 183080 18562772 0 0 20 177696 16903 84089 59 13 0 28 0
9 133 430760 447312 183080 18508320 0 0 492 175445 13406 87204 53 12 0 34 0
9 168 430760 374048 183084 18560744 0 0 20 173814 13570 85486 56 13 0 31 0
35 157 430760 392688 183084 18539148 0 0 24 163056 12867 85943 51 12 0 37 0
23 183 430760 421188 183084 18462656 0 0 12 159419 12733 80328 55 12 0 33 0
57 78 430760 409948 183092 18473132 0 0 16 151895 11828 84786 49 11 0 40 0
2 108 430760 446884 183092 18434888 0 0 20 217834 12870 85467 56 13 0 30 0
4 176 430760 418088 183096 18514200 0 0 16 194506 15142 86374 57 13 0 30 0
50 154 430760 427636 183096 18493032 0 0 20 168653 13874 81895 57 12 0 31 0
6 154 430760 402420 183096 18492916 0 0 12 167640 13178 83834 54 11 0 35 0
1 15 430760 403432 183096 18446628 0 0 4 85586 7829 51823 26 6 9 60 0
0 13 430760 437096 183104 18412140 0 0 12 13608 3209 12965 4 2 45 49 0
1 8 430760 444408 183112 18404704 0 0 0 10311 2869 9907 9 1 40 51 0
0 9 430760 451044 183112 18399344 0 0 0 23790 2771 10215 3 1 50 46 0
0 9 430760 451012 183112 18399444 0 0 4 18831 2866 10231 3 1 52 44 0
0 14 430760 449656 183112 18399840 0 0 4 21923 3074 11328 3 1 45 51 0
0 9 430760 449988 183112 18399844 0 0 0 20268 2365 8966 1 1 50 49 0
2 12 430760 450636 183128 18399688 0 0 0 22752 2462 10160 1 1 49 49 0
0 9 430760 450280 183128 18399584 0 0 8 21597 2341 9420 2 1 54 44 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 9 430760 449924 183144 18399880 0 0 0 20993 2272 8934 1 1 59 39 0
0 17 430760 448660 183164 18396684 0 0 0 23313 3012 12095 4 1 39 56 0
1 17 430760 438264 183164 18412324 0 0 0 17152 2786 12393 7 1 30 62 0
3 173 430760 401696 183176 18489708 0 0 8 80968 6144 43300 26 5 24 45 0
15 68 430760 388424 183184 18474668 0 0 8 108106 8751 82842 31 9 0 59 0
47 54 430760 399944 183184 18469960 0 0 16 147029 11545 82947 47 11 0 42 0
[-- Attachment #5: iostat.out --]
[-- Type: application/octet-stream, Size: 256575 bytes --]
Linux 3.16.0-38-generic (tie-fighter-bottom) 06/03/15 _x86_64_ (16 CPU)
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.01 0.00 0.86 36.90 0.01 2.14 116.51 2.20 58.24 80.21 57.73 6.48 24.45
sde2 0.01 0.00 0.88 37.21 0.01 2.15 115.90 2.40 62.91 76.21 62.59 6.45 24.57
sdg2 0.01 0.00 0.85 36.86 0.01 2.14 116.61 2.05 54.31 80.57 53.70 6.47 24.41
sdh2 0.01 0.00 0.85 36.98 0.01 2.14 116.17 2.01 53.12 77.27 52.56 6.44 24.36
sdj2 0.01 0.00 0.85 36.97 0.01 2.14 116.19 2.20 58.27 83.51 57.68 6.47 24.48
sdn2 0.01 0.00 0.86 36.94 0.01 2.14 116.50 2.25 59.47 81.99 58.95 6.48 24.50
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 19.99 319.77 22.90 219.06 0.00 219.06 7.81 100.00
sde2 0.00 0.00 0.00 100.00 0.00 13.86 283.88 10.71 76.20 0.00 76.20 9.12 91.20
sdg2 0.00 0.00 0.00 119.00 0.00 16.80 289.07 9.88 56.94 0.00 56.94 8.34 99.20
sdh2 0.00 0.00 0.00 127.00 0.00 13.85 223.43 10.76 81.76 0.00 81.76 7.87 100.00
sdj2 0.00 0.00 0.00 122.00 0.00 15.95 267.78 12.70 95.93 0.00 95.93 7.97 97.20
sdn2 0.00 0.00 0.00 110.00 0.00 13.69 254.92 10.37 69.31 0.00 69.31 8.55 94.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 138.00 0.00 21.80 323.59 30.00 215.91 0.00 215.91 7.25 100.00
sde2 0.00 0.00 0.00 121.00 0.00 17.48 295.88 16.73 141.59 0.00 141.59 8.26 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 18.56 304.03 11.67 116.64 0.00 116.64 8.00 100.00
sdh2 0.00 0.00 0.00 124.00 0.00 14.86 245.46 17.52 148.55 0.00 148.55 8.06 100.00
sdj2 0.00 0.00 0.00 121.00 0.00 16.87 285.58 24.10 182.02 0.00 182.02 8.26 100.00
sdn2 0.00 0.00 0.00 126.00 0.00 18.09 294.09 15.38 143.75 0.00 143.75 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 123.00 0.00 18.31 304.95 32.16 218.41 0.00 218.41 8.13 100.00
sde2 0.00 0.00 0.00 130.00 0.00 21.86 344.35 18.79 135.26 0.00 135.26 7.69 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 19.65 327.11 12.22 98.50 0.00 98.50 8.00 98.40
sdh2 0.00 0.00 0.00 133.00 0.00 21.72 334.47 17.60 128.45 0.00 128.45 7.52 100.00
sdj2 0.00 0.00 0.00 134.00 0.00 21.92 334.96 21.15 182.57 0.00 182.57 7.46 100.00
sdn2 0.00 0.00 0.00 124.00 0.00 18.68 308.51 12.32 90.61 0.00 90.61 8.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 136.00 0.00 21.38 321.89 23.83 219.74 0.00 219.74 7.35 100.00
sde2 0.00 0.00 0.00 129.00 0.00 19.11 303.36 14.59 120.96 0.00 120.96 7.75 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 20.13 322.13 15.24 83.62 0.00 83.62 7.81 100.00
sdh2 0.00 0.00 0.00 132.00 0.00 20.87 323.73 12.58 101.55 0.00 101.55 7.58 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 22.14 354.21 14.46 119.97 0.00 119.97 7.81 100.00
sdn2 0.00 0.00 0.00 139.00 0.00 21.08 310.60 19.12 115.42 0.00 115.42 7.19 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 17.01 278.71 26.61 220.77 0.00 220.77 8.00 100.00
sde2 0.00 0.00 0.00 144.00 0.00 23.06 327.90 18.69 134.92 0.00 134.92 6.94 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 19.27 283.90 19.61 159.83 0.00 159.83 7.19 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 20.85 318.64 23.06 161.52 0.00 161.52 7.46 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 17.00 272.05 22.88 152.22 0.00 152.22 7.81 100.00
sdn2 0.00 0.00 0.00 134.00 0.00 22.08 337.52 21.60 188.27 0.00 188.27 7.46 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 142.57 0.00 21.73 312.11 25.08 141.47 0.00 141.47 6.97 99.41
sde2 0.00 0.00 0.00 129.70 0.00 19.98 315.47 16.70 130.96 0.00 130.96 7.66 99.41
sdg2 0.00 0.00 0.00 134.65 0.00 20.54 312.41 22.03 162.06 0.00 162.06 7.38 99.41
sdh2 0.00 0.00 0.00 132.67 0.00 17.83 275.22 15.46 119.61 0.00 119.61 7.49 99.41
sdj2 0.00 0.00 0.00 143.56 0.00 20.33 290.01 22.19 145.77 0.00 145.77 6.92 99.41
sdn2 0.00 0.00 0.00 128.71 0.00 19.04 303.02 18.36 101.51 0.00 101.51 7.72 99.41
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 20.74 334.50 28.16 250.87 0.00 250.87 7.87 100.00
sde2 0.00 0.00 0.00 130.00 0.00 18.71 294.76 11.93 103.38 0.00 103.38 7.69 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 18.71 306.53 24.07 194.82 0.00 194.82 8.00 100.00
sdh2 0.00 0.00 0.00 118.00 0.00 17.52 304.14 11.56 115.22 0.00 115.22 8.47 100.00
sdj2 0.00 0.00 0.00 127.00 0.00 20.17 325.28 24.24 229.17 0.00 229.17 7.87 100.00
sdn2 0.00 0.00 0.00 118.00 0.00 20.04 347.84 22.16 218.88 0.00 218.88 8.47 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 143.00 0.00 23.93 342.72 22.69 170.01 0.00 170.01 6.99 100.00
sde2 0.00 0.00 0.00 130.00 0.00 20.79 327.50 14.85 102.86 0.00 102.86 7.69 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 19.22 312.41 18.88 151.40 0.00 151.40 7.94 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 18.44 302.10 12.48 96.74 0.00 96.74 8.00 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 20.14 317.21 18.94 141.97 0.00 141.97 7.69 100.00
sdn2 0.00 0.00 0.00 134.00 0.00 18.24 278.81 25.94 183.58 0.00 183.58 7.46 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 19.40 301.05 18.61 137.21 0.00 137.21 7.58 100.00
sde2 0.00 0.00 0.00 130.00 0.00 17.22 271.28 18.47 122.00 0.00 122.00 7.69 100.00
sdg2 0.00 0.00 0.00 120.00 0.00 19.18 327.37 13.94 119.87 0.00 119.87 8.33 100.00
sdh2 0.00 0.00 0.00 135.00 0.00 21.99 333.56 16.44 113.69 0.00 113.69 7.41 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 20.99 335.79 14.80 115.09 0.00 115.09 7.81 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 20.73 314.46 21.02 150.90 0.00 150.90 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 139.00 0.00 22.67 333.96 17.77 113.24 0.00 113.24 7.19 100.00
sde2 0.00 0.00 0.00 144.00 0.00 22.98 326.78 19.26 146.06 0.00 146.06 6.94 100.00
sdg2 0.00 0.00 0.00 140.00 0.00 23.54 344.43 29.05 164.11 0.00 164.11 7.14 100.00
sdh2 0.00 0.00 0.00 136.00 0.00 22.30 335.79 15.71 108.12 0.00 108.12 7.35 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 22.82 346.16 24.59 167.70 0.00 167.70 7.41 100.00
sdn2 0.00 0.00 0.00 134.00 0.00 26.44 404.03 24.85 197.31 0.00 197.31 7.46 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 23.78 371.69 23.64 167.76 0.00 167.76 7.63 100.00
sde2 0.00 0.00 0.00 137.00 0.00 24.58 367.45 20.44 141.84 0.00 141.84 7.30 100.00
sdg2 0.00 0.00 0.00 141.00 0.00 22.42 325.65 26.42 199.77 0.00 199.77 7.09 100.00
sdh2 0.00 0.00 0.00 124.00 0.00 18.39 303.66 12.22 105.16 0.00 105.16 8.06 100.00
sdj2 0.00 0.00 0.00 133.00 0.00 23.96 368.99 21.78 169.83 0.00 169.83 7.52 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 19.65 309.53 17.38 135.94 0.00 135.94 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 20.80 340.75 24.61 202.18 0.00 202.18 8.00 100.00
sde2 0.00 0.00 0.00 132.00 0.00 23.12 358.67 18.86 142.48 0.00 142.48 7.58 100.00
sdg2 0.00 0.00 0.00 150.00 0.00 24.18 330.10 24.40 183.97 0.00 183.97 6.67 100.00
sdh2 0.00 0.00 0.00 145.00 0.00 22.07 311.76 20.04 126.57 0.00 126.57 6.90 100.00
sdj2 0.00 0.00 0.00 142.00 0.00 22.84 329.44 18.02 130.11 0.00 130.11 7.04 100.00
sdn2 0.00 0.00 0.00 136.00 0.00 21.86 329.17 24.18 170.24 0.00 170.24 7.35 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 138.00 0.00 22.44 333.04 29.06 217.68 0.00 217.68 7.25 100.00
sde2 0.00 0.00 0.00 131.00 0.00 20.41 319.09 24.28 168.06 0.00 168.06 7.63 100.00
sdg2 0.00 0.00 0.00 135.00 0.00 19.86 301.24 26.74 186.99 0.00 186.99 7.41 100.00
sdh2 0.00 0.00 0.00 121.00 0.00 17.25 292.03 19.24 164.07 0.00 164.07 8.26 100.00
sdj2 0.00 0.00 0.00 126.00 0.00 20.57 334.42 18.68 163.40 0.00 163.40 7.94 100.00
sdn2 0.00 0.00 0.00 123.00 0.00 17.99 299.54 16.46 154.93 0.00 154.93 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 18.81 294.05 23.20 193.50 0.00 193.50 7.63 100.00
sde2 0.00 0.00 0.00 131.00 0.00 18.06 282.28 17.14 160.34 0.00 160.34 7.63 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 19.74 303.92 22.52 178.98 0.00 178.98 7.52 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 19.60 301.83 15.24 121.26 0.00 121.26 7.52 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 16.55 251.02 16.31 105.21 0.00 105.21 7.41 100.00
sdn2 0.00 0.00 0.00 122.00 0.00 16.19 271.84 25.12 166.59 0.00 166.59 8.20 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 136.00 0.00 17.64 265.57 17.83 111.44 0.00 111.44 7.35 100.00
sde2 0.00 0.00 0.00 141.00 0.00 20.41 296.51 16.49 121.16 0.00 121.16 7.09 100.00
sdg2 0.00 0.00 0.00 140.00 0.00 21.95 321.13 13.44 111.20 0.00 111.20 7.14 100.00
sdh2 0.00 0.00 0.00 138.00 0.00 20.13 298.76 21.76 143.65 0.00 143.65 7.25 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 17.71 268.67 24.65 167.02 0.00 167.02 7.41 100.00
sdn2 0.00 0.00 0.00 145.00 0.00 22.20 313.59 25.18 188.50 0.00 188.50 6.90 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 122.00 0.00 20.00 335.69 18.13 178.23 0.00 178.23 8.20 100.00
sde2 0.00 0.00 0.00 116.00 0.00 18.01 317.97 9.56 90.97 0.00 90.97 8.62 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 23.97 399.15 10.18 81.85 0.00 81.85 8.13 100.00
sdh2 0.00 0.00 0.00 138.00 0.00 24.76 367.51 19.69 159.07 0.00 159.07 7.25 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 23.25 352.78 25.20 188.47 0.00 188.47 7.41 100.00
sdn2 0.00 0.00 0.00 144.00 0.00 24.52 348.78 22.77 159.61 0.00 159.61 6.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 94.00 0.00 12.60 274.51 19.68 100.21 0.00 100.21 6.81 64.00
sde2 0.00 0.00 0.00 205.00 0.00 11.49 114.76 72.14 180.86 0.00 180.86 4.70 96.40
sdg2 0.00 0.00 0.00 214.00 0.00 10.48 100.25 73.92 144.60 0.00 144.60 4.67 100.00
sdh2 0.00 0.00 0.00 75.00 0.00 13.75 375.48 12.09 185.28 0.00 185.28 8.43 63.20
sdj2 0.00 0.00 0.00 195.00 0.00 15.41 161.85 59.42 170.24 0.00 170.24 5.13 100.00
sdn2 0.00 0.00 0.00 151.00 0.00 12.09 163.91 43.98 163.31 0.00 163.31 5.99 90.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 254.00 0.00 1.94 15.65 143.26 379.98 0.00 379.98 3.94 100.00
sde2 0.00 0.00 0.00 173.00 0.00 1.28 15.17 142.94 673.25 0.00 673.25 5.78 100.00
sdg2 0.00 0.00 0.00 189.00 0.00 1.41 15.23 145.08 650.58 0.00 650.58 5.29 100.00
sdh2 0.00 0.00 0.00 259.00 0.00 1.87 14.77 136.66 265.62 0.00 265.62 3.68 95.20
sdj2 0.00 0.00 0.00 203.00 0.00 1.53 15.42 142.87 538.34 0.00 538.34 4.93 100.00
sdn2 0.00 0.00 0.00 225.00 0.00 1.68 15.31 143.85 471.63 0.00 471.63 4.44 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 158.00 0.00 1.23 15.91 143.30 874.58 0.00 874.58 6.33 100.00
sde2 0.00 0.00 0.00 168.00 0.00 1.34 16.35 143.55 855.02 0.00 855.02 5.95 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.29 15.68 143.62 875.38 0.00 875.38 5.95 100.00
sdh2 0.00 0.00 0.00 168.00 0.00 1.31 15.95 143.64 880.86 0.00 880.86 5.95 100.00
sdj2 0.00 0.00 0.00 176.00 0.00 1.36 15.80 143.92 853.70 0.00 853.70 5.68 100.00
sdn2 0.00 0.00 0.00 185.00 0.00 1.45 16.10 143.72 792.02 0.00 792.02 5.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 165.00 0.00 1.23 15.30 142.52 855.20 0.00 855.20 6.06 100.00
sde2 0.00 0.00 0.00 168.00 0.00 1.26 15.31 143.90 872.76 0.00 872.76 5.95 100.00
sdg2 0.00 0.00 0.00 180.00 0.00 1.39 15.84 143.97 817.58 0.00 817.58 5.56 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.33 15.83 143.12 852.19 0.00 852.19 5.81 100.00
sdj2 0.00 0.00 0.00 168.00 0.00 1.27 15.54 142.80 830.38 0.00 830.38 5.95 100.00
sdn2 0.00 0.00 0.00 165.00 0.00 1.26 15.68 143.80 852.68 0.00 852.68 6.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 165.00 0.00 1.26 15.62 143.11 894.81 0.00 894.81 6.06 100.00
sde2 0.00 0.00 0.00 169.00 0.00 1.30 15.73 143.70 826.89 0.00 826.89 5.92 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.27 15.52 144.03 821.81 0.00 821.81 5.95 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.25 14.90 142.87 826.53 0.00 826.53 5.81 100.00
sdj2 0.00 0.00 0.00 176.00 0.00 1.37 15.95 143.97 824.18 0.00 824.18 5.68 100.00
sdn2 0.00 0.00 0.00 178.00 0.00 1.43 16.42 142.22 833.89 0.00 833.89 5.62 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 172.00 0.00 1.33 15.78 143.46 835.53 0.00 835.53 5.81 100.00
sde2 0.00 0.00 0.00 155.00 0.00 1.24 16.43 143.86 902.89 0.00 902.89 6.45 100.00
sdg2 0.00 0.00 0.00 169.00 0.00 1.31 15.83 142.94 885.23 0.00 885.23 5.92 100.00
sdh2 0.00 0.00 0.00 169.00 0.00 1.29 15.63 143.33 863.69 0.00 863.69 5.92 100.00
sdj2 0.00 0.00 0.00 160.00 0.00 1.22 15.60 142.11 879.20 0.00 879.20 6.25 100.00
sdn2 0.00 0.00 0.00 170.00 0.00 1.29 15.55 143.19 811.34 0.00 811.34 5.88 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.26 15.77 143.72 859.98 0.00 859.98 6.10 100.00
sde2 0.00 0.00 0.00 172.00 0.00 1.31 15.56 142.81 867.14 0.00 867.14 5.81 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.28 15.60 143.36 842.19 0.00 842.19 5.95 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.28 15.28 143.08 806.21 0.00 806.21 5.81 100.00
sdj2 0.00 0.00 0.00 172.00 0.00 1.35 16.10 143.62 846.44 0.00 846.44 5.81 100.00
sdn2 0.00 0.00 0.00 179.00 0.00 1.36 15.58 143.83 811.82 0.00 811.82 5.59 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.23 15.05 143.11 832.89 0.00 832.89 5.99 100.00
sde2 0.00 0.00 0.00 171.00 0.00 1.28 15.36 144.66 834.08 0.00 834.08 5.85 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 1.27 15.85 144.21 867.12 0.00 867.12 6.10 100.00
sdh2 0.00 0.00 0.00 179.00 0.00 1.37 15.65 143.35 815.51 0.00 815.51 5.59 100.00
sdj2 0.00 0.00 0.00 162.00 0.00 1.21 15.23 143.33 848.30 0.00 848.30 6.17 100.00
sdn2 0.00 0.00 0.00 161.00 0.00 1.25 15.91 143.52 864.40 0.00 864.40 6.21 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.18 14.70 143.22 868.00 0.00 868.00 6.10 100.00
sde2 0.00 0.00 0.00 165.00 0.00 1.30 16.17 143.29 854.38 0.00 854.38 6.06 100.00
sdg2 0.00 0.00 0.00 170.00 0.00 1.26 15.18 144.50 845.69 0.00 845.69 5.88 100.00
sdh2 0.00 0.00 0.00 173.00 0.00 1.32 15.61 143.64 836.37 0.00 836.37 5.78 100.00
sdj2 0.00 0.00 0.00 175.00 0.00 1.37 16.01 143.25 850.81 0.00 850.81 5.71 100.00
sdn2 0.00 0.00 0.00 152.00 0.00 1.15 15.43 143.03 896.79 0.00 896.79 6.58 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 179.00 0.00 1.39 15.88 144.20 845.25 0.00 845.25 5.59 100.00
sde2 0.00 0.00 0.00 176.00 0.00 1.29 15.01 142.83 855.89 0.00 855.89 5.68 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.29 16.01 143.28 883.18 0.00 883.18 6.06 100.00
sdh2 0.00 0.00 0.00 159.00 0.00 1.17 15.11 143.48 832.08 0.00 832.08 6.29 100.00
sdj2 0.00 0.00 0.00 168.00 0.00 1.28 15.63 144.09 842.48 0.00 842.48 5.95 100.00
sdn2 0.00 0.00 0.00 173.00 0.00 1.31 15.47 142.48 897.66 0.00 897.66 5.78 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.26 15.49 143.80 852.05 0.00 852.05 5.99 100.00
sde2 0.00 0.00 0.00 172.00 0.00 1.33 15.83 144.43 802.58 0.00 802.58 5.81 100.00
sdg2 0.00 0.00 0.00 163.00 0.00 1.22 15.33 143.66 855.58 0.00 855.58 6.13 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.29 15.31 142.81 878.09 0.00 878.09 5.81 100.00
sdj2 0.00 0.00 0.00 160.00 0.00 1.57 20.11 144.47 864.17 0.00 864.17 6.25 100.00
sdn2 0.00 0.00 0.00 169.00 0.00 1.33 16.06 143.39 810.37 0.00 810.37 5.92 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 160.00 0.00 1.71 21.94 140.95 884.33 0.00 884.33 6.25 100.00
sde2 0.00 0.00 0.00 166.00 0.00 1.77 21.84 114.76 894.80 0.00 894.80 6.02 100.00
sdg2 0.00 0.00 0.00 163.00 0.00 1.26 15.82 81.82 887.51 0.00 887.51 6.13 100.00
sdh2 0.00 0.00 0.00 175.00 0.00 1.27 14.82 143.00 834.17 0.00 834.17 5.71 100.00
sdj2 0.00 0.00 0.00 155.00 0.00 1.18 15.55 142.98 959.95 0.00 959.95 6.45 100.00
sdn2 0.00 0.00 0.00 158.00 0.00 1.21 15.66 103.31 878.91 0.00 878.91 6.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 149.00 0.00 6.36 87.41 54.14 745.74 0.00 745.74 6.71 100.00
sde2 0.00 0.00 0.00 116.00 0.00 10.16 179.46 10.07 343.48 0.00 343.48 8.62 100.00
sdg2 0.00 0.00 0.00 69.00 0.00 7.04 209.07 2.98 74.49 0.00 74.49 11.65 80.40
sdh2 0.00 0.00 0.00 153.00 0.00 4.90 65.59 67.08 786.14 0.00 786.14 6.54 100.00
sdj2 0.00 0.00 0.00 153.00 0.00 7.61 101.80 59.76 778.93 0.00 778.93 6.54 100.00
sdn2 0.00 0.00 0.00 114.00 0.00 9.99 179.53 6.88 300.81 0.00 300.81 8.77 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 115.00 0.00 18.47 328.97 10.96 123.41 0.00 123.41 8.70 100.00
sde2 0.00 0.00 0.00 104.00 0.00 12.21 240.49 4.24 40.69 0.00 40.69 9.54 99.20
sdg2 0.00 0.00 0.00 102.00 0.00 16.48 330.84 5.20 47.76 0.00 47.76 9.14 93.20
sdh2 0.00 0.00 0.00 115.00 0.00 16.28 289.84 13.48 152.14 0.00 152.14 8.70 100.00
sdj2 0.00 0.00 0.00 119.00 0.00 17.61 303.08 9.94 109.41 0.00 109.41 8.40 100.00
sdn2 0.00 0.00 0.00 91.00 0.00 14.76 332.24 6.14 65.63 0.00 65.63 10.55 96.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 115.00 0.00 15.78 281.01 10.66 97.57 0.00 97.57 8.70 100.00
sde2 0.00 0.00 0.00 111.00 0.00 19.85 366.32 8.74 80.14 0.00 80.14 8.50 94.40
sdg2 0.00 0.00 0.00 95.00 0.00 15.05 324.40 7.00 73.89 0.00 73.89 9.22 87.60
sdh2 0.00 0.00 0.00 99.00 0.00 15.62 323.20 7.86 95.76 0.00 95.76 8.77 86.80
sdj2 0.00 0.00 0.00 116.00 0.00 16.97 299.57 7.38 67.03 0.00 67.03 8.62 100.00
sdn2 0.00 0.00 0.00 101.00 0.00 14.20 287.94 7.38 77.47 0.00 77.47 9.15 92.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 104.00 0.00 16.32 321.44 14.07 98.35 0.00 98.35 9.23 96.00
sde2 0.00 0.00 0.00 100.00 0.00 12.78 261.66 10.61 72.44 0.00 72.44 9.44 94.40
sdg2 0.00 0.00 0.00 117.00 0.00 17.72 310.16 9.36 63.73 0.00 63.73 8.44 98.80
sdh2 0.00 0.00 0.00 111.00 0.00 14.56 268.62 9.74 71.42 0.00 71.42 8.29 92.00
sdj2 0.00 0.00 0.00 119.00 0.00 17.89 307.91 10.14 59.03 0.00 59.03 8.37 99.60
sdn2 0.00 0.00 0.00 116.00 0.00 15.97 281.97 12.70 76.28 0.00 76.28 8.31 96.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 134.00 0.00 22.74 347.49 17.40 148.60 0.00 148.60 7.46 100.00
sde2 0.00 0.00 0.00 126.00 0.00 20.80 338.05 28.28 239.75 0.00 239.75 7.94 100.00
sdg2 0.00 0.00 0.00 122.00 0.00 20.45 343.30 21.92 177.21 0.00 177.21 8.20 100.00
sdh2 0.00 0.00 0.00 110.00 0.00 17.15 319.36 14.89 144.62 0.00 144.62 9.09 100.00
sdj2 0.00 0.00 0.00 134.00 0.00 22.39 342.27 18.22 141.10 0.00 141.10 7.46 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 20.42 321.65 25.66 202.52 0.00 202.52 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 19.50 322.06 8.95 76.87 0.00 76.87 8.06 100.00
sde2 0.00 0.00 0.00 120.00 0.00 19.56 333.82 35.36 234.43 0.00 234.43 8.33 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 17.26 276.15 22.01 172.16 0.00 172.16 7.81 100.00
sdh2 0.00 0.00 0.00 136.00 0.00 21.35 321.51 14.27 96.29 0.00 96.29 7.35 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 22.22 337.04 26.35 180.06 0.00 180.06 7.41 100.00
sdn2 0.00 0.00 0.00 146.00 0.00 23.74 333.01 26.73 191.81 0.00 191.81 6.85 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 115.00 0.00 15.38 273.89 10.74 102.54 0.00 102.54 8.35 96.00
sde2 0.00 0.00 0.00 107.00 0.00 18.58 355.63 26.00 279.40 0.00 279.40 9.35 100.00
sdg2 0.00 0.00 0.00 115.00 0.00 15.55 276.94 11.32 119.58 0.00 119.58 8.49 97.60
sdh2 0.00 0.00 0.00 105.00 0.00 14.80 288.70 7.76 92.27 0.00 92.27 8.30 87.20
sdj2 0.00 0.00 0.00 116.00 0.00 15.10 266.51 12.63 147.72 0.00 147.72 8.00 92.80
sdn2 0.00 0.00 0.00 126.00 0.00 17.67 287.17 21.93 186.54 0.00 186.54 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 123.00 0.00 21.63 360.07 14.00 103.90 0.00 103.90 8.13 100.00
sde2 0.00 0.00 0.00 145.00 0.00 23.64 333.88 24.32 186.73 0.00 186.73 6.90 100.00
sdg2 0.00 0.00 0.00 116.00 0.00 21.16 373.53 13.28 99.14 0.00 99.14 8.10 94.00
sdh2 0.00 0.00 0.00 108.00 0.00 16.54 313.70 10.18 83.78 0.00 83.78 8.70 94.00
sdj2 0.00 0.00 0.00 123.00 0.00 16.71 278.28 12.06 85.27 0.00 85.27 8.10 99.60
sdn2 0.00 0.00 0.00 125.00 0.00 18.32 300.21 13.86 100.10 0.00 100.10 7.97 99.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 20.76 334.79 15.50 128.13 0.00 128.13 7.87 100.00
sde2 0.00 0.00 0.00 124.00 0.00 23.00 379.83 23.22 168.10 0.00 168.10 8.06 100.00
sdg2 0.00 0.00 0.00 119.00 0.00 20.01 344.45 10.75 101.41 0.00 101.41 8.40 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 21.10 345.64 10.93 92.29 0.00 92.29 8.00 100.00
sdj2 0.00 0.00 0.00 127.00 0.00 23.42 377.60 12.69 107.40 0.00 107.40 7.87 100.00
sdn2 0.00 0.00 0.00 129.00 0.00 21.18 336.29 18.00 146.23 0.00 146.23 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 116.00 0.00 21.23 374.84 17.29 129.48 0.00 129.48 8.62 100.00
sde2 0.00 0.00 0.00 135.00 0.00 24.91 377.84 24.00 193.39 0.00 193.39 7.41 100.00
sdg2 0.00 0.00 0.00 112.00 0.00 18.44 337.15 11.63 106.18 0.00 106.18 8.75 98.00
sdh2 0.00 0.00 0.00 130.00 0.00 24.74 389.77 18.72 139.29 0.00 139.29 7.69 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 22.20 347.08 16.94 122.26 0.00 122.26 7.63 100.00
sdn2 0.00 0.00 0.00 125.00 0.00 19.38 317.56 13.50 113.92 0.00 113.92 8.00 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 136.00 0.00 21.04 316.79 16.15 115.71 0.00 115.71 7.35 100.00
sde2 0.00 0.00 0.00 131.00 0.00 23.11 361.23 25.22 182.84 0.00 182.84 7.63 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 21.66 352.04 15.22 108.10 0.00 108.10 7.94 100.00
sdh2 0.00 0.00 0.00 137.00 0.00 24.80 370.68 21.10 128.09 0.00 128.09 7.30 100.00
sdj2 0.00 0.00 0.00 141.00 0.00 25.01 363.33 21.74 131.72 0.00 131.72 7.09 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 21.06 319.45 19.02 116.56 0.00 116.56 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 20.44 319.57 23.32 176.46 0.00 176.46 7.63 100.00
sde2 0.00 0.00 0.00 146.00 0.00 22.82 320.10 23.16 155.26 0.00 155.26 6.85 100.00
sdg2 0.00 0.00 0.00 122.00 0.00 16.96 284.68 10.82 97.80 0.00 97.80 8.20 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 17.01 278.63 18.94 164.64 0.00 164.64 8.00 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 18.65 293.81 22.69 193.72 0.00 193.72 7.69 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 19.68 314.89 15.67 140.06 0.00 140.06 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 19.45 313.63 24.13 187.40 0.00 187.40 7.87 100.00
sde2 0.00 0.00 0.00 127.00 0.00 18.71 301.77 23.18 188.28 0.00 188.28 7.87 100.00
sdg2 0.00 0.00 0.00 132.00 0.00 20.03 310.70 17.61 124.39 0.00 124.39 7.58 100.00
sdh2 0.00 0.00 0.00 124.00 0.00 18.93 312.71 16.62 144.23 0.00 144.23 8.06 100.00
sdj2 0.00 0.00 0.00 136.00 0.00 20.36 306.63 25.56 163.24 0.00 163.24 7.35 100.00
sdn2 0.00 0.00 0.00 131.00 0.00 17.96 280.82 18.82 143.15 0.00 143.15 7.63 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 134.00 0.00 19.50 297.98 16.82 149.97 0.00 149.97 7.46 100.00
sde2 0.00 0.00 0.00 137.00 0.00 19.73 294.96 18.98 146.48 0.00 146.48 7.30 100.00
sdg2 0.00 0.00 0.00 127.00 0.00 14.88 239.92 20.76 141.10 0.00 141.10 7.87 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 17.95 276.45 16.96 123.19 0.00 123.19 7.52 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 16.44 258.93 26.18 242.68 0.00 242.68 7.69 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 15.67 246.90 15.30 116.52 0.00 116.52 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 121.00 0.00 17.08 289.05 10.93 83.90 0.00 83.90 8.26 100.00
sde2 0.00 0.00 0.00 127.00 0.00 17.38 280.27 17.78 135.24 0.00 135.24 7.87 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 18.22 284.82 19.19 170.60 0.00 170.60 7.63 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 18.44 307.10 16.04 139.32 0.00 139.32 8.13 100.00
sdj2 0.00 0.00 0.00 143.00 0.00 21.14 302.73 21.30 128.03 0.00 128.03 6.99 100.00
sdn2 0.00 0.00 0.00 121.00 0.00 16.75 283.42 13.85 112.33 0.00 112.33 8.26 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 19.77 306.70 18.12 116.97 0.00 116.97 7.58 100.00
sde2 0.00 0.00 0.00 122.00 0.00 21.57 362.02 14.26 129.15 0.00 129.15 8.20 100.00
sdg2 0.00 0.00 0.00 130.00 0.00 21.36 336.48 17.70 115.94 0.00 115.94 7.69 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 19.90 325.98 19.36 127.30 0.00 127.30 8.00 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 20.93 329.69 19.56 127.08 0.00 127.08 7.69 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 23.99 350.99 17.40 122.66 0.00 122.66 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 22.66 348.86 18.05 134.86 0.00 134.86 7.52 100.00
sde2 0.00 0.00 0.00 149.00 0.00 24.45 336.09 20.86 119.87 0.00 119.87 6.71 100.00
sdg2 0.00 0.00 0.00 141.00 0.00 25.28 367.22 26.04 193.87 0.00 193.87 7.09 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 20.08 309.14 19.64 166.80 0.00 166.80 7.52 100.00
sdj2 0.00 0.00 0.00 142.00 0.00 23.35 336.82 26.75 201.27 0.00 201.27 7.04 100.00
sdn2 0.00 0.00 0.00 149.00 0.00 24.65 338.84 17.18 107.76 0.00 107.76 6.71 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 20.96 322.72 15.55 135.61 0.00 135.61 7.52 100.00
sde2 0.00 0.00 0.00 134.00 0.00 19.92 304.37 18.32 156.96 0.00 156.96 7.46 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 21.36 314.71 15.01 122.50 0.00 122.50 7.19 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 18.01 290.45 8.15 81.39 0.00 81.39 7.87 100.00
sdj2 0.00 0.00 0.00 142.00 0.00 20.78 299.68 24.65 193.72 0.00 193.72 7.04 100.00
sdn2 0.00 0.00 0.00 144.00 0.00 22.11 314.48 21.65 149.83 0.00 149.83 6.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 105.00 0.00 17.24 336.25 11.22 114.82 0.00 114.82 7.70 80.80
sde2 0.00 0.00 0.00 214.00 0.00 14.08 134.70 70.53 163.36 0.00 163.36 4.67 100.00
sdg2 0.00 0.00 0.00 156.00 0.00 15.52 203.75 70.80 190.10 0.00 190.10 6.41 100.00
sdh2 0.00 0.00 0.00 99.00 0.00 15.01 310.49 8.31 84.28 0.00 84.28 8.53 84.40
sdj2 0.00 0.00 0.00 175.00 0.00 15.99 187.09 56.43 136.11 0.00 136.11 5.71 100.00
sdn2 0.00 0.00 0.00 155.00 0.00 18.73 247.53 32.34 120.18 0.00 120.18 6.45 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 273.00 0.00 2.01 15.06 144.27 325.64 0.00 325.64 3.66 100.00
sde2 0.00 0.00 0.00 169.00 0.00 2.96 35.83 148.24 655.86 0.00 655.86 5.92 100.00
sdg2 0.00 0.00 0.00 184.00 0.00 2.02 22.43 146.72 643.09 0.00 643.09 5.43 100.00
sdh2 0.00 0.00 0.00 219.00 0.00 1.69 15.82 127.44 287.80 0.00 287.80 4.02 88.00
sdj2 0.00 0.00 0.00 171.00 0.00 3.62 43.42 146.46 664.28 0.00 664.28 5.87 100.40
sdn2 0.00 0.00 0.00 222.00 0.00 1.69 15.59 144.95 407.62 0.00 407.62 4.52 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 157.00 0.00 1.27 16.54 144.16 846.52 0.00 846.52 6.37 100.00
sde2 0.00 0.00 0.00 167.00 0.00 1.34 16.49 144.77 947.14 0.00 947.14 6.01 100.40
sdg2 0.00 0.00 0.00 165.00 0.00 1.27 15.72 144.35 922.08 0.00 922.08 6.08 100.40
sdh2 0.00 0.00 0.00 163.00 0.00 1.21 15.18 143.38 877.72 0.00 877.72 6.13 100.00
sdj2 0.00 0.00 0.00 149.00 0.00 1.06 14.56 143.97 989.72 0.00 989.72 6.71 100.00
sdn2 0.00 0.00 0.00 169.00 0.00 1.31 15.88 144.36 913.49 0.00 913.49 5.92 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.21 14.84 144.14 894.99 0.00 894.99 5.99 100.00
sde2 0.00 0.00 0.00 167.00 0.00 1.30 15.90 143.32 853.37 0.00 853.37 5.96 99.60
sdg2 0.00 0.00 0.00 177.00 0.00 1.32 15.26 143.00 841.29 0.00 841.29 5.63 99.60
sdh2 0.00 0.00 0.00 176.00 0.00 1.31 15.20 143.55 860.00 0.00 860.00 5.66 99.60
sdj2 0.00 0.00 0.00 166.00 0.00 1.35 16.64 142.95 864.39 0.00 864.39 6.00 99.60
sdn2 0.00 0.00 0.00 169.00 0.00 1.21 14.72 142.84 844.83 0.00 844.83 5.89 99.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 140.00 0.00 1.04 15.20 142.32 901.23 0.00 901.23 7.17 100.40
sde2 0.00 0.00 0.00 173.00 0.00 1.38 16.29 144.14 850.61 0.00 850.61 5.80 100.40
sdg2 0.00 0.00 0.00 168.00 0.00 1.28 15.61 145.06 819.52 0.00 819.52 5.98 100.40
sdh2 0.00 0.00 0.00 174.00 0.00 1.30 15.32 143.72 788.74 0.00 788.74 5.77 100.40
sdj2 0.00 0.00 0.00 171.00 0.00 1.33 15.93 144.83 873.80 0.00 873.80 5.87 100.40
sdn2 0.00 0.00 0.00 167.00 0.00 1.29 15.77 143.65 859.90 0.00 859.90 6.01 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 165.00 0.00 1.23 15.28 143.30 957.07 0.00 957.07 6.06 100.00
sde2 0.00 0.00 0.00 167.00 0.00 1.38 16.97 143.60 848.50 0.00 848.50 5.99 100.00
sdg2 0.00 0.00 0.00 160.00 0.00 1.16 14.79 144.76 875.95 0.00 875.95 6.25 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.31 15.60 143.84 852.84 0.00 852.84 5.81 100.00
sdj2 0.00 0.00 0.00 171.00 0.00 1.31 15.74 143.66 817.10 0.00 817.10 5.85 100.00
sdn2 0.00 0.00 0.00 160.00 0.00 1.24 15.86 143.42 857.45 0.00 857.45 6.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.17 14.65 143.33 883.56 0.00 883.56 6.10 100.00
sde2 0.00 0.00 0.00 172.00 0.00 1.25 14.94 144.48 838.35 0.00 838.35 5.81 100.00
sdg2 0.00 0.00 0.00 174.00 0.00 1.30 15.30 144.29 883.40 0.00 883.40 5.75 100.00
sdh2 0.00 0.00 0.00 170.00 0.00 1.31 15.74 143.66 842.59 0.00 842.59 5.88 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.32 16.20 143.00 858.30 0.00 858.30 5.99 100.00
sdn2 0.00 0.00 0.00 163.00 0.00 1.22 15.33 142.92 914.01 0.00 914.01 6.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.11 14.00 142.78 873.10 0.00 873.10 6.11 99.60
sde2 0.00 0.00 0.00 155.00 0.00 1.19 15.75 143.15 873.01 0.00 873.01 6.43 99.60
sdg2 0.00 0.00 0.00 174.00 0.00 1.27 14.91 143.74 840.02 0.00 840.02 5.72 99.60
sdh2 0.00 0.00 0.00 169.00 0.00 1.26 15.25 143.27 850.96 0.00 850.96 5.92 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.24 15.20 143.46 856.57 0.00 856.57 5.99 100.00
sdn2 0.00 0.00 0.00 171.00 0.00 1.27 15.23 144.01 862.64 0.00 862.64 5.85 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.34 16.41 144.45 847.38 0.00 847.38 6.01 100.40
sde2 0.00 0.00 0.00 153.00 0.00 1.12 15.03 145.81 985.20 0.00 985.20 6.56 100.40
sdg2 0.00 0.00 0.00 162.00 0.00 1.65 20.80 146.76 850.74 0.00 850.74 6.20 100.40
sdh2 0.00 0.00 0.00 171.00 0.00 1.31 15.65 142.97 840.40 0.00 840.40 5.85 100.00
sdj2 0.00 0.00 0.00 164.00 0.00 1.33 16.60 145.11 881.90 0.00 881.90 6.10 100.00
sdn2 0.00 0.00 0.00 160.00 0.00 1.26 16.11 142.93 840.83 0.00 840.83 6.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 178.00 0.00 1.47 16.91 143.89 841.53 0.00 841.53 5.62 100.00
sde2 0.00 0.00 0.00 177.00 0.00 1.37 15.88 143.51 817.83 0.00 817.83 5.65 100.00
sdg2 0.00 0.00 0.00 151.00 0.00 2.25 30.57 146.62 912.42 0.00 912.42 6.62 100.00
sdh2 0.00 0.00 0.00 161.00 0.00 1.22 15.47 143.39 864.05 0.00 864.05 6.21 100.00
sdj2 0.00 0.00 0.00 158.00 0.00 1.37 17.79 144.70 882.46 0.00 882.46 6.33 100.00
sdn2 0.00 0.00 0.00 166.00 0.00 1.41 17.41 144.22 884.34 0.00 884.34 6.02 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 150.00 0.00 1.11 15.17 144.25 902.35 0.00 902.35 6.67 100.00
sde2 0.00 0.00 0.00 161.00 0.00 1.24 15.80 144.80 882.19 0.00 882.19 6.21 100.00
sdg2 0.00 0.00 0.00 155.00 0.00 1.15 15.17 144.96 961.39 0.00 961.39 6.45 100.00
sdh2 0.00 0.00 0.00 171.00 0.00 1.32 15.79 143.90 858.11 0.00 858.11 5.85 100.00
sdj2 0.00 0.00 0.00 163.00 0.00 1.23 15.48 144.75 896.10 0.00 896.10 6.13 100.00
sdn2 0.00 0.00 0.00 168.00 0.00 1.31 15.93 143.50 862.64 0.00 862.64 5.95 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 165.00 0.00 1.26 15.65 142.65 920.68 0.00 920.68 6.06 100.00
sde2 0.00 0.00 0.00 135.00 0.00 0.90 13.70 118.98 970.58 0.00 970.58 7.41 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.50 18.58 107.22 909.82 0.00 909.82 6.06 100.00
sdh2 0.00 0.00 0.00 162.00 0.00 1.22 15.43 91.70 865.11 0.00 865.11 6.17 100.00
sdj2 0.00 0.00 0.00 154.00 0.00 1.24 16.44 141.04 928.18 0.00 928.18 6.49 100.00
sdn2 0.00 0.00 0.00 151.00 0.00 1.10 14.99 134.11 904.50 0.00 904.50 6.62 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 150.00 0.00 6.77 92.39 63.77 793.87 0.00 793.87 6.67 100.00
sde2 0.00 0.00 0.00 120.00 0.00 8.79 150.01 23.53 642.33 0.00 642.33 8.27 99.20
sdg2 0.00 0.00 0.00 117.00 0.00 9.44 165.17 10.90 325.61 0.00 325.61 8.00 93.60
sdh2 0.00 0.00 0.00 98.00 0.00 10.19 212.97 4.70 176.65 0.00 176.65 8.49 83.20
sdj2 0.00 0.00 0.00 147.00 0.00 8.05 112.20 58.70 816.19 0.00 816.19 6.80 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 9.92 145.18 39.42 705.20 0.00 705.20 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 23.05 354.91 14.22 126.38 0.00 126.38 7.52 100.00
sde2 0.00 0.00 0.00 129.00 0.00 20.61 327.13 21.01 99.69 0.00 99.69 7.75 100.00
sdg2 0.00 0.00 0.00 129.00 0.00 19.42 308.39 12.29 89.67 0.00 89.67 7.75 100.00
sdh2 0.00 0.00 0.00 103.00 0.00 16.43 326.78 7.57 63.18 0.00 63.18 9.32 96.00
sdj2 0.00 0.00 0.00 132.00 0.00 24.04 373.06 21.95 170.33 0.00 170.33 7.58 100.00
sdn2 0.00 0.00 0.00 129.00 0.00 21.29 337.92 16.21 122.60 0.00 122.60 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 110.00 0.00 19.49 362.85 9.10 92.55 0.00 92.55 8.80 96.80
sde2 0.00 0.00 0.00 118.00 0.00 21.80 378.41 32.56 323.49 0.00 323.49 8.47 100.00
sdg2 0.00 0.00 0.00 112.00 0.00 18.41 336.61 7.32 75.75 0.00 75.75 8.21 92.00
sdh2 0.00 0.00 0.00 99.00 0.00 13.89 287.41 5.05 61.94 0.00 61.94 8.53 84.40
sdj2 0.00 0.00 0.00 118.00 0.00 20.08 348.50 11.22 121.76 0.00 121.76 8.44 99.60
sdn2 0.00 0.00 0.00 126.00 0.00 23.25 377.85 10.74 91.81 0.00 91.81 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 102.00 0.00 18.02 361.73 9.50 80.24 0.00 80.24 9.80 100.00
sde2 0.00 0.00 0.00 135.00 0.00 21.85 331.40 20.46 143.29 0.00 143.29 7.41 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 24.12 377.08 14.04 81.74 0.00 81.74 7.63 100.00
sdh2 0.00 0.00 0.00 116.00 0.00 16.78 296.32 11.10 57.66 0.00 57.66 8.62 100.00
sdj2 0.00 0.00 0.00 124.00 0.00 21.92 362.03 9.44 57.58 0.00 57.58 8.06 100.00
sdn2 0.00 0.00 0.00 124.00 0.00 20.28 334.91 11.16 67.39 0.00 67.39 8.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 22.53 349.62 16.17 126.73 0.00 126.73 7.58 100.00
sde2 0.00 0.00 0.00 141.00 0.00 25.19 365.94 18.44 153.11 0.00 153.11 7.09 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 21.52 358.33 19.89 176.59 0.00 176.59 8.13 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 21.67 333.75 23.48 205.62 0.00 205.62 7.52 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 24.22 356.90 17.21 133.73 0.00 133.73 7.19 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 20.08 321.25 18.01 150.53 0.00 150.53 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 20.33 320.20 27.23 182.34 0.00 182.34 7.69 100.00
sde2 0.00 0.00 0.00 127.00 0.00 21.50 346.73 20.28 145.13 0.00 145.13 7.87 100.00
sdg2 0.00 0.00 0.00 134.00 0.00 24.30 371.38 22.36 148.90 0.00 148.90 7.46 100.00
sdh2 0.00 0.00 0.00 142.00 0.00 22.55 325.24 21.57 137.77 0.00 137.77 7.04 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 21.33 330.93 23.68 167.67 0.00 167.67 7.58 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 21.39 329.30 15.11 114.83 0.00 114.83 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 24.99 390.65 25.16 205.47 0.00 205.47 7.63 100.00
sde2 0.00 0.00 0.00 134.00 0.00 24.27 370.96 22.38 158.24 0.00 158.24 7.46 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 21.90 337.16 19.22 163.79 0.00 163.79 7.52 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 22.83 351.62 22.60 173.17 0.00 173.17 7.52 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 19.79 329.47 20.29 147.74 0.00 147.74 8.13 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 23.06 368.97 15.01 119.91 0.00 119.91 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 23.95 357.98 12.97 101.49 0.00 101.49 7.30 100.00
sde2 0.00 0.00 0.00 123.00 0.00 20.34 338.66 19.29 163.93 0.00 163.93 8.13 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 22.69 363.05 16.17 107.00 0.00 107.00 7.81 100.00
sdh2 0.00 0.00 0.00 130.00 0.00 20.94 329.96 19.55 149.78 0.00 149.78 7.69 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 22.27 350.85 15.62 140.28 0.00 140.28 7.69 100.00
sdn2 0.00 0.00 0.00 124.00 0.00 20.41 337.05 14.02 94.77 0.00 94.77 8.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 20.55 336.62 18.17 131.49 0.00 131.49 8.00 100.00
sde2 0.00 0.00 0.00 133.00 0.00 24.10 371.11 13.64 114.20 0.00 114.20 7.52 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 24.69 363.78 19.24 152.66 0.00 152.66 7.19 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 25.42 388.55 17.74 124.99 0.00 124.99 7.46 100.00
sdj2 0.00 0.00 0.00 121.00 0.00 20.21 342.11 10.95 99.90 0.00 99.90 8.26 100.00
sdn2 0.00 0.00 0.00 129.00 0.00 22.94 364.19 23.48 163.94 0.00 163.94 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 23.54 376.61 18.47 165.91 0.00 165.91 7.81 100.00
sde2 0.00 0.00 0.00 126.00 0.00 21.28 345.84 23.10 149.14 0.00 149.14 7.94 100.00
sdg2 0.00 0.00 0.00 140.00 0.00 23.78 347.94 18.31 122.74 0.00 122.74 7.14 100.00
sdh2 0.00 0.00 0.00 135.00 0.00 25.75 390.64 23.41 191.91 0.00 191.91 7.41 100.00
sdj2 0.00 0.00 0.00 125.00 0.00 23.37 382.97 26.70 197.66 0.00 197.66 8.00 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 24.06 351.90 22.62 179.89 0.00 179.89 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 129.00 0.00 19.59 310.96 14.01 93.43 0.00 93.43 7.75 100.00
sde2 0.00 0.00 0.00 130.00 0.00 21.92 345.38 24.22 198.92 0.00 198.92 7.69 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 23.73 371.01 19.93 154.87 0.00 154.87 7.63 100.00
sdh2 0.00 0.00 0.00 135.00 0.00 23.44 355.59 16.74 114.01 0.00 114.01 7.41 100.00
sdj2 0.00 0.00 0.00 122.00 0.00 19.56 328.42 15.12 135.48 0.00 135.48 8.20 100.00
sdn2 0.00 0.00 0.00 127.00 0.00 23.76 383.09 19.10 155.18 0.00 155.18 7.87 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 24.46 382.46 24.71 177.95 0.00 177.95 7.63 100.00
sde2 0.00 0.00 0.00 138.00 0.00 21.10 313.12 23.14 184.72 0.00 184.72 7.25 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 21.26 340.12 29.96 200.59 0.00 200.59 7.81 100.00
sdh2 0.00 0.00 0.00 130.00 0.00 22.92 361.09 17.73 147.85 0.00 147.85 7.69 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 20.50 318.02 15.49 122.58 0.00 122.58 7.58 100.00
sdn2 0.00 0.00 0.00 145.00 0.00 26.78 378.24 18.61 137.27 0.00 137.27 6.90 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 108.00 0.00 18.39 348.70 11.84 138.33 0.00 138.33 8.19 88.40
sde2 0.00 0.00 0.00 128.00 0.00 21.18 338.93 14.99 120.50 0.00 120.50 7.81 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 22.50 374.63 14.97 168.00 0.00 168.00 8.13 100.00
sdh2 0.00 0.00 0.00 111.00 0.00 17.13 316.14 9.61 80.18 0.00 80.18 8.22 91.20
sdj2 0.00 0.00 0.00 108.00 0.00 17.15 325.18 12.34 113.56 0.00 113.56 8.81 95.20
sdn2 0.00 0.00 0.00 126.00 0.00 20.35 330.75 12.50 99.75 0.00 99.75 7.90 99.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 24.01 372.54 18.50 132.24 0.00 132.24 7.58 100.00
sde2 0.00 0.00 0.00 149.00 0.00 27.17 373.43 22.40 114.47 0.00 114.47 6.71 100.00
sdg2 0.00 0.00 0.00 132.00 0.00 23.27 360.99 16.82 128.52 0.00 128.52 7.58 100.00
sdh2 0.00 0.00 0.00 143.00 0.00 26.52 379.82 18.92 135.66 0.00 135.66 6.99 100.00
sdj2 0.00 0.00 0.00 147.00 0.00 32.63 454.56 20.64 122.18 0.00 122.18 6.80 100.00
sdn2 0.00 0.00 0.00 143.00 0.00 28.09 402.34 19.40 126.01 0.00 126.01 6.99 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 134.00 0.00 23.71 362.32 21.40 155.97 0.00 155.97 7.46 100.00
sde2 0.00 0.00 0.00 137.00 0.00 26.36 394.02 23.75 207.71 0.00 207.71 7.30 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 21.70 339.27 15.92 110.44 0.00 110.44 7.63 100.00
sdh2 0.00 0.00 0.00 114.00 0.00 18.40 330.50 6.71 63.93 0.00 63.93 8.67 98.80
sdj2 0.00 0.00 0.00 133.00 0.00 22.35 344.15 19.84 168.06 0.00 168.06 7.52 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 21.16 325.88 18.91 142.56 0.00 142.56 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 21.66 336.11 17.64 145.52 0.00 145.52 7.58 100.00
sde2 0.00 0.00 0.00 128.00 0.00 21.41 342.55 22.36 181.06 0.00 181.06 7.81 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 22.38 349.86 21.68 147.76 0.00 147.76 7.63 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 20.03 306.19 17.16 109.52 0.00 109.52 7.46 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 21.18 333.65 14.66 110.68 0.00 110.68 7.69 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 20.17 322.71 16.92 140.62 0.00 140.62 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 19.15 297.17 20.92 143.76 0.00 143.76 7.58 100.00
sde2 0.00 0.00 0.00 124.00 0.00 17.93 296.16 33.18 222.16 0.00 222.16 8.06 100.00
sdg2 0.00 0.00 0.00 136.00 0.00 20.70 311.76 21.61 185.18 0.00 185.18 7.35 100.00
sdh2 0.00 0.00 0.00 129.00 0.00 16.75 265.98 19.26 155.81 0.00 155.81 7.75 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 18.90 286.64 19.28 138.81 0.00 138.81 7.41 100.00
sdn2 0.00 0.00 0.00 126.00 0.00 20.43 332.11 16.87 139.52 0.00 139.52 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 20.38 304.59 16.68 141.20 0.00 141.20 7.30 100.00
sde2 0.00 0.00 0.00 136.00 0.00 20.39 307.05 25.13 228.76 0.00 228.76 7.35 100.00
sdg2 0.00 0.00 0.00 127.00 0.00 19.29 311.03 17.06 128.47 0.00 128.47 7.87 100.00
sdh2 0.00 0.00 0.00 128.00 0.00 20.50 328.00 18.12 140.97 0.00 140.97 7.81 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 20.54 321.07 20.92 151.69 0.00 151.69 7.63 100.00
sdn2 0.00 0.00 0.00 121.00 0.00 16.22 274.60 11.15 87.93 0.00 87.93 8.26 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 103.00 0.00 12.74 253.34 7.07 69.83 0.00 69.83 8.19 84.40
sde2 0.00 0.00 0.00 168.00 0.00 18.01 219.51 67.18 157.71 0.00 157.71 5.95 100.00
sdg2 0.00 0.00 0.00 176.00 0.00 15.42 179.43 60.51 164.18 0.00 164.18 5.68 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 17.57 292.50 19.08 167.97 0.00 167.97 8.13 100.00
sdj2 0.00 0.00 0.00 148.00 0.00 14.57 201.57 39.86 123.24 0.00 123.24 6.76 100.00
sdn2 0.00 0.00 0.00 127.00 0.00 17.68 285.13 28.76 145.10 0.00 145.10 7.87 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 239.00 0.00 1.80 15.40 133.64 308.69 0.00 308.69 3.88 92.80
sde2 0.00 0.00 0.00 173.00 0.00 3.59 42.46 148.63 698.08 0.00 698.08 5.78 100.00
sdg2 0.00 0.00 0.00 179.00 0.00 2.89 33.09 146.32 637.65 0.00 637.65 5.59 100.00
sdh2 0.00 0.00 0.00 224.00 0.00 1.76 16.11 115.36 204.30 0.00 204.30 3.79 84.80
sdj2 0.00 0.00 0.00 207.00 0.00 2.83 27.95 144.96 494.30 0.00 494.30 4.83 100.00
sdn2 0.00 0.00 0.00 211.00 0.00 2.41 23.40 147.76 473.18 0.00 473.18 4.74 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 162.00 0.00 1.24 15.67 143.48 879.98 0.00 879.98 6.17 100.00
sde2 0.00 0.00 0.00 170.00 0.00 1.71 20.55 144.03 902.24 0.00 902.24 5.88 100.00
sdg2 0.00 0.00 0.00 166.00 0.00 1.26 15.60 144.26 878.67 0.00 878.67 6.02 100.00
sdh2 0.00 0.00 0.00 161.00 0.00 1.22 15.50 143.41 933.09 0.00 933.09 6.21 100.00
sdj2 0.00 0.00 0.00 170.00 0.00 1.32 15.94 142.92 876.78 0.00 876.78 5.88 100.00
sdn2 0.00 0.00 0.00 168.00 0.00 1.26 15.32 143.99 837.26 0.00 837.26 5.95 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 159.00 0.00 1.24 16.01 143.23 871.72 0.00 871.72 6.29 100.00
sde2 0.00 0.00 0.00 177.00 0.00 1.48 17.08 145.62 808.54 0.00 808.54 5.65 100.00
sdg2 0.00 0.00 0.00 172.00 0.00 1.31 15.55 143.82 858.84 0.00 858.84 5.81 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.34 15.93 144.01 848.95 0.00 848.95 5.81 100.00
sdj2 0.00 0.00 0.00 158.00 0.00 1.16 15.04 143.06 869.34 0.00 869.34 6.33 100.00
sdn2 0.00 0.00 0.00 176.00 0.00 1.35 15.73 144.16 837.00 0.00 837.00 5.68 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.24 15.49 143.07 886.15 0.00 886.15 6.10 100.00
sde2 0.00 0.00 0.00 161.00 0.00 1.69 21.43 144.88 880.10 0.00 880.10 6.21 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 2.46 30.77 147.09 846.41 0.00 846.41 6.10 100.00
sdh2 0.00 0.00 0.00 169.00 0.00 1.27 15.44 143.33 832.33 0.00 832.33 5.92 100.00
sdj2 0.00 0.00 0.00 172.00 0.00 1.33 15.85 142.58 861.70 0.00 861.70 5.81 100.00
sdn2 0.00 0.00 0.00 164.00 0.00 1.25 15.62 144.62 874.29 0.00 874.29 6.10 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 160.00 0.00 1.13 14.41 143.56 890.52 0.00 890.52 6.25 100.00
sde2 0.00 0.00 0.00 170.00 0.00 1.36 16.43 145.02 886.47 0.00 886.47 5.88 100.00
sdg2 0.00 0.00 0.00 160.00 0.00 1.65 21.13 144.56 906.42 0.00 906.42 6.25 100.00
sdh2 0.00 0.00 0.00 165.00 0.00 1.19 14.80 143.24 891.18 0.00 891.18 6.06 100.00
sdj2 0.00 0.00 0.00 166.00 0.00 1.28 15.77 143.66 881.49 0.00 881.49 6.02 100.00
sdn2 0.00 0.00 0.00 170.00 0.00 1.40 16.82 143.60 839.81 0.00 839.81 5.88 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 166.00 0.00 1.26 15.59 143.20 876.02 0.00 876.02 6.02 100.00
sde2 0.00 0.00 0.00 173.00 0.00 1.31 15.53 144.36 842.34 0.00 842.34 5.78 100.00
sdg2 0.00 0.00 0.00 173.00 0.00 1.56 18.51 144.47 880.55 0.00 880.55 5.78 100.00
sdh2 0.00 0.00 0.00 167.00 0.00 1.30 15.92 143.58 832.67 0.00 832.67 5.99 100.00
sdj2 0.00 0.00 0.00 161.00 0.00 1.26 16.01 143.90 854.24 0.00 854.24 6.21 100.00
sdn2 0.00 0.00 0.00 176.00 0.00 1.35 15.73 142.78 838.09 0.00 838.09 5.68 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 158.00 0.00 1.17 15.15 143.70 880.35 0.00 880.35 6.33 100.00
sde2 0.00 0.00 0.00 171.00 0.00 1.33 15.93 143.66 823.65 0.00 823.65 5.85 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.41 17.21 144.50 850.07 0.00 850.07 5.95 100.00
sdh2 0.00 0.00 0.00 179.00 0.00 1.40 16.02 144.45 837.56 0.00 837.56 5.59 100.00
sdj2 0.00 0.00 0.00 170.00 0.00 1.29 15.49 144.40 866.38 0.00 866.38 5.88 100.00
sdn2 0.00 0.00 0.00 167.00 0.00 1.25 15.39 143.52 831.02 0.00 831.02 5.99 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 168.00 0.00 1.27 15.44 143.05 883.71 0.00 883.71 5.95 100.00
sde2 0.00 0.00 0.00 159.00 0.00 1.20 15.40 144.28 865.38 0.00 865.38 6.29 100.00
sdg2 0.00 0.00 0.00 163.00 0.00 1.30 16.32 144.14 876.79 0.00 876.79 6.13 100.00
sdh2 0.00 0.00 0.00 166.00 0.00 1.23 15.19 143.14 848.29 0.00 848.29 6.02 100.00
sdj2 0.00 0.00 0.00 170.00 0.00 1.26 15.19 143.23 850.68 0.00 850.68 5.88 100.00
sdn2 0.00 0.00 0.00 161.00 0.00 1.26 16.07 143.79 875.16 0.00 875.16 6.21 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 175.00 0.00 1.30 15.23 143.82 837.23 0.00 837.23 5.71 100.00
sde2 0.00 0.00 0.00 165.00 0.00 1.27 15.75 143.86 916.15 0.00 916.15 6.06 100.00
sdg2 0.00 0.00 0.00 166.00 0.00 1.30 15.99 143.02 868.41 0.00 868.41 6.02 100.00
sdh2 0.00 0.00 0.00 163.00 0.00 1.23 15.42 142.83 865.57 0.00 865.57 6.13 100.00
sdj2 0.00 0.00 0.00 170.00 0.00 1.23 14.81 143.22 838.12 0.00 838.12 5.88 100.00
sdn2 0.00 0.00 0.00 171.00 0.00 1.32 15.81 143.33 869.82 0.00 869.82 5.85 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.17 14.75 144.23 868.76 0.00 868.76 6.13 100.00
sde2 0.00 0.00 0.00 162.00 0.00 1.25 15.80 146.84 832.94 0.00 832.94 6.17 100.00
sdg2 0.00 0.00 0.00 167.00 0.00 1.31 16.12 145.09 861.60 0.00 861.60 5.99 100.00
sdh2 0.00 0.00 0.00 162.00 0.00 1.17 14.83 143.37 884.30 0.00 884.30 6.17 100.00
sdj2 0.00 0.00 0.00 162.00 0.00 1.24 15.72 144.46 854.74 0.00 854.74 6.17 100.00
sdn2 0.00 0.00 0.00 170.00 0.00 1.28 15.38 143.29 846.33 0.00 846.33 5.88 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.31 16.06 133.51 851.16 0.00 851.16 5.99 100.00
sde2 0.00 0.00 0.00 153.00 0.00 2.25 30.14 148.72 963.19 0.00 963.19 6.54 100.00
sdg2 0.00 0.00 0.00 153.00 0.00 1.87 25.01 143.16 904.16 0.00 904.16 6.54 100.00
sdh2 0.00 0.00 0.00 170.00 0.00 1.31 15.74 147.11 860.19 0.00 860.19 5.88 100.00
sdj2 0.00 0.00 0.00 151.00 0.00 1.75 23.70 143.10 933.51 0.00 933.51 6.62 100.00
sdn2 0.00 0.00 0.00 163.00 0.00 1.24 15.55 146.76 862.67 0.00 862.67 6.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 129.00 0.00 7.35 116.72 25.18 600.09 0.00 600.09 7.75 100.00
sde2 0.00 0.00 0.00 156.00 0.00 1.93 25.29 105.88 973.38 0.00 973.38 6.41 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 3.87 62.92 45.49 884.32 0.00 884.32 7.94 100.00
sdh2 0.00 0.00 0.00 157.00 0.00 1.95 25.48 89.74 921.89 0.00 921.89 6.37 100.00
sdj2 0.00 0.00 0.00 148.00 0.00 5.06 70.04 48.89 796.46 0.00 796.46 6.76 100.00
sdn2 0.00 0.00 0.00 153.00 0.00 1.70 22.70 86.38 930.46 0.00 930.46 6.54 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 114.00 0.00 15.60 280.20 9.94 75.37 0.00 75.37 8.77 100.00
sde2 0.00 0.00 0.00 138.00 0.00 15.43 228.95 33.62 423.22 0.00 423.22 7.25 100.00
sdg2 0.00 0.00 0.00 120.00 0.00 18.60 317.46 19.62 156.37 0.00 156.37 8.33 100.00
sdh2 0.00 0.00 0.00 126.00 0.00 19.38 314.94 16.96 186.57 0.00 186.57 7.94 100.00
sdj2 0.00 0.00 0.00 119.00 0.00 17.93 308.57 9.89 67.93 0.00 67.93 8.40 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 20.83 320.83 23.76 218.35 0.00 218.35 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 21.55 344.73 16.47 130.75 0.00 130.75 7.81 100.00
sde2 0.00 0.00 0.00 127.00 0.00 21.28 343.15 19.14 153.92 0.00 153.92 7.87 100.00
sdg2 0.00 0.00 0.00 130.00 0.00 22.44 353.56 17.43 152.58 0.00 152.58 7.69 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 16.39 264.28 24.32 148.85 0.00 148.85 7.87 100.00
sdj2 0.00 0.00 0.00 118.00 0.00 19.34 335.73 20.43 153.97 0.00 153.97 8.47 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 20.51 323.15 15.66 142.12 0.00 142.12 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 15.67 250.72 14.57 104.75 0.00 104.75 7.81 100.00
sde2 0.00 0.00 0.00 134.00 0.00 21.10 322.55 16.73 138.48 0.00 138.48 7.46 100.00
sdg2 0.00 0.00 0.00 121.00 0.00 13.73 232.31 12.84 102.08 0.00 102.08 8.26 100.00
sdh2 0.00 0.00 0.00 129.00 0.00 19.94 316.55 26.98 230.95 0.00 230.95 7.75 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 17.27 287.53 17.46 171.84 0.00 171.84 8.13 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 20.55 311.76 20.53 145.84 0.00 145.84 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 23.18 359.64 23.18 172.09 0.00 172.09 7.58 100.00
sde2 0.00 0.00 0.00 129.00 0.00 23.68 375.98 16.02 111.41 0.00 111.41 7.75 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 17.10 277.91 24.34 148.44 0.00 148.44 7.94 100.00
sdh2 0.00 0.00 0.00 138.00 0.00 23.99 356.04 23.50 173.07 0.00 173.07 7.25 100.00
sdj2 0.00 0.00 0.00 134.00 0.00 24.36 372.25 14.67 95.49 0.00 95.49 7.46 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 23.63 345.74 19.44 126.03 0.00 126.03 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 134.00 0.00 27.27 416.71 17.90 135.49 0.00 135.49 7.46 100.00
sde2 0.00 0.00 0.00 123.00 0.00 22.73 378.45 14.80 105.24 0.00 105.24 8.13 100.00
sdg2 0.00 0.00 0.00 140.00 0.00 28.53 417.30 22.01 194.69 0.00 194.69 7.14 100.00
sdh2 0.00 0.00 0.00 126.00 0.00 23.93 388.88 24.42 200.95 0.00 200.95 7.94 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 28.09 439.08 16.65 145.13 0.00 145.13 7.63 100.00
sdn2 0.00 0.00 0.00 113.00 0.00 21.89 396.74 16.76 156.14 0.00 156.14 8.85 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 18.22 293.81 19.08 139.59 0.00 139.59 7.87 100.00
sde2 0.00 0.00 0.00 139.00 0.00 19.04 280.50 23.38 174.33 0.00 174.33 7.19 100.00
sdg2 0.00 0.00 0.00 134.00 0.00 19.88 303.91 21.91 160.15 0.00 160.15 7.46 100.00
sdh2 0.00 0.00 0.00 137.00 0.00 20.36 304.30 25.03 174.86 0.00 174.86 7.30 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 16.17 269.24 16.60 112.13 0.00 112.13 8.13 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 20.32 297.27 22.99 151.54 0.00 151.54 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 22.07 353.12 22.30 195.91 0.00 195.91 7.81 100.00
sde2 0.00 0.00 0.00 121.00 0.00 19.60 331.75 23.96 217.02 0.00 217.02 8.26 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 19.96 332.36 9.92 96.23 0.00 96.23 8.13 100.00
sdh2 0.00 0.00 0.00 137.00 0.00 22.12 330.71 18.31 166.01 0.00 166.01 7.30 100.00
sdj2 0.00 0.00 0.00 114.00 0.00 17.66 317.34 9.16 106.25 0.00 106.25 8.70 99.20
sdn2 0.00 0.00 0.00 137.00 0.00 22.29 333.25 22.46 180.35 0.00 180.35 7.30 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 20.18 333.23 22.17 164.65 0.00 164.65 8.06 100.00
sde2 0.00 0.00 0.00 135.00 0.00 22.87 346.93 26.14 178.19 0.00 178.19 7.41 100.00
sdg2 0.00 0.00 0.00 116.00 0.00 17.89 315.79 14.36 123.10 0.00 123.10 7.72 89.60
sdh2 0.00 0.00 0.00 137.00 0.00 21.66 323.81 18.37 122.48 0.00 122.48 7.30 100.00
sdj2 0.00 0.00 0.00 134.00 0.00 20.81 318.07 17.58 118.30 0.00 118.30 7.46 100.00
sdn2 0.00 0.00 0.00 123.00 0.00 20.34 338.62 15.82 135.51 0.00 135.51 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 120.00 0.00 18.42 314.42 18.87 153.23 0.00 153.23 8.33 100.00
sde2 0.00 0.00 0.00 134.00 0.00 22.42 342.63 16.18 128.03 0.00 128.03 7.46 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 18.94 310.26 12.04 91.07 0.00 91.07 8.00 100.00
sdh2 0.00 0.00 0.00 130.00 0.00 22.21 349.82 22.69 178.92 0.00 178.92 7.69 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 17.70 294.79 10.82 95.25 0.00 95.25 8.13 100.00
sdn2 0.00 0.00 0.00 120.00 0.00 18.33 312.79 10.38 90.13 0.00 90.13 8.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 126.00 0.00 20.75 337.21 25.50 188.83 0.00 188.83 7.94 100.00
sde2 0.00 0.00 0.00 136.00 0.00 22.89 344.70 18.94 123.88 0.00 123.88 7.35 100.00
sdg2 0.00 0.00 0.00 124.00 0.00 19.38 320.02 12.20 90.26 0.00 90.26 8.06 100.00
sdh2 0.00 0.00 0.00 130.00 0.00 18.39 289.77 22.88 142.74 0.00 142.74 7.69 100.00
sdj2 0.00 0.00 0.00 122.00 0.00 19.51 327.53 13.46 99.34 0.00 99.34 7.97 97.20
sdn2 0.00 0.00 0.00 113.00 0.00 18.71 339.19 10.24 79.75 0.00 79.75 8.18 92.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 21.93 337.68 24.21 190.08 0.00 190.08 7.52 100.00
sde2 0.00 0.00 0.00 141.00 0.00 21.50 312.24 19.40 151.72 0.00 151.72 7.09 100.00
sdg2 0.00 0.00 0.00 127.00 0.00 20.11 324.28 13.69 107.59 0.00 107.59 7.87 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 23.25 355.28 30.77 239.43 0.00 239.43 7.46 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 25.87 401.31 19.81 166.73 0.00 166.73 7.58 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 21.38 329.17 16.31 114.41 0.00 114.41 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 26.81 442.87 19.77 164.23 0.00 164.23 8.06 100.00
sde2 0.00 0.00 0.00 133.00 0.00 23.59 363.24 19.74 142.47 0.00 142.47 7.52 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 23.96 368.88 14.89 120.57 0.00 120.57 7.52 100.00
sdh2 0.00 0.00 0.00 126.00 0.00 24.32 395.23 22.30 196.92 0.00 196.92 7.94 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 25.28 395.28 17.36 116.31 0.00 116.31 7.63 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 22.20 355.19 25.56 219.19 0.00 219.19 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 145.00 0.00 28.75 406.07 21.30 143.14 0.00 143.14 6.90 100.00
sde2 0.00 0.00 0.00 118.00 0.00 24.11 418.50 9.63 106.51 0.00 106.51 8.47 100.00
sdg2 0.00 0.00 0.00 136.00 0.00 23.94 360.54 14.35 86.50 0.00 86.50 7.35 100.00
sdh2 0.00 0.00 0.00 138.00 0.00 27.57 409.22 29.46 166.78 0.00 166.78 7.25 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 23.00 356.80 20.26 150.45 0.00 150.45 7.58 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 23.27 353.09 20.57 123.44 0.00 123.44 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 22.42 335.11 21.63 167.82 0.00 167.82 7.30 100.00
sde2 0.00 0.00 0.00 112.00 0.00 17.75 324.65 8.86 76.04 0.00 76.04 8.89 99.60
sdg2 0.00 0.00 0.00 134.00 0.00 20.97 320.44 22.45 157.13 0.00 157.13 7.46 100.00
sdh2 0.00 0.00 0.00 139.00 0.00 24.24 357.22 24.40 208.55 0.00 208.55 7.19 100.00
sdj2 0.00 0.00 0.00 122.00 0.00 20.97 352.07 22.94 172.46 0.00 172.46 8.20 100.00
sdn2 0.00 0.00 0.00 126.00 0.00 21.13 343.37 13.42 115.71 0.00 115.71 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 21.54 331.75 15.94 131.40 0.00 131.40 7.52 100.00
sde2 0.00 0.00 0.00 118.00 0.00 18.50 321.08 9.11 76.17 0.00 76.17 8.47 100.00
sdg2 0.00 0.00 0.00 138.00 0.00 24.35 361.40 28.75 228.09 0.00 228.09 7.25 100.00
sdh2 0.00 0.00 0.00 126.00 0.00 20.93 340.18 17.63 161.43 0.00 161.43 7.94 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 22.18 344.14 19.14 167.76 0.00 167.76 7.58 100.00
sdn2 0.00 0.00 0.00 116.00 0.00 20.21 356.84 12.75 117.24 0.00 117.24 7.76 90.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 123.00 0.00 19.99 332.82 14.42 106.83 0.00 106.83 8.13 100.00
sde2 0.00 0.00 0.00 132.00 0.00 21.99 341.15 14.04 96.42 0.00 96.42 7.58 100.00
sdg2 0.00 0.00 0.00 140.00 0.00 22.66 331.46 21.89 137.03 0.00 137.03 7.14 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 16.71 278.28 13.13 87.25 0.00 87.25 8.13 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 19.52 302.92 13.22 102.61 0.00 102.61 7.58 100.00
sdn2 0.00 0.00 0.00 143.00 0.00 22.96 328.79 19.98 133.87 0.00 133.87 6.99 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 129.00 0.00 21.71 344.67 16.92 129.98 0.00 129.98 7.75 100.00
sde2 0.00 0.00 0.00 128.00 0.00 18.76 300.22 14.88 107.69 0.00 107.69 7.81 100.00
sdg2 0.00 0.00 0.00 132.00 0.00 23.17 359.43 18.18 164.52 0.00 164.52 7.58 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 25.03 382.57 16.04 140.84 0.00 140.84 7.46 100.00
sdj2 0.00 0.00 0.00 122.00 0.00 21.01 352.65 15.24 99.67 0.00 99.67 8.20 100.00
sdn2 0.00 0.00 0.00 127.00 0.00 20.67 333.32 24.52 172.31 0.00 172.31 7.87 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 123.00 0.00 19.71 328.23 9.47 98.50 0.00 98.50 8.00 98.40
sde2 0.00 0.00 0.00 170.00 0.00 20.15 242.71 50.92 129.32 0.00 129.32 5.88 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 18.92 236.21 45.07 105.24 0.00 105.24 6.10 100.00
sdh2 0.00 0.00 0.00 116.00 0.00 17.19 303.51 9.57 84.24 0.00 84.24 7.97 92.40
sdj2 0.00 0.00 0.00 124.00 0.00 22.05 364.19 33.71 152.81 0.00 152.81 8.06 100.00
sdn2 0.00 0.00 0.00 118.00 0.00 20.77 360.41 17.51 192.75 0.00 192.75 8.47 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 229.00 0.00 1.70 15.20 122.39 270.52 0.00 270.52 3.72 85.20
sde2 0.00 0.00 0.00 166.00 0.00 5.00 61.72 151.73 677.13 0.00 677.13 6.02 100.00
sdg2 0.00 0.00 0.00 205.00 0.00 2.67 26.70 147.84 532.72 0.00 532.72 4.88 100.00
sdh2 0.00 0.00 0.00 221.00 0.00 1.81 16.74 103.44 174.03 0.00 174.03 3.51 77.60
sdj2 0.00 0.00 0.00 230.00 0.00 2.93 26.08 145.06 422.23 0.00 422.23 4.35 100.00
sdn2 0.00 0.00 0.00 245.00 0.00 2.01 16.80 141.46 333.39 0.00 333.39 4.08 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 170.00 0.00 1.34 16.20 143.51 858.26 0.00 858.26 5.88 100.00
sde2 0.00 0.00 0.00 159.00 0.00 1.24 16.00 143.81 959.32 0.00 959.32 6.29 100.00
sdg2 0.00 0.00 0.00 169.00 0.00 1.53 18.60 144.28 902.84 0.00 902.84 5.92 100.00
sdh2 0.00 0.00 0.00 164.00 0.00 1.23 15.39 143.27 875.00 0.00 875.00 6.10 100.00
sdj2 0.00 0.00 0.00 162.00 0.00 1.24 15.70 144.18 917.11 0.00 917.11 6.17 100.00
sdn2 0.00 0.00 0.00 177.00 0.00 1.38 16.02 142.95 840.93 0.00 840.93 5.65 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 161.00 0.00 1.15 14.62 143.64 847.30 0.00 847.30 6.21 100.00
sde2 0.00 0.00 0.00 176.00 0.00 1.38 16.02 143.54 839.45 0.00 839.45 5.68 100.00
sdg2 0.00 0.00 0.00 175.00 0.00 1.37 15.99 143.59 814.08 0.00 814.08 5.71 100.00
sdh2 0.00 0.00 0.00 167.00 0.00 1.26 15.44 143.04 879.74 0.00 879.74 5.99 100.00
sdj2 0.00 0.00 0.00 171.00 0.00 1.26 15.04 144.03 853.59 0.00 853.59 5.85 100.00
sdn2 0.00 0.00 0.00 164.00 0.00 1.28 15.99 144.01 823.85 0.00 823.85 6.10 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.25 15.77 142.74 862.50 0.00 862.50 6.13 100.00
sde2 0.00 0.00 0.00 173.00 0.00 1.36 16.09 145.56 825.46 0.00 825.46 5.78 100.00
sdg2 0.00 0.00 0.00 170.00 0.00 1.33 15.99 143.28 831.76 0.00 831.76 5.88 100.00
sdh2 0.00 0.00 0.00 167.00 0.00 1.26 15.44 143.64 847.88 0.00 847.88 5.99 100.00
sdj2 0.00 0.00 0.00 171.00 0.00 1.34 16.06 143.58 836.00 0.00 836.00 5.85 100.00
sdn2 0.00 0.00 0.00 172.00 0.00 1.37 16.31 143.37 850.40 0.00 850.40 5.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.26 15.88 143.28 901.28 0.00 901.28 6.13 100.00
sde2 0.00 0.00 0.00 167.00 0.00 1.32 16.16 144.33 847.69 0.00 847.69 5.99 100.00
sdg2 0.00 0.00 0.00 183.00 0.00 1.37 15.38 144.42 813.84 0.00 813.84 5.46 100.00
sdh2 0.00 0.00 0.00 176.00 0.00 1.29 15.06 144.15 851.14 0.00 851.14 5.68 100.00
sdj2 0.00 0.00 0.00 165.00 0.00 1.27 15.73 142.74 862.38 0.00 862.38 6.06 100.00
sdn2 0.00 0.00 0.00 161.00 0.00 1.21 15.39 142.96 870.24 0.00 870.24 6.21 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 165.00 0.00 1.20 14.90 143.72 889.58 0.00 889.58 6.06 100.00
sde2 0.00 0.00 0.00 177.00 0.00 1.40 16.24 144.23 843.93 0.00 843.93 5.65 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.27 15.74 143.64 832.61 0.00 832.61 6.06 100.00
sdh2 0.00 0.00 0.00 166.00 0.00 1.20 14.82 143.33 829.52 0.00 829.52 6.02 100.00
sdj2 0.00 0.00 0.00 163.00 0.00 1.29 16.15 144.38 874.97 0.00 874.97 6.13 100.00
sdn2 0.00 0.00 0.00 173.00 0.00 1.27 14.98 142.62 862.87 0.00 862.87 5.78 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.27 15.63 143.88 868.69 0.00 868.69 5.99 100.00
sde2 0.00 0.00 0.00 163.00 0.00 1.29 16.24 144.60 835.29 0.00 835.29 6.13 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.27 15.75 143.91 890.50 0.00 890.50 6.06 100.00
sdh2 0.00 0.00 0.00 175.00 0.00 1.33 15.51 143.72 849.37 0.00 849.37 5.71 100.00
sdj2 0.00 0.00 0.00 174.00 0.00 1.29 15.21 144.86 851.70 0.00 851.70 5.75 100.00
sdn2 0.00 0.00 0.00 165.00 0.00 1.31 16.29 144.63 841.36 0.00 841.36 6.08 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.50 18.68 144.28 854.20 0.00 854.20 6.10 100.00
sde2 0.00 0.00 0.00 169.00 0.00 1.31 15.85 143.56 888.40 0.00 888.40 5.92 100.00
sdg2 0.00 0.00 0.00 156.00 0.00 1.23 16.17 144.35 914.38 0.00 914.38 6.41 100.00
sdh2 0.00 0.00 0.00 177.00 0.00 1.38 15.92 142.18 800.47 0.00 800.47 5.65 100.00
sdj2 0.00 0.00 0.00 172.00 0.00 1.35 16.10 145.64 835.23 0.00 835.23 5.81 100.00
sdn2 0.00 0.00 0.00 155.00 0.00 1.23 16.28 144.18 886.74 0.00 886.74 6.45 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.26 15.44 143.54 881.70 0.00 881.70 5.99 100.00
sde2 0.00 0.00 0.00 169.00 0.00 1.31 15.93 143.30 855.10 0.00 855.10 5.92 100.00
sdg2 0.00 0.00 0.00 161.00 0.00 1.28 16.29 143.24 877.49 0.00 877.49 6.21 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.34 16.01 144.78 831.60 0.00 831.60 5.81 100.00
sdj2 0.00 0.00 0.00 168.00 0.00 1.63 19.88 143.46 868.02 0.00 868.02 5.95 100.00
sdn2 0.00 0.00 0.00 165.00 0.00 1.24 15.38 143.55 917.14 0.00 917.14 6.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 152.00 0.00 1.16 15.61 143.59 877.39 0.00 877.39 6.58 100.00
sde2 0.00 0.00 0.00 174.00 0.00 1.29 15.15 142.96 827.75 0.00 827.75 5.75 100.00
sdg2 0.00 0.00 0.00 156.00 0.00 1.13 14.85 145.68 900.59 0.00 900.59 6.41 100.00
sdh2 0.00 0.00 0.00 170.00 0.00 1.26 15.13 143.58 841.04 0.00 841.04 5.88 100.00
sdj2 0.00 0.00 0.00 159.00 0.00 1.21 15.62 144.20 886.04 0.00 886.04 6.29 100.00
sdn2 0.00 0.00 0.00 167.00 0.00 1.28 15.72 143.76 845.49 0.00 845.49 5.96 99.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 177.00 0.00 1.34 15.56 143.96 875.89 0.00 875.89 5.65 100.00
sde2 0.00 0.00 0.00 160.00 0.00 1.26 16.18 121.16 844.92 0.00 844.92 6.25 100.00
sdg2 0.00 0.00 0.00 144.00 0.00 1.27 18.04 111.92 986.64 0.00 986.64 6.94 100.00
sdh2 0.00 0.00 0.00 166.00 0.00 1.35 16.65 134.88 852.14 0.00 852.14 6.02 100.00
sdj2 0.00 0.00 0.00 161.00 0.00 1.21 15.42 124.24 877.89 0.00 877.89 6.24 100.40
sdn2 0.00 0.00 0.00 163.00 0.00 1.29 16.26 139.86 892.39 0.00 892.39 6.16 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 166.00 0.00 1.22 15.00 121.25 851.98 0.00 851.98 6.02 100.00
sde2 0.00 0.00 0.00 129.00 0.00 9.34 148.22 23.90 521.15 0.00 521.15 7.75 100.00
sdg2 0.00 0.00 0.00 114.00 0.00 10.31 185.16 15.93 483.12 0.00 483.12 8.77 100.00
sdh2 0.00 0.00 0.00 139.00 0.00 7.40 109.01 41.24 675.17 0.00 675.17 7.19 100.00
sdj2 0.00 0.00 0.00 126.00 0.00 7.92 128.72 27.82 571.84 0.00 571.84 7.90 99.60
sdn2 0.00 0.00 0.00 144.00 0.00 8.29 117.96 51.15 726.25 0.00 726.25 6.92 99.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 108.00 0.00 12.98 246.22 41.64 722.78 0.00 722.78 9.26 100.00
sde2 0.00 0.00 0.00 113.00 0.00 15.27 276.77 10.06 97.91 0.00 97.91 8.85 100.00
sdg2 0.00 0.00 0.00 106.00 0.00 11.21 216.58 4.54 47.89 0.00 47.89 9.40 99.60
sdh2 0.00 0.00 0.00 118.00 0.00 14.70 255.18 14.40 136.20 0.00 136.20 8.47 100.00
sdj2 0.00 0.00 0.00 115.00 0.00 15.12 269.33 8.55 87.55 0.00 87.55 8.45 97.20
sdn2 0.00 0.00 0.00 125.00 0.00 14.72 241.14 11.76 133.66 0.00 133.66 8.00 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 142.00 0.00 20.57 296.67 25.65 158.70 0.00 158.70 7.04 100.00
sde2 0.00 0.00 0.00 112.00 0.00 14.32 261.85 9.05 68.93 0.00 68.93 8.82 98.80
sdg2 0.00 0.00 0.00 122.00 0.00 17.58 295.10 9.98 67.21 0.00 67.21 8.16 99.60
sdh2 0.00 0.00 0.00 115.00 0.00 15.99 284.79 6.14 54.99 0.00 54.99 8.70 100.00
sdj2 0.00 0.00 0.00 115.00 0.00 14.09 250.87 7.66 65.88 0.00 65.88 8.70 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 19.47 299.81 16.18 104.69 0.00 104.69 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 134.00 0.00 20.67 315.99 25.30 204.78 0.00 204.78 7.46 100.00
sde2 0.00 0.00 0.00 100.00 0.00 12.20 249.85 5.18 52.40 0.00 52.40 8.64 86.40
sdg2 0.00 0.00 0.00 102.00 0.00 15.72 315.60 11.64 102.67 0.00 102.67 9.69 98.80
sdh2 0.00 0.00 0.00 89.00 0.00 9.87 227.10 5.98 46.16 0.00 46.16 9.17 81.60
sdj2 0.00 0.00 0.00 98.00 0.00 12.51 261.43 9.74 72.82 0.00 72.82 8.90 87.20
sdn2 0.00 0.00 0.00 109.00 0.00 13.17 247.51 8.04 73.21 0.00 73.21 8.92 97.20
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 24.12 377.04 17.32 144.98 0.00 144.98 7.63 100.00
sde2 0.00 0.00 0.00 115.00 0.00 19.62 349.44 11.75 98.05 0.00 98.05 8.70 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 21.91 337.44 15.76 131.49 0.00 131.49 7.52 100.00
sdh2 0.00 0.00 0.00 132.00 0.00 23.77 368.75 22.13 169.91 0.00 169.91 7.58 100.00
sdj2 0.00 0.00 0.00 136.00 0.00 22.23 334.74 28.36 190.62 0.00 190.62 7.35 100.00
sdn2 0.00 0.00 0.00 132.00 0.00 24.08 373.55 20.48 159.39 0.00 159.39 7.58 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 18.70 299.23 18.47 146.25 0.00 146.25 7.81 100.00
sde2 0.00 0.00 0.00 123.00 0.00 16.32 271.72 11.67 107.12 0.00 107.12 8.13 100.00
sdg2 0.00 0.00 0.00 122.00 0.00 19.43 326.24 10.79 97.08 0.00 97.08 8.20 100.00
sdh2 0.00 0.00 0.00 130.00 0.00 22.08 347.91 12.61 109.42 0.00 109.42 7.69 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 19.35 322.11 25.40 236.16 0.00 236.16 8.13 100.00
sdn2 0.00 0.00 0.00 122.00 0.00 19.04 319.56 11.72 99.21 0.00 99.21 8.23 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 21.82 360.40 17.60 133.52 0.00 133.52 8.06 100.00
sde2 0.00 0.00 0.00 126.00 0.00 21.67 352.15 16.44 115.05 0.00 115.05 7.94 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 22.14 331.00 22.68 136.61 0.00 136.61 7.30 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 21.78 340.53 16.10 111.54 0.00 111.54 7.63 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 21.11 332.56 18.44 143.60 0.00 143.60 7.69 100.00
sdn2 0.00 0.00 0.00 123.00 0.00 18.59 309.49 26.00 173.20 0.00 173.20 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 18.46 295.29 16.18 130.84 0.00 130.84 7.81 100.00
sde2 0.00 0.00 0.00 137.00 0.00 18.95 283.28 24.28 174.51 0.00 174.51 7.30 100.00
sdg2 0.00 0.00 0.00 135.00 0.00 18.20 276.12 14.60 135.70 0.00 135.70 7.41 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 20.85 325.91 12.77 102.35 0.00 102.35 7.63 100.00
sdj2 0.00 0.00 0.00 148.00 0.00 23.81 329.52 20.18 136.22 0.00 136.22 6.78 100.40
sdn2 0.00 0.00 0.00 140.00 0.00 23.81 348.30 24.20 201.26 0.00 201.26 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 134.00 0.00 20.65 315.58 21.48 145.58 0.00 145.58 7.46 100.00
sde2 0.00 0.00 0.00 134.00 0.00 23.78 363.46 27.97 176.81 0.00 176.81 7.46 100.00
sdg2 0.00 0.00 0.00 121.00 0.00 21.07 356.58 13.39 83.37 0.00 83.37 8.26 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 24.31 371.51 15.91 120.81 0.00 120.81 7.46 100.00
sdj2 0.00 0.00 0.00 136.00 0.00 23.47 353.37 20.72 128.62 0.00 128.62 7.32 99.60
sdn2 0.00 0.00 0.00 131.00 0.00 21.92 342.62 19.77 154.17 0.00 154.17 7.60 99.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 20.64 317.83 27.20 185.89 0.00 185.89 7.52 100.00
sde2 0.00 0.00 0.00 134.00 0.00 19.67 300.63 27.46 194.81 0.00 194.81 7.46 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 19.34 289.09 22.42 178.66 0.00 178.66 7.30 100.00
sdh2 0.00 0.00 0.00 124.00 0.00 18.27 301.69 15.61 123.94 0.00 123.94 8.06 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 20.11 321.75 20.44 163.88 0.00 163.88 7.81 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 23.58 344.87 25.67 169.80 0.00 169.80 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 18.07 296.01 23.98 211.39 0.00 211.39 8.00 100.00
sde2 0.00 0.00 0.00 121.00 0.00 22.01 372.57 31.06 265.16 0.00 265.16 8.26 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 15.92 260.88 17.04 117.79 0.00 117.79 8.00 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 18.59 304.54 13.00 92.77 0.00 92.77 8.00 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 20.85 326.01 22.96 193.74 0.00 193.74 7.63 100.00
sdn2 0.00 0.00 0.00 138.00 0.00 21.07 312.72 16.86 131.59 0.00 131.59 7.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 73.00 0.00 10.21 286.49 7.44 157.97 0.00 157.97 8.00 58.40
sde2 0.00 0.00 0.00 93.00 0.00 12.87 283.38 14.62 234.37 0.00 234.37 8.65 80.40
sdg2 0.00 0.00 0.00 75.00 0.00 12.64 345.12 11.08 200.80 0.00 200.80 9.28 69.60
sdh2 0.00 0.00 0.00 70.00 0.00 10.21 298.73 5.83 118.80 0.00 118.80 7.54 52.80
sdj2 0.00 0.00 0.00 81.00 0.00 12.60 318.62 6.62 103.70 0.00 103.70 7.80 63.20
sdn2 0.00 0.00 0.00 64.00 0.00 9.24 295.81 7.56 156.06 0.00 156.06 8.69 55.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 92.00 0.00 12.45 277.13 17.23 131.57 0.00 131.57 9.04 83.20
sde2 0.00 0.00 0.00 101.00 0.00 15.62 316.64 12.98 117.39 0.00 117.39 7.60 76.80
sdg2 0.00 0.00 0.00 98.00 0.00 13.30 277.85 13.04 111.39 0.00 111.39 8.57 84.00
sdh2 0.00 0.00 0.00 107.00 0.00 15.64 299.35 12.21 82.47 0.00 82.47 7.81 83.60
sdj2 0.00 0.00 0.00 108.00 0.00 15.71 297.96 14.64 102.85 0.00 102.85 7.30 78.80
sdn2 0.00 0.00 0.00 106.00 0.00 15.95 308.14 18.62 144.68 0.00 144.68 8.04 85.20
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 135.00 0.00 21.95 333.00 26.33 201.42 0.00 201.42 7.41 100.00
sde2 0.00 0.00 0.00 133.00 0.00 18.40 283.26 16.86 126.92 0.00 126.92 7.52 100.00
sdg2 0.00 0.00 0.00 127.00 0.00 19.40 312.76 29.04 197.64 0.00 197.64 7.87 100.00
sdh2 0.00 0.00 0.00 121.00 0.00 14.60 247.17 19.54 149.22 0.00 149.22 8.26 100.00
sdj2 0.00 0.00 0.00 129.00 0.00 20.42 324.20 18.60 149.74 0.00 149.74 7.75 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 20.78 315.22 21.48 156.83 0.00 156.83 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 20.78 310.58 19.94 161.23 0.00 161.23 7.30 100.00
sde2 0.00 0.00 0.00 125.00 0.00 20.49 335.68 15.56 104.22 0.00 104.22 8.00 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 19.33 302.26 24.89 198.38 0.00 198.38 7.63 100.00
sdh2 0.00 0.00 0.00 130.00 0.00 20.93 329.70 16.04 156.00 0.00 156.00 7.69 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 22.33 346.38 16.57 138.42 0.00 138.42 7.58 100.00
sdn2 0.00 0.00 0.00 125.00 0.00 21.69 355.34 19.31 154.94 0.00 154.94 8.00 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 135.00 0.00 19.37 293.89 23.25 157.66 0.00 157.66 7.41 100.00
sde2 0.00 0.00 0.00 145.00 0.00 22.88 323.16 23.56 157.19 0.00 157.19 6.90 100.00
sdg2 0.00 0.00 0.00 132.00 0.00 20.93 324.70 24.46 196.97 0.00 196.97 7.58 100.00
sdh2 0.00 0.00 0.00 137.00 0.00 20.91 312.64 18.96 117.23 0.00 117.23 7.30 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 19.86 301.34 18.69 133.90 0.00 133.90 7.41 100.00
sdn2 0.00 0.00 0.00 137.00 0.00 20.18 301.60 14.34 126.39 0.00 126.39 7.30 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 23.15 356.41 20.27 166.65 0.00 166.65 7.52 100.00
sde2 0.00 0.00 0.00 131.00 0.00 23.59 368.79 17.52 162.14 0.00 162.14 7.63 100.00
sdg2 0.00 0.00 0.00 121.00 0.00 19.27 326.10 14.42 134.58 0.00 134.58 8.26 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 25.76 396.67 15.18 127.76 0.00 127.76 7.52 100.00
sdj2 0.00 0.00 0.00 124.00 0.00 19.05 314.69 15.23 110.48 0.00 110.48 8.06 100.00
sdn2 0.00 0.00 0.00 139.00 0.00 25.47 375.32 20.48 111.48 0.00 111.48 7.22 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 129.00 0.00 25.83 410.05 15.56 123.60 0.00 123.60 7.78 100.40
sde2 0.00 0.00 0.00 124.00 0.00 21.61 356.84 15.96 120.58 0.00 120.58 8.10 100.40
sdg2 0.00 0.00 0.00 126.00 0.00 22.04 358.22 18.33 154.54 0.00 154.54 7.97 100.40
sdh2 0.00 0.00 0.00 134.00 0.00 24.81 379.12 13.02 104.78 0.00 104.78 7.49 100.40
sdj2 0.00 0.00 0.00 130.00 0.00 24.98 393.48 17.87 139.94 0.00 139.94 7.72 100.40
sdn2 0.00 0.00 0.00 132.00 0.00 24.30 377.02 19.56 172.97 0.00 172.97 7.58 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 23.09 345.12 24.87 153.72 0.00 153.72 7.27 99.60
sde2 0.00 0.00 0.00 165.00 0.00 19.90 246.98 39.76 110.47 0.00 110.47 6.04 99.60
sdg2 0.00 0.00 0.00 158.00 0.00 18.29 237.13 37.80 103.87 0.00 103.87 6.30 99.60
sdh2 0.00 0.00 0.00 124.00 0.00 19.81 327.12 12.65 94.90 0.00 94.90 8.03 99.60
sdj2 0.00 0.00 0.00 126.00 0.00 18.10 294.25 33.98 189.52 0.00 189.52 7.90 99.60
sdn2 0.00 0.00 0.00 138.00 0.00 21.59 320.44 17.30 130.84 0.00 130.84 7.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 214.00 0.00 5.53 52.89 118.60 272.45 0.00 272.45 4.69 100.40
sde2 0.00 0.00 0.00 189.00 0.00 5.40 58.53 151.29 551.32 0.00 551.32 5.31 100.40
sdg2 0.00 0.00 0.00 199.00 0.00 4.95 50.97 157.47 473.37 0.00 473.37 5.05 100.40
sdh2 0.00 0.00 0.00 223.00 0.00 3.35 30.79 94.04 219.07 0.00 219.07 3.89 86.80
sdj2 0.00 0.00 0.00 160.00 0.00 6.42 82.19 153.28 560.30 0.00 560.30 6.27 100.40
sdn2 0.00 0.00 0.00 229.00 0.00 3.87 34.57 132.81 307.65 0.00 307.65 4.37 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 158.00 0.00 1.15 14.96 143.97 918.08 0.00 918.08 6.33 100.00
sde2 0.00 0.00 0.00 163.00 0.00 1.27 15.97 143.96 918.60 0.00 918.60 6.13 100.00
sdg2 0.00 0.00 0.00 173.00 0.00 1.72 20.40 145.86 988.05 0.00 988.05 5.78 100.00
sdh2 0.00 0.00 0.00 161.00 0.00 1.24 15.79 143.26 798.29 0.00 798.29 6.21 100.00
sdj2 0.00 0.00 0.00 166.00 0.00 1.28 15.80 144.91 959.11 0.00 959.11 6.02 100.00
sdn2 0.00 0.00 0.00 166.00 0.00 1.63 20.13 144.72 895.35 0.00 895.35 6.02 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.25 15.28 145.00 874.13 0.00 874.13 5.99 100.00
sde2 0.00 0.00 0.00 173.00 0.00 1.36 16.05 144.12 843.28 0.00 843.28 5.78 100.00
sdg2 0.00 0.00 0.00 171.00 0.00 1.28 15.38 144.52 838.97 0.00 838.97 5.85 100.00
sdh2 0.00 0.00 0.00 166.00 0.00 1.27 15.72 143.00 858.67 0.00 858.67 6.02 100.00
sdj2 0.00 0.00 0.00 174.00 0.00 1.36 15.98 143.44 833.72 0.00 833.72 5.75 100.00
sdn2 0.00 0.00 0.00 164.00 0.00 1.21 15.10 144.42 849.88 0.00 849.88 6.10 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 156.00 0.00 1.59 20.82 145.34 894.33 0.00 894.33 6.41 100.00
sde2 0.00 0.00 0.00 161.00 0.00 1.51 19.16 144.16 879.58 0.00 879.58 6.21 100.00
sdg2 0.00 0.00 0.00 170.00 0.00 1.60 19.31 143.94 845.39 0.00 845.39 5.88 100.00
sdh2 0.00 0.00 0.00 168.00 0.00 1.29 15.74 143.35 854.48 0.00 854.48 5.95 100.00
sdj2 0.00 0.00 0.00 169.00 0.00 1.26 15.28 144.42 842.65 0.00 842.65 5.92 100.00
sdn2 0.00 0.00 0.00 165.00 0.00 1.20 14.86 142.48 889.75 0.00 889.75 6.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.21 15.15 143.28 908.22 0.00 908.22 6.13 100.00
sde2 0.00 0.00 0.00 159.00 0.00 1.23 15.87 145.28 897.56 0.00 897.56 6.29 100.00
sdg2 0.00 0.00 0.00 166.00 0.00 1.28 15.81 143.29 864.72 0.00 864.72 6.02 100.00
sdh2 0.00 0.00 0.00 163.00 0.00 1.24 15.57 143.36 861.96 0.00 861.96 6.13 100.00
sdj2 0.00 0.00 0.00 165.00 0.00 1.23 15.22 144.19 857.60 0.00 857.60 6.06 100.00
sdn2 0.00 0.00 0.00 174.00 0.00 1.29 15.23 143.66 832.14 0.00 832.14 5.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 166.00 0.00 1.23 15.18 144.34 886.43 0.00 886.43 6.02 100.00
sde2 0.00 0.00 0.00 168.00 0.00 1.35 16.49 145.38 895.83 0.00 895.83 5.95 100.00
sdg2 0.00 0.00 0.00 167.00 0.00 1.24 15.17 143.70 854.71 0.00 854.71 5.99 100.00
sdh2 0.00 0.00 0.00 177.00 0.00 1.36 15.79 143.30 837.83 0.00 837.83 5.65 100.00
sdj2 0.00 0.00 0.00 163.00 0.00 1.32 16.58 144.37 901.10 0.00 901.10 6.13 100.00
sdn2 0.00 0.00 0.00 169.00 0.00 1.29 15.59 143.56 831.57 0.00 831.57 5.92 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 162.00 0.00 1.22 15.36 144.11 861.04 0.00 861.04 6.17 100.00
sde2 0.00 0.00 0.00 162.00 0.00 1.73 21.88 144.21 882.94 0.00 882.94 6.17 100.00
sdg2 0.00 0.00 0.00 169.00 0.00 1.24 14.98 145.54 849.96 0.00 849.96 5.92 100.00
sdh2 0.00 0.00 0.00 174.00 0.00 1.34 15.75 143.04 822.39 0.00 822.39 5.75 100.00
sdj2 0.00 0.00 0.00 165.00 0.00 1.29 16.05 144.24 879.27 0.00 879.27 6.06 100.00
sdn2 0.00 0.00 0.00 173.00 0.00 1.31 15.50 142.96 826.40 0.00 826.40 5.78 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.21 14.81 143.40 875.62 0.00 875.62 5.99 100.00
sde2 0.00 0.00 0.00 158.00 0.00 1.19 15.37 144.23 898.43 0.00 898.43 6.33 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.53 18.62 144.39 860.52 0.00 860.52 5.95 100.00
sdh2 0.00 0.00 0.00 168.00 0.00 1.23 14.96 142.71 864.76 0.00 864.76 5.95 100.00
sdj2 0.00 0.00 0.00 166.00 0.00 1.23 15.22 144.12 859.59 0.00 859.59 6.02 100.00
sdn2 0.00 0.00 0.00 167.00 0.00 1.33 16.28 144.16 856.10 0.00 856.10 5.99 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 168.00 0.00 1.25 15.25 143.21 871.12 0.00 871.12 5.95 100.00
sde2 0.00 0.00 0.00 161.00 0.00 1.24 15.76 144.51 903.80 0.00 903.80 6.21 100.00
sdg2 0.00 0.00 0.00 159.00 0.00 1.34 17.25 144.22 877.69 0.00 877.69 6.29 100.00
sdh2 0.00 0.00 0.00 173.00 0.00 1.29 15.27 143.70 799.86 0.00 799.86 5.78 100.00
sdj2 0.00 0.00 0.00 166.00 0.00 1.30 16.08 144.94 857.76 0.00 857.76 6.02 100.00
sdn2 0.00 0.00 0.00 159.00 0.00 1.23 15.91 144.23 900.43 0.00 900.43 6.29 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 177.00 0.00 1.35 15.57 144.63 818.17 0.00 818.17 5.65 100.00
sde2 0.00 0.00 0.00 170.00 0.00 1.35 16.24 143.78 854.78 0.00 854.78 5.88 100.00
sdg2 0.00 0.00 0.00 161.00 0.00 1.26 16.03 144.60 908.99 0.00 908.99 6.21 100.00
sdh2 0.00 0.00 0.00 167.00 0.00 1.22 14.96 143.78 867.52 0.00 867.52 5.99 100.00
sdj2 0.00 0.00 0.00 181.00 0.00 1.38 15.59 144.72 860.07 0.00 860.07 5.52 100.00
sdn2 0.00 0.00 0.00 169.00 0.00 1.27 15.39 144.31 870.96 0.00 870.96 5.92 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 157.00 0.00 1.31 17.12 144.56 862.62 0.00 862.62 6.37 100.00
sde2 0.00 0.00 0.00 154.00 0.00 1.20 16.01 128.29 927.45 0.00 927.45 6.49 100.00
sdg2 0.00 0.00 0.00 161.00 0.00 1.28 16.24 106.12 927.73 0.00 927.73 6.21 100.00
sdh2 0.00 0.00 0.00 164.00 0.00 1.19 14.85 142.40 870.00 0.00 870.00 6.10 100.00
sdj2 0.00 0.00 0.00 146.00 0.00 1.66 23.25 148.69 888.99 0.00 888.99 6.85 100.00
sdn2 0.00 0.00 0.00 170.00 0.00 1.27 15.34 143.54 847.04 0.00 847.04 5.88 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.25 15.73 108.81 925.72 0.00 925.72 6.13 100.00
sde2 0.00 0.00 0.00 115.00 0.00 7.65 136.20 28.26 673.08 0.00 673.08 8.49 97.60
sdg2 0.00 0.00 0.00 88.00 0.00 5.69 132.42 6.03 334.32 0.00 334.32 10.23 90.00
sdh2 0.00 0.00 0.00 155.00 0.00 5.04 66.64 60.08 762.37 0.00 762.37 6.45 100.00
sdj2 0.00 0.00 0.00 151.00 0.00 1.84 24.94 137.11 995.05 0.00 995.05 6.62 100.00
sdn2 0.00 0.00 0.00 156.00 0.00 1.16 15.19 127.22 907.79 0.00 907.79 6.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 121.00 0.00 16.45 278.40 22.90 395.90 0.00 395.90 8.26 100.00
sde2 0.00 0.00 0.00 90.00 0.00 11.90 270.72 3.48 32.80 0.00 32.80 9.60 86.40
sdg2 0.00 0.00 0.00 94.00 0.00 15.62 340.43 6.84 59.83 0.00 59.83 9.57 90.00
sdh2 0.00 0.00 0.00 112.00 0.00 16.30 298.12 6.38 76.14 0.00 76.14 8.71 97.60
sdj2 0.00 0.00 0.00 113.00 0.00 9.68 175.50 57.40 957.24 0.00 957.24 8.85 100.00
sdn2 0.00 0.00 0.00 117.00 0.00 11.46 200.56 49.65 733.30 0.00 733.30 8.55 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 114.00 0.00 17.57 315.57 8.02 91.47 0.00 91.47 8.77 100.00
sde2 0.00 0.00 0.00 102.00 0.00 12.96 260.12 4.99 51.65 0.00 51.65 9.76 99.60
sdg2 0.00 0.00 0.00 107.00 0.00 15.25 291.91 5.25 59.59 0.00 59.59 9.27 99.20
sdh2 0.00 0.00 0.00 112.00 0.00 12.51 228.70 5.23 52.29 0.00 52.29 8.36 93.60
sdj2 0.00 0.00 0.00 123.00 0.00 17.85 297.26 15.32 205.07 0.00 205.07 8.13 100.00
sdn2 0.00 0.00 0.00 127.00 0.00 25.26 407.32 19.66 231.81 0.00 231.81 7.87 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 119.00 0.00 16.80 289.08 20.00 137.11 0.00 137.11 8.40 100.00
sde2 0.00 0.00 0.00 119.00 0.00 16.36 281.50 10.40 76.03 0.00 76.03 8.40 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 19.85 317.58 12.62 96.41 0.00 96.41 7.81 100.00
sdh2 0.00 0.00 0.00 120.00 0.00 18.45 314.94 11.48 78.10 0.00 78.10 8.33 100.00
sdj2 0.00 0.00 0.00 120.00 0.00 18.64 318.09 10.10 85.00 0.00 85.00 8.33 100.00
sdn2 0.00 0.00 0.00 118.00 0.00 16.67 289.34 13.59 116.75 0.00 116.75 8.47 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 22.75 352.92 26.24 182.91 0.00 182.91 7.58 100.00
sde2 0.00 0.00 0.00 131.00 0.00 21.65 338.48 20.69 165.83 0.00 165.83 7.63 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 22.60 370.34 18.52 137.89 0.00 137.89 8.00 100.00
sdh2 0.00 0.00 0.00 126.00 0.00 19.49 316.83 18.03 135.24 0.00 135.24 7.94 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 22.65 343.58 16.10 108.30 0.00 108.30 7.41 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 23.89 367.82 18.66 129.74 0.00 129.74 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 135.00 0.00 22.14 335.85 25.10 218.04 0.00 218.04 7.41 100.00
sde2 0.00 0.00 0.00 118.00 0.00 15.76 273.49 13.00 85.80 0.00 85.80 8.47 100.00
sdg2 0.00 0.00 0.00 117.00 0.00 16.68 292.02 9.32 91.59 0.00 91.59 8.55 100.00
sdh2 0.00 0.00 0.00 139.00 0.00 23.98 353.26 27.42 174.71 0.00 174.71 7.19 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 16.72 278.43 15.04 94.11 0.00 94.11 8.13 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 20.03 320.42 11.92 106.78 0.00 106.78 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 21.89 350.19 23.93 173.38 0.00 173.38 7.81 100.00
sde2 0.00 0.00 0.00 130.00 0.00 21.65 341.02 16.76 140.34 0.00 140.34 7.69 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 21.54 352.86 11.13 78.21 0.00 78.21 8.00 100.00
sdh2 0.00 0.00 0.00 138.00 0.00 22.89 339.74 26.07 219.16 0.00 219.16 7.25 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 21.76 362.32 24.96 207.90 0.00 207.90 8.13 100.00
sdn2 0.00 0.00 0.00 126.00 0.00 19.03 309.36 16.54 93.68 0.00 93.68 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 22.06 339.74 15.52 120.96 0.00 120.96 7.52 100.00
sde2 0.00 0.00 0.00 123.00 0.00 19.32 321.63 14.02 105.17 0.00 105.17 8.13 100.00
sdg2 0.00 0.00 0.00 129.00 0.00 20.11 319.22 14.88 92.03 0.00 92.03 7.75 100.00
sdh2 0.00 0.00 0.00 138.00 0.00 21.82 323.75 25.06 145.88 0.00 145.88 7.25 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 21.20 333.92 18.34 155.48 0.00 155.48 7.69 100.00
sdn2 0.00 0.00 0.00 120.00 0.00 19.26 328.70 15.73 158.27 0.00 158.27 8.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 21.04 344.78 15.42 131.17 0.00 131.17 8.00 100.00
sde2 0.00 0.00 0.00 124.00 0.00 20.40 336.99 13.73 120.77 0.00 120.77 8.06 100.00
sdg2 0.00 0.00 0.00 136.00 0.00 25.18 379.21 24.55 194.26 0.00 194.26 7.35 100.00
sdh2 0.00 0.00 0.00 141.00 0.00 24.23 351.96 24.27 193.56 0.00 193.56 7.09 100.00
sdj2 0.00 0.00 0.00 127.00 0.00 22.70 366.12 19.18 134.65 0.00 134.65 7.87 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 21.75 348.02 15.14 115.25 0.00 115.25 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 118.00 0.00 17.89 310.58 13.35 124.27 0.00 124.27 8.47 100.00
sde2 0.00 0.00 0.00 129.00 0.00 18.12 287.61 18.36 147.26 0.00 147.26 7.75 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 16.81 273.31 16.10 132.41 0.00 132.41 7.94 100.00
sdh2 0.00 0.00 0.00 147.00 0.00 23.10 321.82 32.22 208.35 0.00 208.35 6.80 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 19.56 288.27 16.93 141.90 0.00 141.90 7.19 100.00
sdn2 0.00 0.00 0.00 125.00 0.00 19.20 314.54 25.95 200.93 0.00 200.93 8.00 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 20.37 320.97 16.74 128.83 0.00 128.83 7.69 100.00
sde2 0.00 0.00 0.00 118.00 0.00 16.97 294.54 12.21 102.81 0.00 102.81 8.47 100.00
sdg2 0.00 0.00 0.00 127.00 0.00 18.97 305.91 19.51 156.63 0.00 156.63 7.87 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 22.68 349.20 26.69 217.56 0.00 217.56 7.52 100.00
sdj2 0.00 0.00 0.00 138.00 0.00 22.17 328.94 22.26 155.19 0.00 155.19 7.25 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 23.27 340.44 26.08 168.71 0.00 168.71 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 121.00 0.00 18.23 308.47 13.42 80.56 0.00 80.56 8.26 100.00
sde2 0.00 0.00 0.00 143.00 0.00 21.97 314.71 17.00 97.59 0.00 97.59 6.99 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 20.79 320.17 17.17 125.95 0.00 125.95 7.52 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 18.64 310.28 15.19 134.31 0.00 134.31 8.13 100.00
sdj2 0.00 0.00 0.00 140.00 0.00 24.65 360.61 23.19 158.69 0.00 158.69 7.14 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 21.58 327.33 29.33 223.59 0.00 223.59 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 126.00 0.00 18.23 296.24 14.97 143.65 0.00 143.65 7.94 100.00
sde2 0.00 0.00 0.00 131.00 0.00 20.38 318.62 19.20 160.09 0.00 160.09 7.63 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 17.35 284.28 10.89 98.98 0.00 98.98 8.00 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 21.52 347.06 14.77 122.52 0.00 122.52 7.87 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 23.46 364.03 23.28 163.30 0.00 163.30 7.58 100.00
sdn2 0.00 0.00 0.00 131.00 0.00 20.66 323.02 25.14 208.89 0.00 208.89 7.63 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 19.92 306.81 14.86 109.08 0.00 109.08 7.52 100.00
sde2 0.00 0.00 0.00 137.00 0.00 20.70 309.50 18.88 134.31 0.00 134.31 7.30 100.00
sdg2 0.00 0.00 0.00 132.00 0.00 20.12 312.12 20.76 138.30 0.00 138.30 7.58 100.00
sdh2 0.00 0.00 0.00 136.00 0.00 19.25 289.88 19.87 146.32 0.00 146.32 7.35 100.00
sdj2 0.00 0.00 0.00 126.00 0.00 18.00 292.51 27.12 215.59 0.00 215.59 7.94 100.00
sdn2 0.00 0.00 0.00 137.00 0.00 20.71 309.56 26.40 181.93 0.00 181.93 7.30 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 20.60 340.19 13.71 106.74 0.00 106.74 8.06 100.00
sde2 0.00 0.00 0.00 125.00 0.00 18.30 299.90 23.15 160.32 0.00 160.32 8.00 100.00
sdg2 0.00 0.00 0.00 138.00 0.00 23.80 353.15 18.50 139.48 0.00 139.48 7.25 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 20.55 336.67 16.03 106.85 0.00 106.85 8.00 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 22.91 358.12 21.31 180.98 0.00 180.98 7.63 100.00
sdn2 0.00 0.00 0.00 125.00 0.00 22.41 367.10 18.65 171.81 0.00 171.81 8.00 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 129.00 0.00 21.90 347.73 22.50 142.57 0.00 142.57 7.75 100.00
sde2 0.00 0.00 0.00 127.00 0.00 22.00 354.80 24.86 222.55 0.00 222.55 7.87 100.00
sdg2 0.00 0.00 0.00 124.00 0.00 20.28 334.98 16.75 117.90 0.00 117.90 8.06 100.00
sdh2 0.00 0.00 0.00 121.00 0.00 22.36 378.44 23.90 173.55 0.00 173.55 8.26 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 23.67 348.76 23.89 143.80 0.00 143.80 7.19 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 22.96 348.36 21.79 147.91 0.00 147.91 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 22.92 361.04 26.48 212.83 0.00 212.83 7.69 100.00
sde2 0.00 0.00 0.00 128.00 0.00 19.62 313.91 20.42 141.97 0.00 141.97 7.81 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 22.57 366.90 15.86 148.06 0.00 148.06 7.94 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 23.84 364.30 19.11 178.21 0.00 178.21 7.46 100.00
sdj2 0.00 0.00 0.00 137.00 0.00 25.60 382.68 25.86 209.34 0.00 209.34 7.30 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 24.74 380.91 25.47 179.97 0.00 179.97 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 22.30 351.25 20.60 157.26 0.00 157.26 7.69 100.00
sde2 0.00 0.00 0.00 128.00 0.00 22.53 360.53 20.55 183.94 0.00 183.94 7.81 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 18.63 305.30 24.34 119.78 0.00 119.78 8.00 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 23.23 363.15 14.36 116.24 0.00 116.24 7.63 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 23.44 355.56 24.26 174.79 0.00 174.79 7.41 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 22.69 331.85 18.58 150.49 0.00 150.49 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 139.00 0.00 22.84 336.56 18.09 142.19 0.00 142.19 7.19 100.00
sde2 0.00 0.00 0.00 138.00 0.00 19.89 295.22 36.87 167.36 0.00 167.36 7.25 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 20.73 324.10 40.60 289.40 0.00 289.40 7.63 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 21.91 359.00 13.23 97.09 0.00 97.09 8.00 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 24.60 362.39 20.06 157.29 0.00 157.29 7.19 100.00
sdn2 0.00 0.00 0.00 132.00 0.00 20.94 324.93 16.37 137.58 0.00 137.58 7.58 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 235.00 0.00 5.87 51.17 103.06 207.00 0.00 207.00 4.07 95.60
sde2 0.00 0.00 0.00 192.00 0.00 6.74 71.90 152.61 512.35 0.00 512.35 5.21 100.00
sdg2 0.00 0.00 0.00 218.00 0.00 4.16 39.04 156.58 455.38 0.00 455.38 4.59 100.00
sdh2 0.00 0.00 0.00 203.00 0.00 4.75 47.90 82.16 234.82 0.00 234.82 3.59 72.80
sdj2 0.00 0.00 0.00 230.00 0.00 4.82 42.88 141.34 369.46 0.00 369.46 4.35 100.00
sdn2 0.00 0.00 0.00 230.00 0.00 2.55 22.75 118.69 238.23 0.00 238.23 4.03 92.80
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 169.00 0.00 1.38 16.67 143.32 833.54 0.00 833.54 5.92 100.00
sde2 0.00 0.00 0.00 156.00 0.00 1.31 17.22 144.32 948.62 0.00 948.62 6.41 100.00
sdg2 0.00 0.00 0.00 161.00 0.00 1.49 18.93 144.51 937.14 0.00 937.14 6.21 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.25 14.90 142.82 683.56 0.00 683.56 5.81 100.00
sdj2 0.00 0.00 0.00 160.00 0.00 1.19 15.17 144.75 886.42 0.00 886.42 6.25 100.00
sdn2 0.00 0.00 0.00 163.00 0.00 1.31 16.50 143.37 890.31 0.00 890.31 6.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 157.00 0.00 1.13 14.74 143.41 880.05 0.00 880.05 6.37 100.00
sde2 0.00 0.00 0.00 169.00 0.00 1.29 15.69 143.50 888.43 0.00 888.43 5.92 100.00
sdg2 0.00 0.00 0.00 156.00 0.00 1.16 15.17 144.12 888.87 0.00 888.87 6.41 100.00
sdh2 0.00 0.00 0.00 175.00 0.00 1.38 16.19 145.04 839.02 0.00 839.02 5.71 100.00
sdj2 0.00 0.00 0.00 168.00 0.00 1.36 16.61 144.13 854.86 0.00 854.86 5.95 100.00
sdn2 0.00 0.00 0.00 169.00 0.00 1.23 14.92 143.80 844.40 0.00 844.40 5.92 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 162.00 0.00 1.18 14.88 143.35 892.52 0.00 892.52 6.17 100.00
sde2 0.00 0.00 0.00 172.00 0.00 1.35 16.06 143.32 839.35 0.00 839.35 5.81 100.00
sdg2 0.00 0.00 0.00 175.00 0.00 1.32 15.40 144.12 895.04 0.00 895.04 5.71 100.00
sdh2 0.00 0.00 0.00 174.00 0.00 1.30 15.29 143.37 847.06 0.00 847.06 5.75 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.29 15.88 142.64 871.33 0.00 871.33 5.99 100.00
sdn2 0.00 0.00 0.00 158.00 0.00 1.24 16.01 143.30 895.80 0.00 895.80 6.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 170.00 0.00 1.29 15.59 143.31 873.79 0.00 873.79 5.88 100.00
sde2 0.00 0.00 0.00 165.00 0.00 1.23 15.30 143.82 847.47 0.00 847.47 6.06 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.24 15.35 143.69 842.06 0.00 842.06 6.06 100.00
sdh2 0.00 0.00 0.00 173.00 0.00 1.37 16.20 143.52 811.28 0.00 811.28 5.78 100.00
sdj2 0.00 0.00 0.00 166.00 0.00 1.28 15.83 143.26 844.96 0.00 844.96 6.02 100.00
sdn2 0.00 0.00 0.00 175.00 0.00 1.24 14.55 143.60 853.78 0.00 853.78 5.71 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.21 14.86 143.81 843.11 0.00 843.11 5.99 100.00
sde2 0.00 0.00 0.00 171.00 0.00 1.33 15.88 143.94 851.95 0.00 851.95 5.85 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 1.15 14.39 143.40 876.63 0.00 876.63 6.10 100.00
sdh2 0.00 0.00 0.00 165.00 0.00 1.20 14.84 143.43 844.19 0.00 844.19 6.06 100.00
sdj2 0.00 0.00 0.00 168.00 0.00 1.31 16.02 143.20 860.36 0.00 860.36 5.95 100.00
sdn2 0.00 0.00 0.00 170.00 0.00 1.29 15.54 142.74 845.20 0.00 845.20 5.88 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 175.00 0.00 1.33 15.61 144.33 844.41 0.00 844.41 5.71 100.00
sde2 0.00 0.00 0.00 173.00 0.00 1.34 15.86 144.42 840.76 0.00 840.76 5.78 100.00
sdg2 0.00 0.00 0.00 167.00 0.00 1.28 15.69 143.60 849.01 0.00 849.01 5.99 100.00
sdh2 0.00 0.00 0.00 175.00 0.00 1.32 15.43 142.81 847.66 0.00 847.66 5.71 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.32 16.16 142.77 831.88 0.00 831.88 5.99 100.00
sdn2 0.00 0.00 0.00 171.00 0.00 1.27 15.23 143.68 821.19 0.00 821.19 5.85 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.26 15.44 143.05 831.26 0.00 831.26 5.99 100.00
sde2 0.00 0.00 0.00 165.00 0.00 1.28 15.90 144.38 853.79 0.00 853.79 6.06 100.00
sdg2 0.00 0.00 0.00 159.00 0.00 1.18 15.22 143.77 897.96 0.00 897.96 6.29 100.00
sdh2 0.00 0.00 0.00 175.00 0.00 1.29 15.05 144.00 814.88 0.00 814.88 5.71 100.00
sdj2 0.00 0.00 0.00 176.00 0.00 1.41 16.41 142.88 846.61 0.00 846.61 5.68 100.00
sdn2 0.00 0.00 0.00 165.00 0.00 1.28 15.83 144.53 841.94 0.00 841.94 6.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 180.00 0.00 1.36 15.44 144.60 826.89 0.00 826.89 5.56 100.00
sde2 0.00 0.00 0.00 157.00 0.00 1.14 14.82 144.60 923.64 0.00 923.64 6.37 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.33 16.23 144.09 876.67 0.00 876.67 5.95 100.00
sdh2 0.00 0.00 0.00 170.00 0.00 1.26 15.22 143.77 832.00 0.00 832.00 5.88 100.00
sdj2 0.00 0.00 0.00 170.00 0.00 1.30 15.64 144.33 828.02 0.00 828.02 5.88 100.00
sdn2 0.00 0.00 0.00 164.00 0.00 1.24 15.43 143.46 891.68 0.00 891.68 6.10 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 166.00 0.00 1.43 17.70 144.70 855.20 0.00 855.20 6.02 100.00
sde2 0.00 0.00 0.00 174.00 0.00 1.32 15.52 143.92 840.83 0.00 840.83 5.75 100.00
sdg2 0.00 0.00 0.00 160.00 0.00 1.56 19.98 146.22 897.23 0.00 897.23 6.25 100.00
sdh2 0.00 0.00 0.00 169.00 0.00 1.27 15.38 142.31 862.34 0.00 862.34 5.92 100.00
sdj2 0.00 0.00 0.00 159.00 0.00 1.24 15.93 143.89 895.25 0.00 895.25 6.29 100.00
sdn2 0.00 0.00 0.00 158.00 0.00 1.30 16.81 144.32 910.18 0.00 910.18 6.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 155.00 0.00 1.17 15.42 144.08 890.37 0.00 890.37 6.45 100.00
sde2 0.00 0.00 0.00 162.00 0.00 1.28 16.22 143.89 854.74 0.00 854.74 6.17 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 1.48 18.43 145.50 890.12 0.00 890.12 6.10 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.32 15.74 143.64 812.30 0.00 812.30 5.81 100.00
sdj2 0.00 0.00 0.00 155.00 0.00 1.15 15.21 127.28 904.80 0.00 904.80 6.45 100.00
sdn2 0.00 0.00 0.00 161.00 0.00 1.21 15.38 144.12 894.53 0.00 894.53 6.21 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 160.00 0.00 1.53 19.54 75.40 883.05 0.00 883.05 6.25 100.00
sde2 0.00 0.00 0.00 157.00 0.00 1.23 15.98 83.17 920.00 0.00 920.00 6.37 100.00
sdg2 0.00 0.00 0.00 158.00 0.00 1.47 19.09 133.11 907.37 0.00 907.37 6.33 100.00
sdh2 0.00 0.00 0.00 151.00 0.00 2.24 30.32 114.38 905.25 0.00 905.25 6.62 100.00
sdj2 0.00 0.00 0.00 110.00 0.00 5.87 109.22 27.18 713.89 0.00 713.89 8.40 92.40
sdn2 0.00 0.00 0.00 151.00 0.00 2.46 33.41 113.25 926.75 0.00 926.75 6.62 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 108.00 0.00 12.70 240.75 6.38 66.85 0.00 66.85 8.89 96.00
sde2 0.00 0.00 0.00 109.00 0.00 15.68 294.57 8.53 117.36 0.00 117.36 9.17 100.00
sdg2 0.00 0.00 0.00 114.00 0.00 10.09 181.34 49.16 838.11 0.00 838.11 8.77 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 14.87 232.49 24.92 475.57 0.00 475.57 7.63 100.00
sdj2 0.00 0.00 0.00 87.00 0.00 11.75 276.48 6.42 72.28 0.00 72.28 10.34 90.00
sdn2 0.00 0.00 0.00 134.00 0.00 15.30 233.80 26.14 457.43 0.00 457.43 7.46 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 120.00 0.00 16.35 279.06 13.84 105.87 0.00 105.87 8.33 100.00
sde2 0.00 0.00 0.00 123.00 0.00 18.44 307.09 11.53 82.21 0.00 82.21 8.13 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 21.84 326.53 25.71 209.43 0.00 209.43 7.30 100.00
sdh2 0.00 0.00 0.00 117.00 0.00 16.53 289.37 9.82 95.76 0.00 95.76 8.55 100.00
sdj2 0.00 0.00 0.00 114.00 0.00 16.69 299.84 11.28 70.53 0.00 70.53 8.77 100.00
sdn2 0.00 0.00 0.00 123.00 0.00 17.91 298.26 11.86 109.17 0.00 109.17 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 122.00 0.00 15.18 254.89 14.70 123.05 0.00 123.05 8.20 100.00
sde2 0.00 0.00 0.00 120.00 0.00 16.95 289.26 20.96 179.17 0.00 179.17 8.33 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 21.09 324.74 17.87 147.04 0.00 147.04 7.52 100.00
sdh2 0.00 0.00 0.00 117.00 0.00 16.84 294.83 8.98 80.62 0.00 80.62 8.55 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 17.90 281.98 21.13 167.60 0.00 167.60 7.69 100.00
sdn2 0.00 0.00 0.00 129.00 0.00 16.46 261.27 20.90 126.60 0.00 126.60 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 111.00 0.00 16.55 305.30 8.42 67.39 0.00 67.39 8.97 99.60
sde2 0.00 0.00 0.00 128.00 0.00 19.25 307.99 10.14 71.50 0.00 71.50 7.53 96.40
sdg2 0.00 0.00 0.00 138.00 0.00 21.49 318.86 19.97 136.26 0.00 136.26 7.25 100.00
sdh2 0.00 0.00 0.00 95.00 0.00 13.87 298.98 7.38 61.85 0.00 61.85 8.97 85.20
sdj2 0.00 0.00 0.00 120.00 0.00 19.65 335.43 17.14 111.03 0.00 111.03 8.33 100.00
sdn2 0.00 0.00 0.00 122.00 0.00 21.12 354.47 19.40 189.57 0.00 189.57 8.20 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 19.95 321.72 21.35 162.08 0.00 162.08 7.87 100.00
sde2 0.00 0.00 0.00 126.00 0.00 20.08 326.37 11.58 93.81 0.00 93.81 7.94 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 20.44 305.55 25.14 191.82 0.00 191.82 7.30 100.00
sdh2 0.00 0.00 0.00 132.00 0.00 22.20 344.42 15.10 119.67 0.00 119.67 7.58 100.00
sdj2 0.00 0.00 0.00 151.00 0.00 24.49 332.19 23.51 166.09 0.00 166.09 6.62 100.00
sdn2 0.00 0.00 0.00 126.00 0.00 20.38 331.17 21.48 179.21 0.00 179.21 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 114.00 0.00 17.97 322.88 20.84 175.82 0.00 175.82 8.77 100.00
sde2 0.00 0.00 0.00 121.00 0.00 17.34 293.56 25.05 168.46 0.00 168.46 8.26 100.00
sdg2 0.00 0.00 0.00 132.00 0.00 22.81 353.89 22.14 163.91 0.00 163.91 7.58 100.00
sdh2 0.00 0.00 0.00 124.00 0.00 17.14 283.11 20.20 143.42 0.00 143.42 8.06 100.00
sdj2 0.00 0.00 0.00 126.00 0.00 21.37 347.42 24.62 168.54 0.00 168.54 7.94 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 18.90 302.39 13.70 105.25 0.00 105.25 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 112.00 0.00 14.49 265.04 9.79 121.79 0.00 121.79 8.61 96.40
sde2 0.00 0.00 0.00 116.00 0.00 16.39 289.37 13.00 168.69 0.00 168.69 8.52 98.80
sdg2 0.00 0.00 0.00 117.00 0.00 13.65 238.98 10.67 106.74 0.00 106.74 8.48 99.20
sdh2 0.00 0.00 0.00 117.00 0.00 14.12 247.15 16.20 171.25 0.00 171.25 8.55 100.00
sdj2 0.00 0.00 0.00 124.00 0.00 18.96 313.12 13.74 173.87 0.00 173.87 8.06 100.00
sdn2 0.00 0.00 0.00 104.00 0.00 11.74 231.26 7.85 90.42 0.00 90.42 8.96 93.20
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 21.48 335.79 18.64 130.41 0.00 130.41 7.63 100.00
sde2 0.00 0.00 0.00 138.00 0.00 24.47 363.14 20.56 121.39 0.00 121.39 7.25 100.00
sdg2 0.00 0.00 0.00 142.00 0.00 22.16 319.61 24.78 171.24 0.00 171.24 7.04 100.00
sdh2 0.00 0.00 0.00 138.00 0.00 23.38 346.99 18.99 124.32 0.00 124.32 7.25 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 21.91 342.52 17.48 120.03 0.00 120.03 7.63 100.00
sdn2 0.00 0.00 0.00 131.00 0.00 20.06 313.59 19.30 128.64 0.00 128.64 7.63 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 20.12 314.57 22.90 157.53 0.00 157.53 7.63 100.00
sde2 0.00 0.00 0.00 134.00 0.00 24.09 368.18 24.81 186.12 0.00 186.12 7.46 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 24.69 380.17 24.49 177.35 0.00 177.35 7.52 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 20.23 309.23 18.08 136.75 0.00 136.75 7.46 100.00
sdj2 0.00 0.00 0.00 129.00 0.00 22.04 349.96 15.90 136.81 0.00 136.81 7.75 100.00
sdn2 0.00 0.00 0.00 138.00 0.00 23.18 344.02 20.24 156.96 0.00 156.96 7.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 21.72 355.94 19.09 164.64 0.00 164.64 8.00 100.00
sde2 0.00 0.00 0.00 132.00 0.00 22.62 351.03 18.78 146.45 0.00 146.45 7.58 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 24.40 359.55 18.16 133.67 0.00 133.67 7.19 100.00
sdh2 0.00 0.00 0.00 135.00 0.00 21.90 332.30 17.35 128.00 0.00 128.00 7.41 100.00
sdj2 0.00 0.00 0.00 121.00 0.00 19.49 329.92 15.23 95.01 0.00 95.01 8.26 100.00
sdn2 0.00 0.00 0.00 125.00 0.00 22.66 371.24 15.10 92.80 0.00 92.80 8.00 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 122.00 0.00 17.67 296.61 12.27 97.02 0.00 97.02 8.20 100.00
sde2 0.00 0.00 0.00 126.00 0.00 19.95 324.25 20.96 171.43 0.00 171.43 7.94 100.00
sdg2 0.00 0.00 0.00 121.00 0.00 19.73 333.97 13.30 99.01 0.00 99.01 8.26 100.00
sdh2 0.00 0.00 0.00 122.00 0.00 20.32 341.16 13.16 94.13 0.00 94.13 8.20 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 22.75 335.16 16.14 129.15 0.00 129.15 7.19 100.00
sdn2 0.00 0.00 0.00 127.00 0.00 20.49 330.39 17.32 165.70 0.00 165.70 7.87 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 21.36 352.86 16.82 141.90 0.00 141.90 8.06 100.00
sde2 0.00 0.00 0.00 130.00 0.00 24.30 382.80 18.67 133.35 0.00 133.35 7.69 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 24.24 357.18 19.49 158.56 0.00 158.56 7.19 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 21.03 323.76 16.81 128.99 0.00 128.99 7.52 100.00
sdj2 0.00 0.00 0.00 129.00 0.00 26.23 416.47 18.73 150.88 0.00 150.88 7.75 100.00
sdn2 0.00 0.00 0.00 116.00 0.00 17.75 313.38 10.48 93.14 0.00 93.14 8.62 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 153.47 0.00 27.24 363.55 19.30 114.27 0.00 114.27 6.45 99.01
sde2 0.00 0.00 0.00 152.48 0.00 25.34 340.34 32.23 206.81 0.00 206.81 6.49 99.01
sdg2 0.00 0.00 0.00 138.61 0.00 24.66 364.31 22.64 137.60 0.00 137.60 7.14 99.01
sdh2 0.00 0.00 0.00 135.64 0.00 23.54 355.45 26.34 193.17 0.00 193.17 7.30 99.01
sdj2 0.00 0.00 0.00 144.55 0.00 24.14 342.07 18.65 125.70 0.00 125.70 6.85 99.01
sdn2 0.00 0.00 0.00 136.63 0.00 21.96 329.21 20.08 133.59 0.00 133.59 7.25 99.01
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 24.08 370.78 21.81 163.19 0.00 163.19 7.52 100.00
sde2 0.00 0.00 0.00 129.00 0.00 23.02 365.52 23.00 192.99 0.00 192.99 7.75 100.00
sdg2 0.00 0.00 0.00 122.00 0.00 18.27 306.76 16.29 150.92 0.00 150.92 8.20 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 17.62 288.74 14.01 115.33 0.00 115.33 8.00 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 20.50 328.01 16.00 105.00 0.00 105.00 7.81 100.00
sdn2 0.00 0.00 0.00 119.00 0.00 18.57 319.67 10.79 89.95 0.00 89.95 8.10 96.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 20.96 327.62 24.22 160.34 0.00 160.34 7.63 100.00
sde2 0.00 0.00 0.00 141.00 0.00 25.12 364.90 15.71 120.54 0.00 120.54 7.09 100.00
sdg2 0.00 0.00 0.00 146.00 0.00 24.03 337.11 25.96 156.77 0.00 156.77 6.85 100.00
sdh2 0.00 0.00 0.00 124.00 0.00 19.40 320.40 23.64 180.58 0.00 180.58 8.06 100.00
sdj2 0.00 0.00 0.00 141.00 0.00 20.44 296.88 15.81 128.62 0.00 128.62 7.09 100.00
sdn2 0.00 0.00 0.00 129.00 0.00 21.12 335.24 18.50 150.91 0.00 150.91 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 22.22 350.11 27.73 228.40 0.00 228.40 7.69 100.00
sde2 0.00 0.00 0.00 131.00 0.00 22.09 345.28 15.73 122.63 0.00 122.63 7.63 100.00
sdg2 0.00 0.00 0.00 125.00 0.00 21.93 359.25 27.52 240.77 0.00 240.77 8.00 100.00
sdh2 0.00 0.00 0.00 142.00 0.00 22.06 318.18 20.26 148.70 0.00 148.70 7.04 100.00
sdj2 0.00 0.00 0.00 137.00 0.00 21.08 315.11 23.84 164.23 0.00 164.23 7.30 100.00
sdn2 0.00 0.00 0.00 126.00 0.00 19.82 322.17 14.64 102.79 0.00 102.79 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 22.17 343.95 19.51 173.03 0.00 173.03 7.58 100.00
sde2 0.00 0.00 0.00 126.00 0.00 21.33 346.75 13.14 109.27 0.00 109.27 7.94 100.00
sdg2 0.00 0.00 0.00 141.00 0.00 23.29 338.27 24.79 168.26 0.00 168.26 7.09 100.00
sdh2 0.00 0.00 0.00 118.00 0.00 16.16 280.50 10.02 107.19 0.00 107.19 8.47 100.00
sdj2 0.00 0.00 0.00 125.00 0.00 20.85 341.53 18.76 175.74 0.00 175.74 8.00 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 21.33 328.53 14.52 132.45 0.00 132.45 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 139.00 0.00 22.95 338.17 22.07 143.94 0.00 143.94 7.19 100.00
sde2 0.00 0.00 0.00 120.00 0.00 20.01 341.48 13.72 112.43 0.00 112.43 8.33 100.00
sdg2 0.00 0.00 0.00 141.00 0.00 21.26 308.78 25.58 173.59 0.00 173.59 7.09 100.00
sdh2 0.00 0.00 0.00 120.00 0.00 17.99 307.04 10.20 81.30 0.00 81.30 8.23 98.80
sdj2 0.00 0.00 0.00 138.00 0.00 21.28 315.83 22.92 148.41 0.00 148.41 7.25 100.00
sdn2 0.00 0.00 0.00 123.00 0.00 21.12 351.74 9.91 74.63 0.00 74.63 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 244.00 0.00 9.32 78.23 93.64 157.31 0.00 157.31 4.10 100.00
sde2 0.00 0.00 0.00 228.00 0.00 6.15 55.28 155.60 357.86 0.00 357.86 4.39 100.00
sdg2 0.00 0.00 0.00 178.00 0.00 9.48 109.11 157.76 530.00 0.00 530.00 5.62 100.00
sdh2 0.00 0.00 0.00 195.00 0.00 8.25 86.62 73.51 188.76 0.00 188.76 4.14 80.80
sdj2 0.00 0.00 0.00 186.00 0.00 7.94 87.44 138.41 353.10 0.00 353.10 5.38 100.00
sdn2 0.00 0.00 0.00 218.00 0.00 7.67 72.03 111.87 257.78 0.00 257.78 4.59 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 166.00 0.00 1.29 15.90 144.03 847.54 0.00 847.54 6.02 100.00
sde2 0.00 0.00 0.00 157.00 0.00 1.46 19.10 143.65 989.35 0.00 989.35 6.37 100.00
sdg2 0.00 0.00 0.00 154.00 0.00 1.71 22.77 143.80 965.71 0.00 965.71 6.49 100.00
sdh2 0.00 0.00 0.00 177.00 0.00 1.40 16.23 143.33 657.56 0.00 657.56 5.65 100.00
sdj2 0.00 0.00 0.00 168.00 0.00 1.60 19.57 144.65 943.07 0.00 943.07 5.95 100.00
sdn2 0.00 0.00 0.00 162.00 0.00 1.25 15.81 144.64 866.25 0.00 866.25 6.17 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.22 15.34 143.23 863.31 0.00 863.31 6.13 100.00
sde2 0.00 0.00 0.00 170.00 0.00 1.31 15.81 144.67 847.88 0.00 847.88 5.88 100.00
sdg2 0.00 0.00 0.00 160.00 0.00 1.57 20.12 145.82 891.00 0.00 891.00 6.25 100.00
sdh2 0.00 0.00 0.00 159.00 0.00 1.19 15.36 143.06 885.84 0.00 885.84 6.29 100.00
sdj2 0.00 0.00 0.00 171.00 0.00 1.28 15.30 143.94 839.93 0.00 839.93 5.85 100.00
sdn2 0.00 0.00 0.00 167.00 0.00 1.27 15.59 143.54 857.39 0.00 857.39 5.99 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.22 15.01 143.03 870.90 0.00 870.90 5.99 100.00
sde2 0.00 0.00 0.00 168.00 0.00 1.28 15.58 143.80 879.21 0.00 879.21 5.95 100.00
sdg2 0.00 0.00 0.00 169.00 0.00 1.31 15.88 144.26 884.50 0.00 884.50 5.92 100.00
sdh2 0.00 0.00 0.00 179.00 0.00 1.38 15.75 142.63 853.65 0.00 853.65 5.59 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.32 16.23 143.15 861.17 0.00 861.17 5.99 100.00
sdn2 0.00 0.00 0.00 169.00 0.00 1.27 15.39 143.40 860.02 0.00 860.02 5.92 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 170.00 0.00 1.29 15.53 142.48 857.74 0.00 857.74 5.88 100.00
sde2 0.00 0.00 0.00 156.00 0.00 1.16 15.29 144.18 863.41 0.00 863.41 6.41 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.45 17.94 145.14 881.48 0.00 881.48 6.06 100.00
sdh2 0.00 0.00 0.00 156.00 0.00 1.18 15.51 144.25 866.67 0.00 866.67 6.41 100.00
sdj2 0.00 0.00 0.00 165.00 0.00 1.31 16.31 144.00 860.73 0.00 860.73 6.06 100.00
sdn2 0.00 0.00 0.00 171.00 0.00 1.26 15.13 143.54 843.91 0.00 843.91 5.85 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 168.00 0.00 1.21 14.74 143.67 852.69 0.00 852.69 5.95 100.00
sde2 0.00 0.00 0.00 166.00 0.00 1.29 15.96 144.96 911.52 0.00 911.52 6.02 100.00
sdg2 0.00 0.00 0.00 162.00 0.00 1.36 17.22 143.71 874.25 0.00 874.25 6.17 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.31 15.55 143.20 856.81 0.00 856.81 5.81 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.28 15.67 144.22 873.37 0.00 873.37 5.99 100.00
sdn2 0.00 0.00 0.00 157.00 0.00 1.15 14.98 143.45 859.24 0.00 859.24 6.37 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.27 15.53 145.04 850.30 0.00 850.30 5.99 100.00
sde2 0.00 0.00 0.00 164.00 0.00 1.62 20.27 143.38 896.46 0.00 896.46 6.10 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.32 16.40 145.16 869.53 0.00 869.53 6.06 100.00
sdh2 0.00 0.00 0.00 179.00 0.00 1.40 16.06 143.59 827.46 0.00 827.46 5.59 100.00
sdj2 0.00 0.00 0.00 160.00 0.00 1.23 15.71 143.74 883.55 0.00 883.55 6.25 100.00
sdn2 0.00 0.00 0.00 168.00 0.00 1.27 15.45 142.92 885.95 0.00 885.95 5.95 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 173.00 0.00 1.82 21.56 143.99 848.53 0.00 848.53 5.78 100.00
sde2 0.00 0.00 0.00 158.00 0.00 1.21 15.65 144.22 863.09 0.00 863.09 6.33 100.00
sdg2 0.00 0.00 0.00 160.00 0.00 1.49 19.12 145.12 892.58 0.00 892.58 6.25 100.00
sdh2 0.00 0.00 0.00 167.00 0.00 1.35 16.54 144.20 840.55 0.00 840.55 5.99 100.00
sdj2 0.00 0.00 0.00 164.00 0.00 1.22 15.27 144.47 889.56 0.00 889.56 6.10 100.00
sdn2 0.00 0.00 0.00 162.00 0.00 1.18 14.93 144.01 868.91 0.00 868.91 6.17 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 165.00 0.00 1.22 15.13 144.36 853.89 0.00 853.89 6.06 100.00
sde2 0.00 0.00 0.00 169.00 0.00 1.31 15.85 144.02 890.91 0.00 890.91 5.92 100.00
sdg2 0.00 0.00 0.00 170.00 0.00 1.62 19.46 145.28 892.31 0.00 892.31 5.88 100.00
sdh2 0.00 0.00 0.00 163.00 0.00 1.71 21.45 144.01 866.23 0.00 866.23 6.13 100.00
sdj2 0.00 0.00 0.00 164.00 0.00 1.51 18.91 144.52 872.32 0.00 872.32 6.10 100.00
sdn2 0.00 0.00 0.00 163.00 0.00 1.20 15.03 143.24 889.94 0.00 889.94 6.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.49 18.57 143.07 860.54 0.00 860.54 6.10 100.00
sde2 0.00 0.00 0.00 168.00 0.00 1.30 15.88 143.25 836.00 0.00 836.00 5.95 100.00
sdg2 0.00 0.00 0.00 161.00 0.00 1.26 16.03 143.35 863.38 0.00 863.38 6.21 100.00
sdh2 0.00 0.00 0.00 162.00 0.00 1.19 15.07 144.23 908.07 0.00 908.07 6.17 100.00
sdj2 0.00 0.00 0.00 172.00 0.00 1.35 16.09 143.14 873.07 0.00 873.07 5.81 100.00
sdn2 0.00 0.00 0.00 161.00 0.00 1.23 15.66 143.50 890.41 0.00 890.41 6.21 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 169.00 0.00 1.28 15.54 143.86 883.64 0.00 883.64 5.92 100.00
sde2 0.00 0.00 0.00 155.00 0.00 1.37 18.06 143.13 890.40 0.00 890.40 6.45 100.00
sdg2 0.00 0.00 0.00 165.00 0.00 1.80 22.38 147.24 877.33 0.00 877.33 6.06 100.00
sdh2 0.00 0.00 0.00 161.00 0.00 1.36 17.28 144.04 863.50 0.00 863.50 6.21 100.00
sdj2 0.00 0.00 0.00 156.00 0.00 1.36 17.81 144.90 873.15 0.00 873.15 6.41 100.00
sdn2 0.00 0.00 0.00 173.00 0.00 1.31 15.50 143.08 843.21 0.00 843.21 5.78 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 151.00 0.00 1.13 15.37 113.30 887.44 0.00 887.44 6.62 100.00
sde2 0.00 0.00 0.00 135.00 0.00 2.80 42.53 58.67 939.44 0.00 939.44 7.41 100.00
sdg2 0.00 0.00 0.00 162.00 0.00 1.84 23.30 145.23 921.06 0.00 921.06 6.17 100.00
sdh2 0.00 0.00 0.00 158.00 0.00 1.57 20.36 97.75 919.49 0.00 919.49 6.33 100.00
sdj2 0.00 0.00 0.00 159.00 0.00 1.47 18.95 140.53 902.36 0.00 902.36 6.29 100.00
sdn2 0.00 0.00 0.00 159.00 0.00 1.18 15.16 132.00 872.60 0.00 872.60 6.29 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 13.89 215.45 23.08 469.15 0.00 469.15 7.58 100.00
sde2 0.00 0.00 0.00 94.00 0.00 14.45 314.83 4.38 49.53 0.00 49.53 9.28 87.20
sdg2 0.00 0.00 0.00 134.00 0.00 2.53 38.73 111.47 936.96 0.00 936.96 7.46 100.00
sdh2 0.00 0.00 0.00 108.00 0.00 14.37 272.52 7.69 239.93 0.00 239.93 9.00 97.20
sdj2 0.00 0.00 0.00 139.00 0.00 8.78 129.42 58.04 816.89 0.00 816.89 7.19 100.00
sdn2 0.00 0.00 0.00 136.00 0.00 10.80 162.67 45.19 712.18 0.00 712.18 7.35 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 89.00 0.00 13.76 316.63 5.53 65.08 0.00 65.08 9.17 81.60
sde2 0.00 0.00 0.00 64.00 0.00 8.89 284.41 3.43 52.31 0.00 52.31 10.31 66.00
sdg2 0.00 0.00 0.00 111.00 0.00 16.65 307.17 39.92 781.37 0.00 781.37 9.01 100.00
sdh2 0.00 0.00 0.00 89.00 0.00 12.09 278.27 4.75 54.11 0.00 54.11 9.35 83.20
sdj2 0.00 0.00 0.00 115.00 0.00 15.94 283.84 8.62 140.87 0.00 140.87 8.28 95.20
sdn2 0.00 0.00 0.00 111.00 0.00 15.68 289.39 8.64 93.48 0.00 93.48 9.01 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 120.00 0.00 18.69 318.97 12.07 96.93 0.00 96.93 8.33 100.00
sde2 0.00 0.00 0.00 121.00 0.00 19.93 337.30 12.68 93.55 0.00 93.55 8.26 100.00
sdg2 0.00 0.00 0.00 134.00 0.00 25.13 384.09 28.76 216.57 0.00 216.57 7.46 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 16.80 279.76 13.74 103.06 0.00 103.06 8.07 99.20
sdj2 0.00 0.00 0.00 119.00 0.00 19.06 327.94 15.70 116.40 0.00 116.40 8.40 100.00
sdn2 0.00 0.00 0.00 121.00 0.00 20.00 338.59 10.45 86.35 0.00 86.35 8.26 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 21.94 362.31 13.08 101.26 0.00 101.26 8.06 100.00
sde2 0.00 0.00 0.00 125.00 0.00 20.86 341.74 10.70 92.13 0.00 92.13 8.00 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 25.77 385.16 14.80 115.12 0.00 115.12 7.30 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 20.42 334.64 16.43 121.73 0.00 121.73 8.00 100.00
sdj2 0.00 0.00 0.00 121.00 0.00 21.65 366.37 15.73 137.82 0.00 137.82 8.26 100.00
sdn2 0.00 0.00 0.00 100.00 0.00 15.64 320.32 7.65 63.08 0.00 63.08 8.84 88.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 132.00 0.00 22.47 348.58 24.06 152.82 0.00 152.82 7.58 100.00
sde2 0.00 0.00 0.00 125.00 0.00 20.35 333.42 19.30 122.37 0.00 122.37 8.00 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 21.28 332.70 18.11 140.27 0.00 140.27 7.63 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 22.21 347.20 24.90 181.86 0.00 181.86 7.63 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 23.21 371.38 14.58 115.88 0.00 115.88 7.81 100.00
sdn2 0.00 0.00 0.00 134.00 0.00 23.75 362.94 18.09 143.73 0.00 143.73 7.46 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 135.00 0.00 21.58 327.39 22.92 181.19 0.00 181.19 7.41 100.00
sde2 0.00 0.00 0.00 144.00 0.00 23.67 336.67 15.69 128.53 0.00 128.53 6.94 100.00
sdg2 0.00 0.00 0.00 135.00 0.00 23.00 348.90 23.66 149.81 0.00 149.81 7.41 100.00
sdh2 0.00 0.00 0.00 136.00 0.00 21.26 320.18 16.81 120.97 0.00 120.97 7.35 100.00
sdj2 0.00 0.00 0.00 126.00 0.00 19.70 320.14 13.37 96.63 0.00 96.63 7.94 100.00
sdn2 0.00 0.00 0.00 137.00 0.00 21.84 326.53 17.44 100.35 0.00 100.35 7.30 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 20.41 314.22 24.14 196.18 0.00 196.18 7.52 100.00
sde2 0.00 0.00 0.00 132.00 0.00 23.26 360.83 25.71 173.85 0.00 173.85 7.58 100.00
sdg2 0.00 0.00 0.00 144.00 0.00 25.59 364.01 24.16 188.47 0.00 188.47 6.94 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 20.01 322.73 24.52 174.20 0.00 174.20 7.87 100.00
sdj2 0.00 0.00 0.00 124.00 0.00 18.39 303.65 14.68 107.35 0.00 107.35 8.06 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 22.24 342.47 22.97 153.95 0.00 153.95 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 19.63 321.55 13.48 104.35 0.00 104.35 8.00 100.00
sde2 0.00 0.00 0.00 134.00 0.00 21.78 332.84 17.39 146.90 0.00 146.90 7.46 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 18.24 291.90 14.47 124.56 0.00 124.56 7.81 100.00
sdh2 0.00 0.00 0.00 136.00 0.00 22.57 339.90 23.06 204.53 0.00 204.53 7.35 100.00
sdj2 0.00 0.00 0.00 140.00 0.00 23.44 342.83 23.97 174.97 0.00 174.97 7.14 100.00
sdn2 0.00 0.00 0.00 120.00 0.00 20.76 354.38 22.57 223.73 0.00 223.73 8.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 117.00 0.00 17.82 311.92 14.80 130.56 0.00 130.56 8.55 100.00
sde2 0.00 0.00 0.00 124.00 0.00 20.49 338.35 12.40 109.81 0.00 109.81 8.06 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 18.50 300.71 9.49 78.51 0.00 78.51 7.94 100.00
sdh2 0.00 0.00 0.00 132.00 0.00 23.43 363.50 18.98 137.64 0.00 137.64 7.58 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 23.19 351.78 18.54 145.07 0.00 145.07 7.41 100.00
sdn2 0.00 0.00 0.00 131.00 0.00 22.64 353.88 16.26 137.68 0.00 137.68 7.63 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 22.42 358.77 14.86 125.16 0.00 125.16 7.81 100.00
sde2 0.00 0.00 0.00 126.00 0.00 22.10 359.14 12.58 106.54 0.00 106.54 7.94 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 21.87 364.11 10.49 78.44 0.00 78.44 8.13 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 22.49 374.52 17.26 153.24 0.00 153.24 8.13 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 22.25 347.83 13.07 111.24 0.00 111.24 7.63 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 21.87 349.88 19.87 138.09 0.00 138.09 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 23.24 366.05 17.30 126.68 0.00 126.68 7.69 100.00
sde2 0.00 0.00 0.00 138.00 0.00 24.28 360.37 26.01 164.75 0.00 164.75 7.25 100.00
sdg2 0.00 0.00 0.00 141.00 0.00 24.43 354.91 24.36 156.17 0.00 156.17 7.09 100.00
sdh2 0.00 0.00 0.00 128.00 0.00 21.15 338.46 20.04 151.88 0.00 151.88 7.81 100.00
sdj2 0.00 0.00 0.00 129.00 0.00 19.00 301.64 19.56 116.71 0.00 116.71 7.75 100.00
sdn2 0.00 0.00 0.00 124.00 0.00 17.68 292.01 19.63 128.45 0.00 128.45 8.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 17.36 279.91 18.42 119.53 0.00 119.53 7.87 100.00
sde2 0.00 0.00 0.00 132.00 0.00 20.87 323.83 19.28 156.97 0.00 156.97 7.58 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 17.78 278.01 23.02 170.17 0.00 170.17 7.63 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 19.22 295.89 17.52 139.70 0.00 139.70 7.52 100.00
sdj2 0.00 0.00 0.00 136.00 0.00 21.49 323.68 23.54 161.21 0.00 161.21 7.35 100.00
sdn2 0.00 0.00 0.00 122.00 0.00 20.19 338.97 19.84 209.93 0.00 209.93 8.20 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 122.00 0.00 18.87 316.70 14.82 148.43 0.00 148.43 8.20 100.00
sde2 0.00 0.00 0.00 125.00 0.00 19.98 327.42 14.68 119.42 0.00 119.42 8.00 100.00
sdg2 0.00 0.00 0.00 134.00 0.00 20.56 314.30 21.51 178.90 0.00 178.90 7.46 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 18.84 313.69 11.14 79.84 0.00 79.84 8.13 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 21.44 332.72 18.75 174.03 0.00 174.03 7.58 100.00
sdn2 0.00 0.00 0.00 132.00 0.00 22.32 346.34 12.68 98.45 0.00 98.45 7.58 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 122.00 0.00 20.20 339.08 11.02 83.74 0.00 83.74 7.93 96.80
sde2 0.00 0.00 0.00 136.00 0.00 21.81 328.46 19.50 121.26 0.00 121.26 7.35 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 21.86 322.12 21.30 130.94 0.00 130.94 7.19 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 21.64 354.54 16.13 135.23 0.00 135.23 8.00 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 22.35 349.39 16.32 126.72 0.00 126.72 7.63 100.00
sdn2 0.00 0.00 0.00 129.00 0.00 22.13 351.39 13.68 95.81 0.00 95.81 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 19.94 321.59 22.55 166.71 0.00 166.71 7.87 100.00
sde2 0.00 0.00 0.00 133.00 0.00 20.81 320.38 21.61 170.74 0.00 170.74 7.52 100.00
sdg2 0.00 0.00 0.00 129.00 0.00 22.27 353.49 16.39 150.91 0.00 150.91 7.75 100.00
sdh2 0.00 0.00 0.00 130.00 0.00 20.04 315.78 23.56 141.20 0.00 141.20 7.69 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 20.39 318.73 17.03 126.44 0.00 126.44 7.63 100.00
sdn2 0.00 0.00 0.00 129.00 0.00 19.44 308.57 17.68 127.10 0.00 127.10 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 136.00 0.00 19.39 292.06 22.77 178.82 0.00 178.82 7.35 100.00
sde2 0.00 0.00 0.00 131.00 0.00 19.46 304.26 18.30 161.22 0.00 161.22 7.63 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 20.17 297.22 16.26 110.65 0.00 110.65 7.19 100.00
sdh2 0.00 0.00 0.00 140.00 0.00 22.02 322.14 19.00 163.46 0.00 163.46 7.14 100.00
sdj2 0.00 0.00 0.00 138.00 0.00 21.10 313.18 16.88 124.35 0.00 124.35 7.25 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 21.50 338.72 18.32 140.55 0.00 140.55 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 20.66 333.10 13.87 116.16 0.00 116.16 7.87 100.00
sde2 0.00 0.00 0.00 131.00 0.00 21.12 330.24 13.50 92.58 0.00 92.58 7.63 100.00
sdg2 0.00 0.00 0.00 135.00 0.00 22.86 346.77 17.78 146.64 0.00 146.64 7.41 100.00
sdh2 0.00 0.00 0.00 132.00 0.00 22.67 351.77 14.85 122.94 0.00 122.94 7.58 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 21.54 344.69 14.44 126.00 0.00 126.00 7.81 100.00
sdn2 0.00 0.00 0.00 144.00 0.00 22.18 315.51 23.68 170.08 0.00 170.08 6.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 135.00 0.00 21.63 328.07 17.10 106.52 0.00 106.52 7.41 100.00
sde2 0.00 0.00 0.00 135.00 0.00 22.56 342.24 17.97 132.33 0.00 132.33 7.41 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 18.49 295.80 11.43 76.25 0.00 76.25 7.81 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 17.15 276.54 11.87 89.39 0.00 89.39 7.87 100.00
sdj2 0.00 0.00 0.00 127.00 0.00 20.32 327.65 14.27 92.13 0.00 92.13 7.87 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 22.06 322.75 27.05 179.66 0.00 179.66 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 201.00 0.00 8.59 87.48 81.78 154.33 0.00 154.33 4.38 88.00
sde2 0.00 0.00 0.00 230.00 0.00 5.40 48.05 138.86 337.72 0.00 337.72 4.35 100.00
sdg2 0.00 0.00 0.00 229.00 0.00 5.74 51.37 136.34 340.49 0.00 340.49 4.37 100.00
sdh2 0.00 0.00 0.00 161.00 0.00 4.72 60.09 60.46 196.55 0.00 196.55 3.90 62.80
sdj2 0.00 0.00 0.00 259.00 0.00 7.60 60.09 117.65 231.86 0.00 231.86 3.88 100.40
sdn2 0.00 0.00 0.00 206.00 0.00 7.05 70.06 98.43 201.98 0.00 201.98 4.87 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.28 15.94 143.11 802.02 0.00 802.02 6.10 100.00
sde2 0.00 0.00 0.00 164.00 0.00 1.24 15.44 143.90 866.17 0.00 866.17 6.10 100.00
sdg2 0.00 0.00 0.00 158.00 0.00 1.32 17.16 144.00 900.05 0.00 900.05 6.33 100.00
sdh2 0.00 0.00 0.00 201.00 0.00 1.53 15.60 143.35 552.74 0.00 552.74 4.98 100.00
sdj2 0.00 0.00 0.00 163.00 0.00 1.26 15.88 144.58 870.04 0.00 870.04 6.13 100.00
sdn2 0.00 0.00 0.00 160.00 0.00 1.28 16.43 143.97 883.25 0.00 883.25 6.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 159.00 0.00 1.23 15.82 143.60 900.10 0.00 900.10 6.29 100.00
sde2 0.00 0.00 0.00 178.00 0.00 1.42 16.31 142.87 843.28 0.00 843.28 5.62 100.00
sdg2 0.00 0.00 0.00 169.00 0.00 1.30 15.80 143.02 881.80 0.00 881.80 5.92 100.00
sdh2 0.00 0.00 0.00 169.00 0.00 1.20 14.59 142.64 861.14 0.00 861.14 5.89 99.60
sdj2 0.00 0.00 0.00 164.00 0.00 1.19 14.89 142.84 860.32 0.00 860.32 6.07 99.60
sdn2 0.00 0.00 0.00 167.00 0.00 1.23 15.07 143.19 853.63 0.00 853.63 5.96 99.60
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 156.00 0.00 1.14 14.99 143.37 940.08 0.00 940.08 6.41 100.00
sde2 0.00 0.00 0.00 168.00 0.00 1.26 15.36 144.05 837.57 0.00 837.57 5.95 100.00
sdg2 0.00 0.00 0.00 174.00 0.00 1.36 16.02 143.47 814.11 0.00 814.11 5.77 100.40
sdh2 0.00 0.00 0.00 170.00 0.00 1.23 14.80 143.80 836.12 0.00 836.12 5.91 100.40
sdj2 0.00 0.00 0.00 184.00 0.00 1.41 15.65 143.22 829.61 0.00 829.61 5.46 100.40
sdn2 0.00 0.00 0.00 148.00 0.00 1.10 15.19 143.09 968.14 0.00 968.14 6.78 100.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 170.00 0.00 1.26 15.21 144.12 853.39 0.00 853.39 5.91 100.40
sde2 0.00 0.00 0.00 168.00 0.00 1.29 15.75 144.56 849.26 0.00 849.26 5.98 100.40
sdg2 0.00 0.00 0.00 163.00 0.00 1.19 14.95 144.07 828.34 0.00 828.34 6.13 100.00
sdh2 0.00 0.00 0.00 165.00 0.00 1.28 15.88 143.55 864.02 0.00 864.02 6.06 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.22 15.01 143.68 819.78 0.00 819.78 5.99 100.00
sdn2 0.00 0.00 0.00 170.00 0.00 1.31 15.84 144.24 867.84 0.00 867.84 5.88 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 168.00 0.00 1.19 14.49 143.68 850.93 0.00 850.93 5.95 100.00
sde2 0.00 0.00 0.00 164.00 0.00 1.24 15.43 144.43 874.59 0.00 874.59 6.10 100.00
sdg2 0.00 0.00 0.00 171.00 0.00 1.32 15.87 144.76 868.63 0.00 868.63 5.85 100.00
sdh2 0.00 0.00 0.00 164.00 0.00 1.16 14.54 143.69 818.29 0.00 818.29 6.10 100.00
sdj2 0.00 0.00 0.00 162.00 0.00 1.27 16.02 142.64 870.05 0.00 870.05 6.17 100.00
sdn2 0.00 0.00 0.00 173.00 0.00 1.30 15.45 143.29 815.01 0.00 815.01 5.78 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 168.00 0.00 1.28 15.55 143.06 854.19 0.00 854.19 5.95 100.00
sde2 0.00 0.00 0.00 172.00 0.00 1.32 15.69 143.61 857.02 0.00 857.02 5.81 100.00
sdg2 0.00 0.00 0.00 158.00 0.00 1.16 15.01 143.82 889.70 0.00 889.70 6.33 100.00
sdh2 0.00 0.00 0.00 176.00 0.00 1.37 15.98 142.97 896.05 0.00 896.05 5.68 100.00
sdj2 0.00 0.00 0.00 164.00 0.00 1.26 15.77 143.72 866.07 0.00 866.07 6.10 100.00
sdn2 0.00 0.00 0.00 176.00 0.00 1.33 15.53 143.09 833.75 0.00 833.75 5.68 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 172.00 0.00 1.25 14.86 143.19 841.63 0.00 841.63 5.81 100.00
sde2 0.00 0.00 0.00 156.00 0.00 1.22 16.01 144.72 886.10 0.00 886.10 6.41 100.00
sdg2 0.00 0.00 0.00 163.00 0.00 1.26 15.87 144.03 873.30 0.00 873.30 6.13 100.00
sdh2 0.00 0.00 0.00 168.00 0.00 1.26 15.39 144.01 825.36 0.00 825.36 5.95 100.00
sdj2 0.00 0.00 0.00 169.00 0.00 1.24 15.07 143.00 872.85 0.00 872.85 5.92 100.00
sdn2 0.00 0.00 0.00 164.00 0.00 1.29 16.11 143.37 826.02 0.00 826.02 6.10 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.29 15.76 143.35 852.93 0.00 852.93 5.99 100.00
sde2 0.00 0.00 0.00 159.00 0.00 1.19 15.28 143.89 940.20 0.00 940.20 6.29 100.00
sdg2 0.00 0.00 0.00 160.00 0.00 1.18 15.16 143.87 906.83 0.00 906.83 6.25 100.00
sdh2 0.00 0.00 0.00 173.00 0.00 1.29 15.24 144.34 847.65 0.00 847.65 5.78 100.00
sdj2 0.00 0.00 0.00 176.00 0.00 1.34 15.58 142.97 830.57 0.00 830.57 5.68 100.00
sdn2 0.00 0.00 0.00 158.00 0.00 1.48 19.20 145.15 888.61 0.00 888.61 6.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 160.00 0.00 1.17 14.97 142.90 867.30 0.00 867.30 6.23 99.60
sde2 0.00 0.00 0.00 173.00 0.00 1.33 15.70 143.47 861.13 0.00 861.13 5.76 99.60
sdg2 0.00 0.00 0.00 169.00 0.00 1.28 15.56 143.36 884.09 0.00 884.09 5.92 100.00
sdh2 0.00 0.00 0.00 165.00 0.00 1.22 15.19 143.20 858.40 0.00 858.40 6.06 100.00
sdj2 0.00 0.00 0.00 163.00 0.00 1.21 15.23 144.26 833.18 0.00 833.18 6.13 100.00
sdn2 0.00 0.00 0.00 157.00 0.00 1.69 21.99 149.12 960.89 0.00 960.89 6.37 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.22 15.28 145.41 891.78 0.00 891.78 6.13 100.00
sde2 0.00 0.00 0.00 155.00 0.00 1.15 15.14 138.49 862.89 0.00 862.89 6.48 100.40
sdg2 0.00 0.00 0.00 167.00 0.00 1.21 14.86 140.60 860.34 0.00 860.34 5.99 100.00
sdh2 0.00 0.00 0.00 164.00 0.00 1.28 16.02 144.70 858.32 0.00 858.32 6.10 100.00
sdj2 0.00 0.00 0.00 158.00 0.00 1.54 20.01 137.02 921.27 0.00 921.27 6.33 100.00
sdn2 0.00 0.00 0.00 160.00 0.00 2.05 26.30 151.14 927.30 0.00 927.30 6.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 151.00 0.00 3.27 44.31 75.43 909.38 0.00 909.38 6.65 100.40
sde2 0.00 0.00 0.00 133.00 0.00 8.29 127.71 37.12 733.71 0.00 733.71 7.52 100.00
sdg2 0.00 0.00 0.00 144.00 0.00 6.97 99.19 45.90 719.89 0.00 719.89 6.94 100.00
sdh2 0.00 0.00 0.00 141.00 0.00 3.57 51.86 70.13 918.27 0.00 918.27 7.09 100.00
sdj2 0.00 0.00 0.00 134.00 0.00 6.14 93.91 34.60 688.42 0.00 688.42 7.46 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 3.74 54.76 144.51 1018.26 0.00 1018.26 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 23.25 372.02 11.55 115.66 0.00 115.66 7.78 99.60
sde2 0.00 0.00 0.00 103.00 0.00 13.09 260.36 5.48 55.46 0.00 55.46 9.67 99.60
sdg2 0.00 0.00 0.00 98.00 0.00 12.13 253.41 4.02 42.94 0.00 42.94 9.59 94.00
sdh2 0.00 0.00 0.00 120.00 0.00 19.64 335.24 10.83 138.77 0.00 138.77 8.33 100.00
sdj2 0.00 0.00 0.00 98.00 0.00 12.56 262.55 3.72 39.31 0.00 39.31 9.80 96.00
sdn2 0.00 0.00 0.00 123.00 0.00 6.01 100.12 69.97 1038.63 0.00 1038.63 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 83.00 0.00 10.80 266.53 8.06 74.84 0.00 74.84 10.27 85.20
sde2 0.00 0.00 0.00 88.00 0.00 13.98 325.47 7.31 63.64 0.00 63.64 8.68 76.40
sdg2 0.00 0.00 0.00 71.00 0.00 10.13 292.21 6.34 70.70 0.00 70.70 9.24 65.60
sdh2 0.00 0.00 0.00 84.00 0.00 12.44 303.25 5.56 53.52 0.00 53.52 9.00 75.60
sdj2 0.00 0.00 0.00 73.00 0.00 9.33 261.85 8.41 79.62 0.00 79.62 9.81 71.60
sdn2 0.00 0.00 0.00 129.00 0.00 21.00 333.33 23.74 268.68 0.00 268.68 7.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 107.00 0.00 17.06 326.49 11.67 90.65 0.00 90.65 8.97 96.00
sde2 0.00 0.00 0.00 110.00 0.00 17.81 331.58 7.02 66.07 0.00 66.07 8.76 96.40
sdg2 0.00 0.00 0.00 107.00 0.00 16.27 311.50 9.80 69.68 0.00 69.68 7.89 84.40
sdh2 0.00 0.00 0.00 95.00 0.00 13.77 296.94 7.71 89.56 0.00 89.56 9.85 93.60
sdj2 0.00 0.00 0.00 108.00 0.00 18.08 342.78 8.13 80.85 0.00 80.85 8.44 91.20
sdn2 0.00 0.00 0.00 133.00 0.00 24.11 371.25 16.76 138.92 0.00 138.92 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 139.00 0.00 27.30 402.29 22.32 165.24 0.00 165.24 7.19 100.00
sde2 0.00 0.00 0.00 132.00 0.00 22.73 352.66 14.21 104.94 0.00 104.94 7.58 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 22.65 368.08 32.01 235.21 0.00 235.21 7.94 100.00
sdh2 0.00 0.00 0.00 123.00 0.00 21.42 356.63 14.37 96.36 0.00 96.36 8.13 100.00
sdj2 0.00 0.00 0.00 129.00 0.00 23.99 380.83 20.51 172.06 0.00 172.06 7.75 100.00
sdn2 0.00 0.00 0.00 136.00 0.00 26.40 397.53 26.19 193.41 0.00 193.41 7.35 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 121.00 0.00 19.56 331.01 15.46 152.89 0.00 152.89 8.26 100.00
sde2 0.00 0.00 0.00 124.00 0.00 20.41 337.16 11.37 102.10 0.00 102.10 8.03 99.60
sdg2 0.00 0.00 0.00 135.00 0.00 23.03 349.33 21.34 181.33 0.00 181.33 7.41 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 20.64 338.12 16.03 145.31 0.00 145.31 8.00 100.00
sdj2 0.00 0.00 0.00 131.00 0.00 20.38 318.57 17.24 125.25 0.00 125.25 7.63 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 21.17 338.70 19.02 176.75 0.00 176.75 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 21.32 341.09 15.64 115.66 0.00 115.66 7.81 100.00
sde2 0.00 0.00 0.00 135.00 0.00 21.23 322.13 21.82 139.94 0.00 139.94 7.41 100.00
sdg2 0.00 0.00 0.00 134.00 0.00 22.27 340.35 24.96 188.81 0.00 188.81 7.46 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 19.20 309.59 15.53 109.07 0.00 109.07 7.87 100.00
sdj2 0.00 0.00 0.00 119.00 0.00 19.81 340.88 15.91 138.12 0.00 138.12 8.40 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 22.88 360.49 20.41 148.15 0.00 148.15 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 126.00 0.00 20.10 326.79 17.58 114.76 0.00 114.76 7.94 100.00
sde2 0.00 0.00 0.00 140.00 0.00 22.72 332.29 19.54 141.14 0.00 141.14 7.14 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 21.33 318.90 24.51 161.08 0.00 161.08 7.30 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 18.58 290.53 18.99 150.56 0.00 150.56 7.63 100.00
sdj2 0.00 0.00 0.00 125.00 0.00 19.36 317.26 20.31 120.93 0.00 120.93 8.00 100.00
sdn2 0.00 0.00 0.00 123.00 0.00 19.62 326.61 18.84 147.06 0.00 147.06 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 23.53 370.67 19.14 172.15 0.00 172.15 7.69 100.00
sde2 0.00 0.00 0.00 127.00 0.00 21.76 350.83 17.59 144.13 0.00 144.13 7.87 100.00
sdg2 0.00 0.00 0.00 136.00 0.00 21.98 330.99 23.68 182.26 0.00 182.26 7.35 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 20.43 329.43 18.99 138.83 0.00 138.83 7.87 100.00
sdj2 0.00 0.00 0.00 125.00 0.00 20.99 343.91 22.78 207.71 0.00 207.71 8.00 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 22.16 349.18 14.98 121.08 0.00 121.08 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 25.16 393.40 12.80 107.57 0.00 107.57 7.63 100.00
sde2 0.00 0.00 0.00 123.00 0.00 20.77 345.91 14.24 124.46 0.00 124.46 8.13 100.00
sdg2 0.00 0.00 0.00 140.00 0.00 26.31 384.81 19.48 157.49 0.00 157.49 7.14 100.00
sdh2 0.00 0.00 0.00 122.00 0.00 20.67 346.98 13.85 133.54 0.00 133.54 8.20 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 23.45 345.45 20.94 148.29 0.00 148.29 7.19 100.00
sdn2 0.00 0.00 0.00 115.00 0.00 18.47 328.97 10.31 102.40 0.00 102.40 8.70 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 126.00 0.00 21.09 342.83 13.57 100.35 0.00 100.35 7.94 100.00
sde2 0.00 0.00 0.00 137.00 0.00 23.24 347.47 15.82 123.45 0.00 123.45 7.30 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 20.86 326.06 25.16 153.19 0.00 153.19 7.63 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 21.26 348.37 14.59 92.13 0.00 92.13 7.94 99.20
sdj2 0.00 0.00 0.00 133.00 0.00 21.15 325.66 19.87 160.54 0.00 160.54 7.52 100.00
sdn2 0.00 0.00 0.00 119.00 0.00 18.98 326.58 11.96 83.39 0.00 83.39 8.40 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 21.76 348.21 14.98 111.25 0.00 111.25 7.81 100.00
sde2 0.00 0.00 0.00 125.00 0.00 23.12 378.80 15.14 119.90 0.00 119.90 8.00 100.00
sdg2 0.00 0.00 0.00 124.00 0.00 22.80 376.58 24.12 234.55 0.00 234.55 8.06 100.00
sdh2 0.00 0.00 0.00 119.00 0.00 19.69 338.87 16.90 163.97 0.00 163.97 8.40 100.00
sdj2 0.00 0.00 0.00 130.00 0.00 22.26 350.67 12.17 102.98 0.00 102.98 7.69 100.00
sdn2 0.00 0.00 0.00 124.00 0.00 24.27 400.81 12.28 114.94 0.00 114.94 8.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 129.00 0.00 18.52 294.05 15.78 131.60 0.00 131.60 7.75 100.00
sde2 0.00 0.00 0.00 132.00 0.00 22.57 350.15 14.02 92.42 0.00 92.42 7.58 100.00
sdg2 0.00 0.00 0.00 138.00 0.00 22.72 337.22 19.25 148.38 0.00 148.38 7.25 100.00
sdh2 0.00 0.00 0.00 145.00 0.00 24.45 345.30 17.46 118.57 0.00 118.57 6.90 100.00
sdj2 0.00 0.00 0.00 129.00 0.00 21.69 344.27 15.76 104.19 0.00 104.19 7.75 100.00
sdn2 0.00 0.00 0.00 128.00 0.00 21.35 341.54 11.74 89.22 0.00 89.22 7.81 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 20.31 303.61 15.05 95.33 0.00 95.33 7.30 100.00
sde2 0.00 0.00 0.00 121.00 0.00 16.01 271.04 30.63 224.86 0.00 224.86 8.26 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 20.03 299.39 17.17 95.47 0.00 95.47 7.30 100.00
sdh2 0.00 0.00 0.00 135.00 0.00 21.83 331.24 14.56 103.53 0.00 103.53 7.41 100.00
sdj2 0.00 0.00 0.00 132.00 0.00 18.56 288.00 22.70 151.52 0.00 151.52 7.58 100.00
sdn2 0.00 0.00 0.00 134.00 0.00 20.85 318.68 15.72 103.49 0.00 103.49 7.46 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 122.00 0.00 19.41 325.82 12.19 119.41 0.00 119.41 8.20 100.00
sde2 0.00 0.00 0.00 119.00 0.00 18.81 323.77 29.74 247.26 0.00 247.26 8.40 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 17.91 291.09 16.98 144.86 0.00 144.86 7.94 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 19.53 319.97 14.60 104.77 0.00 104.77 8.00 100.00
sdj2 0.00 0.00 0.00 127.00 0.00 19.65 316.83 13.60 142.49 0.00 142.49 7.87 100.00
sdn2 0.00 0.00 0.00 137.00 0.00 22.54 336.98 16.16 127.77 0.00 127.77 7.30 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 122.66 0.00 19.40 323.93 9.37 77.61 0.00 77.61 8.13 99.70
sde2 0.00 0.00 0.00 133.99 0.00 20.86 318.91 23.25 183.56 0.00 183.56 7.47 100.10
sdg2 0.00 0.00 0.00 133.00 0.00 23.90 368.03 21.57 173.08 0.00 173.08 7.53 100.10
sdh2 0.00 0.00 0.00 131.53 0.00 20.84 324.50 19.02 146.02 0.00 146.02 7.61 100.10
sdj2 0.00 0.00 0.00 130.05 0.00 21.10 332.35 23.12 174.09 0.00 174.09 7.70 100.10
sdn2 0.00 0.00 0.00 132.02 0.00 22.78 353.45 15.45 118.67 0.00 118.67 7.58 100.10
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 109.00 0.00 16.88 317.24 13.12 91.38 0.00 91.38 8.99 98.00
sde2 0.00 0.00 0.00 146.00 0.00 28.57 400.82 29.45 201.51 0.00 201.51 6.85 100.00
sdg2 0.00 0.00 0.00 126.00 0.00 21.55 350.21 19.90 109.71 0.00 109.71 7.94 100.00
sdh2 0.00 0.00 0.00 129.00 0.00 20.11 319.31 22.53 161.33 0.00 161.33 7.75 100.00
sdj2 0.00 0.00 0.00 127.00 0.00 20.80 335.40 17.21 106.33 0.00 106.33 7.87 100.00
sdn2 0.00 0.00 0.00 120.00 0.00 17.81 303.97 14.40 115.50 0.00 115.50 8.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 20.03 322.95 11.10 105.45 0.00 105.45 7.87 100.00
sde2 0.00 0.00 0.00 137.00 0.00 21.35 319.21 19.65 152.61 0.00 152.61 7.30 100.00
sdg2 0.00 0.00 0.00 132.00 0.00 21.55 334.30 18.34 176.91 0.00 176.91 7.58 100.00
sdh2 0.00 0.00 0.00 125.00 0.00 20.11 329.50 19.36 184.54 0.00 184.54 8.00 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 20.44 301.09 22.60 175.19 0.00 175.19 7.19 100.00
sdn2 0.00 0.00 0.00 123.00 0.00 18.52 308.43 17.99 125.69 0.00 125.69 8.13 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 205.00 0.00 8.42 84.15 74.84 181.48 0.00 181.48 4.60 94.40
sde2 0.00 0.00 0.00 199.00 0.00 7.33 75.49 136.38 326.15 0.00 326.15 5.03 100.00
sdg2 0.00 0.00 0.00 199.00 0.00 12.06 124.14 145.18 314.21 0.00 314.21 5.03 100.00
sdh2 0.00 0.00 0.00 185.00 0.00 8.50 94.12 56.48 131.03 0.00 131.03 4.39 81.20
sdj2 0.00 0.00 0.00 217.00 0.00 10.46 98.72 120.91 310.82 0.00 310.82 4.61 100.00
sdn2 0.00 0.00 0.00 201.00 0.00 10.23 104.21 102.52 231.40 0.00 231.40 4.98 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 177.00 0.00 1.33 15.40 144.34 666.89 0.00 666.89 5.65 100.00
sde2 0.00 0.00 0.00 176.00 0.00 1.51 17.60 145.00 908.36 0.00 908.36 5.68 100.00
sdg2 0.00 0.00 0.00 177.00 0.00 1.43 16.56 144.45 957.79 0.00 957.79 5.65 100.00
sdh2 0.00 0.00 0.00 185.00 0.00 1.46 16.13 143.90 594.12 0.00 594.12 5.41 100.00
sdj2 0.00 0.00 0.00 164.00 0.00 1.46 18.24 145.97 870.80 0.00 870.80 6.10 100.00
sdn2 0.00 0.00 0.00 158.00 0.00 1.13 14.71 144.21 874.61 0.00 874.61 6.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 169.00 0.00 1.51 18.28 144.16 862.32 0.00 862.32 5.92 100.00
sde2 0.00 0.00 0.00 182.00 0.00 1.58 17.73 144.01 784.77 0.00 784.77 5.49 100.00
sdg2 0.00 0.00 0.00 161.00 0.00 1.81 22.97 143.87 865.34 0.00 865.34 6.21 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.33 15.83 143.08 867.74 0.00 867.74 5.81 100.00
sdj2 0.00 0.00 0.00 162.00 0.00 1.27 16.04 144.11 863.46 0.00 863.46 6.17 100.00
sdn2 0.00 0.00 0.00 176.00 0.00 1.33 15.42 143.71 847.27 0.00 847.27 5.68 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 161.00 0.00 1.18 14.98 142.70 897.14 0.00 897.14 6.21 100.00
sde2 0.00 0.00 0.00 165.00 0.00 1.45 17.94 145.21 854.42 0.00 854.42 6.06 100.00
sdg2 0.00 0.00 0.00 172.00 0.00 1.28 15.29 145.14 839.26 0.00 839.26 5.81 100.00
sdh2 0.00 0.00 0.00 177.00 0.00 1.37 15.83 143.05 832.52 0.00 832.52 5.65 100.00
sdj2 0.00 0.00 0.00 155.00 0.00 1.13 14.95 143.53 905.63 0.00 905.63 6.45 100.00
sdn2 0.00 0.00 0.00 169.00 0.00 1.29 15.60 142.48 845.75 0.00 845.75 5.92 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.22 15.23 143.45 861.78 0.00 861.78 6.10 100.00
sde2 0.00 0.00 0.00 172.00 0.00 1.93 23.03 145.78 848.33 0.00 848.33 5.81 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 1.57 19.56 143.99 871.05 0.00 871.05 6.10 100.00
sdh2 0.00 0.00 0.00 162.00 0.00 1.21 15.33 142.78 843.19 0.00 843.19 6.17 100.00
sdj2 0.00 0.00 0.00 167.00 0.00 1.29 15.84 143.20 878.68 0.00 878.68 5.99 100.00
sdn2 0.00 0.00 0.00 174.00 0.00 1.34 15.79 143.76 819.08 0.00 819.08 5.75 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.19 14.88 143.46 886.24 0.00 886.24 6.10 100.00
sde2 0.00 0.00 0.00 174.00 0.00 1.72 20.26 145.01 843.93 0.00 843.93 5.75 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 1.25 15.66 143.68 883.46 0.00 883.46 6.10 100.00
sdh2 0.00 0.00 0.00 174.00 0.00 1.29 15.14 143.42 840.97 0.00 840.97 5.75 100.00
sdj2 0.00 0.00 0.00 169.00 0.00 1.35 16.33 144.41 864.36 0.00 864.36 5.92 100.00
sdn2 0.00 0.00 0.00 173.00 0.00 1.32 15.64 142.58 830.22 0.00 830.22 5.78 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 159.00 0.00 1.19 15.28 143.34 880.53 0.00 880.53 6.29 100.00
sde2 0.00 0.00 0.00 157.00 0.00 1.56 20.35 144.29 895.01 0.00 895.01 6.37 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.37 16.73 144.92 875.07 0.00 875.07 5.95 100.00
sdh2 0.00 0.00 0.00 178.00 0.00 1.49 17.19 144.44 819.26 0.00 819.26 5.62 100.00
sdj2 0.00 0.00 0.00 176.00 0.00 1.40 16.30 144.43 830.07 0.00 830.07 5.68 100.00
sdn2 0.00 0.00 0.00 164.00 0.00 1.21 15.09 142.84 863.32 0.00 863.32 6.10 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 168.00 0.00 1.26 15.40 143.68 885.31 0.00 885.31 5.95 100.00
sde2 0.00 0.00 0.00 169.00 0.00 1.59 19.22 145.20 881.68 0.00 881.68 5.92 100.00
sdg2 0.00 0.00 0.00 180.00 0.00 1.45 16.53 144.10 818.69 0.00 818.69 5.56 100.00
sdh2 0.00 0.00 0.00 160.00 0.00 1.17 14.93 143.95 856.98 0.00 856.98 6.25 100.00
sdj2 0.00 0.00 0.00 169.00 0.00 1.29 15.65 144.01 838.56 0.00 838.56 5.92 100.00
sdn2 0.00 0.00 0.00 160.00 0.00 1.19 15.29 143.67 877.02 0.00 877.02 6.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.18 14.88 143.22 861.72 0.00 861.72 6.13 100.00
sde2 0.00 0.00 0.00 175.00 0.00 1.75 20.51 146.37 847.41 0.00 847.41 5.71 100.00
sdg2 0.00 0.00 0.00 177.00 0.00 1.46 16.86 145.19 821.47 0.00 821.47 5.65 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.34 15.92 143.21 855.44 0.00 855.44 5.81 100.00
sdj2 0.00 0.00 0.00 179.00 0.00 1.41 16.12 144.78 839.51 0.00 839.51 5.59 100.00
sdn2 0.00 0.00 0.00 164.00 0.00 1.28 16.04 143.52 884.49 0.00 884.49 6.10 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 170.00 0.00 1.23 14.85 143.35 850.26 0.00 850.26 5.88 100.00
sde2 0.00 0.00 0.00 169.00 0.00 1.80 21.76 144.36 826.27 0.00 826.27 5.92 100.00
sdg2 0.00 0.00 0.00 161.00 0.00 1.32 16.76 143.87 861.14 0.00 861.14 6.21 100.00
sdh2 0.00 0.00 0.00 172.00 0.00 1.29 15.36 144.10 840.28 0.00 840.28 5.81 100.00
sdj2 0.00 0.00 0.00 149.00 0.00 1.17 16.05 143.47 865.56 0.00 865.56 6.71 100.00
sdn2 0.00 0.00 0.00 168.00 0.00 1.26 15.42 143.91 863.93 0.00 863.93 5.95 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.24 15.43 142.88 861.76 0.00 861.76 6.10 100.00
sde2 0.00 0.00 0.00 139.00 0.00 1.11 16.40 141.89 934.88 0.00 934.88 7.19 100.00
sdg2 0.00 0.00 0.00 155.00 0.00 1.63 21.57 140.15 880.23 0.00 880.23 6.45 100.00
sdh2 0.00 0.00 0.00 169.00 0.00 1.26 15.27 143.63 844.33 0.00 844.33 5.92 100.00
sdj2 0.00 0.00 0.00 163.00 0.00 1.74 21.88 144.30 927.56 0.00 927.56 6.13 100.00
sdn2 0.00 0.00 0.00 167.00 0.00 1.38 16.95 144.75 847.93 0.00 847.93 5.99 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 168.00 0.00 1.29 15.68 127.16 869.24 0.00 869.24 5.95 100.00
sde2 0.00 0.00 0.00 127.00 0.00 2.05 32.98 64.29 1069.64 0.00 1069.64 7.87 100.00
sdg2 0.00 0.00 0.00 124.00 0.00 1.39 23.01 61.90 1012.68 0.00 1012.68 8.06 100.00
sdh2 0.00 0.00 0.00 158.00 0.00 1.22 15.85 93.05 884.25 0.00 884.25 6.33 100.00
sdj2 0.00 0.00 0.00 139.00 0.00 0.88 12.92 84.48 920.69 0.00 920.69 7.19 100.00
sdn2 0.00 0.00 0.00 148.00 0.00 1.19 16.45 122.63 926.05 0.00 926.05 6.76 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 12.14 200.50 46.42 701.52 0.00 701.52 8.06 100.00
sde2 0.00 0.00 0.00 116.00 0.00 17.06 301.15 16.56 176.55 0.00 176.55 8.62 100.00
sdg2 0.00 0.00 0.00 114.00 0.00 17.22 309.35 11.33 126.84 0.00 126.84 8.77 100.00
sdh2 0.00 0.00 0.00 122.00 0.00 18.60 312.19 11.53 215.61 0.00 215.61 8.20 100.00
sdj2 0.00 0.00 0.00 111.00 0.00 17.29 319.01 21.27 379.28 0.00 379.28 9.01 100.00
sdn2 0.00 0.00 0.00 124.00 0.00 12.16 200.78 39.51 720.65 0.00 720.65 8.06 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 107.00 0.00 15.61 298.78 11.66 143.51 0.00 143.51 7.70 82.40
sde2 0.00 0.00 0.00 86.00 0.00 13.13 312.59 6.62 93.67 0.00 93.67 8.23 70.80
sdg2 0.00 0.00 0.00 89.00 0.00 11.39 262.12 6.22 76.76 0.00 76.76 8.76 78.00
sdh2 0.00 0.00 0.00 95.00 0.00 11.95 257.65 6.83 78.69 0.00 78.69 8.08 76.80
sdj2 0.00 0.00 0.00 87.00 0.00 8.65 203.59 4.51 63.72 0.00 63.72 9.20 80.00
sdn2 0.00 0.00 0.00 92.00 0.00 13.63 303.48 7.00 85.78 0.00 85.78 8.35 76.80
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 104.00 0.00 16.89 332.53 14.41 114.85 0.00 114.85 8.54 88.80
sde2 0.00 0.00 0.00 113.00 0.00 16.02 290.27 14.41 110.19 0.00 110.19 7.79 88.00
sdg2 0.00 0.00 0.00 121.00 0.00 18.89 319.69 19.74 134.28 0.00 134.28 7.24 87.60
sdh2 0.00 0.00 0.00 110.00 0.00 16.99 316.28 19.65 137.09 0.00 137.09 8.18 90.00
sdj2 0.00 0.00 0.00 114.00 0.00 18.86 338.76 14.66 115.12 0.00 115.12 8.07 92.00
sdn2 0.00 0.00 0.00 109.00 0.00 17.45 327.93 19.40 133.91 0.00 133.91 8.44 92.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 20.08 329.02 15.62 135.90 0.00 135.90 8.00 100.00
sde2 0.00 0.00 0.00 124.00 0.00 19.83 327.52 14.07 107.94 0.00 107.94 8.06 100.00
sdg2 0.00 0.00 0.00 128.00 0.00 20.93 334.92 21.00 182.94 0.00 182.94 7.81 100.00
sdh2 0.00 0.00 0.00 134.00 0.00 18.36 280.64 19.88 167.58 0.00 167.58 7.46 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 22.66 343.79 13.50 101.66 0.00 101.66 7.41 100.00
sdn2 0.00 0.00 0.00 134.00 0.00 21.16 323.39 22.24 167.55 0.00 167.55 7.46 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 21.16 333.28 16.56 107.05 0.00 107.05 7.69 100.00
sde2 0.00 0.00 0.00 123.00 0.00 16.92 281.65 13.72 121.98 0.00 121.98 8.13 100.00
sdg2 0.00 0.00 0.00 137.00 0.00 22.69 339.15 16.60 108.76 0.00 108.76 7.30 100.00
sdh2 0.00 0.00 0.00 132.00 0.00 24.00 372.36 19.20 143.15 0.00 143.15 7.58 100.00
sdj2 0.00 0.00 0.00 127.00 0.00 21.14 340.94 14.06 111.50 0.00 111.50 7.87 100.00
sdn2 0.00 0.00 0.00 132.00 0.00 24.88 385.98 20.97 160.52 0.00 160.52 7.58 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 22.40 352.84 18.16 151.85 0.00 151.85 7.69 100.00
sde2 0.00 0.00 0.00 124.00 0.00 19.45 321.29 10.89 90.32 0.00 90.32 8.06 100.00
sdg2 0.00 0.00 0.00 131.00 0.00 21.87 341.87 14.67 120.15 0.00 120.15 7.63 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 21.70 339.31 22.86 165.80 0.00 165.80 7.63 100.00
sdj2 0.00 0.00 0.00 121.00 0.00 18.62 315.23 14.63 112.89 0.00 112.89 8.26 100.00
sdn2 0.00 0.00 0.00 126.00 0.00 23.65 384.38 17.46 164.13 0.00 164.13 7.94 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 131.00 0.00 20.65 322.82 21.91 154.38 0.00 154.38 7.63 100.00
sde2 0.00 0.00 0.00 125.00 0.00 16.82 275.54 20.80 146.43 0.00 146.43 8.00 100.00
sdg2 0.00 0.00 0.00 127.00 0.00 20.99 338.52 17.15 111.50 0.00 111.50 7.87 100.00
sdh2 0.00 0.00 0.00 135.00 0.00 22.79 345.80 19.46 148.24 0.00 148.24 7.41 100.00
sdj2 0.00 0.00 0.00 126.00 0.00 19.79 321.64 26.44 216.51 0.00 216.51 7.94 100.00
sdn2 0.00 0.00 0.00 127.00 0.00 17.67 284.96 19.05 144.19 0.00 144.19 7.87 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 133.00 0.00 19.76 304.24 22.85 172.87 0.00 172.87 7.52 100.00
sde2 0.00 0.00 0.00 123.00 0.00 18.05 300.55 15.12 142.21 0.00 142.21 8.13 100.00
sdg2 0.00 0.00 0.00 141.00 0.00 21.22 308.21 20.94 162.07 0.00 162.07 7.09 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 23.55 368.24 17.58 142.63 0.00 142.63 7.63 100.00
sdj2 0.00 0.00 0.00 142.00 0.00 23.34 336.56 21.44 150.20 0.00 150.20 7.04 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 20.08 316.40 20.08 140.22 0.00 140.22 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 119.00 0.00 19.10 328.71 22.43 203.93 0.00 203.93 8.40 100.00
sde2 0.00 0.00 0.00 129.00 0.00 20.18 320.35 15.51 118.45 0.00 118.45 7.75 100.00
sdg2 0.00 0.00 0.00 138.00 0.00 21.22 314.97 25.81 180.61 0.00 180.61 7.25 100.00
sdh2 0.00 0.00 0.00 131.00 0.00 19.90 311.09 21.45 147.36 0.00 147.36 7.63 100.00
sdj2 0.00 0.00 0.00 135.00 0.00 22.53 341.81 20.35 145.57 0.00 145.57 7.41 100.00
sdn2 0.00 0.00 0.00 138.00 0.00 20.90 310.14 27.05 202.20 0.00 202.20 7.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 142.00 0.00 23.25 335.39 19.01 139.27 0.00 139.27 7.04 100.00
sde2 0.00 0.00 0.00 124.00 0.00 19.74 326.00 10.58 93.68 0.00 93.68 8.06 100.00
sdg2 0.00 0.00 0.00 142.00 0.00 23.10 333.16 16.31 135.27 0.00 135.27 7.04 100.00
sdh2 0.00 0.00 0.00 127.00 0.00 22.80 367.71 19.38 160.66 0.00 160.66 7.87 100.00
sdj2 0.00 0.00 0.00 138.00 0.00 22.92 340.21 22.46 151.39 0.00 151.39 7.25 100.00
sdn2 0.00 0.00 0.00 133.00 0.00 20.65 318.04 17.01 126.08 0.00 126.08 7.52 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 21.17 338.73 16.80 125.06 0.00 125.06 7.81 100.00
sde2 0.00 0.00 0.00 111.00 0.00 14.80 273.08 8.64 61.77 0.00 61.77 8.54 94.80
sdg2 0.00 0.00 0.00 136.00 0.00 20.01 301.26 17.99 117.76 0.00 117.76 7.35 100.00
sdh2 0.00 0.00 0.00 139.00 0.00 21.03 309.81 17.94 120.83 0.00 120.83 7.19 100.00
sdj2 0.00 0.00 0.00 142.00 0.00 24.53 353.80 22.32 146.25 0.00 146.25 7.04 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 19.87 313.05 19.03 136.58 0.00 136.58 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 128.00 0.00 16.91 270.60 16.37 118.91 0.00 118.91 7.81 100.00
sde2 0.00 0.00 0.00 136.00 0.00 19.87 299.21 13.35 91.15 0.00 91.15 7.35 100.00
sdg2 0.00 0.00 0.00 139.00 0.00 21.52 317.14 19.33 134.62 0.00 134.62 7.19 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 21.70 334.08 26.98 161.50 0.00 161.50 7.52 100.00
sdj2 0.00 0.00 0.00 141.00 0.00 22.14 321.52 21.72 177.65 0.00 177.65 7.09 100.00
sdn2 0.00 0.00 0.00 136.00 0.00 20.10 302.62 22.15 183.15 0.00 183.15 7.35 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 120.00 0.00 17.17 293.04 16.54 139.93 0.00 139.93 8.33 100.00
sde2 0.00 0.00 0.00 134.00 0.00 17.86 272.98 19.15 148.30 0.00 148.30 7.46 100.00
sdg2 0.00 0.00 0.00 133.00 0.00 19.34 297.78 24.44 181.98 0.00 181.98 7.52 100.00
sdh2 0.00 0.00 0.00 144.00 0.00 19.77 281.14 29.93 243.61 0.00 243.61 6.94 100.00
sdj2 0.00 0.00 0.00 138.00 0.00 19.70 292.36 25.62 146.20 0.00 146.20 7.25 100.00
sdn2 0.00 0.00 0.00 130.00 0.00 19.01 299.44 18.52 134.98 0.00 134.98 7.69 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 130.00 0.00 21.52 339.05 14.86 126.31 0.00 126.31 7.69 100.00
sde2 0.00 0.00 0.00 135.00 0.00 20.06 304.36 16.88 133.36 0.00 133.36 7.41 100.00
sdg2 0.00 0.00 0.00 134.00 0.00 21.18 323.66 15.30 119.73 0.00 119.73 7.46 100.00
sdh2 0.00 0.00 0.00 137.00 0.00 22.86 341.66 16.57 141.87 0.00 141.87 7.30 100.00
sdj2 0.00 0.00 0.00 129.00 0.00 22.52 357.46 28.30 246.57 0.00 246.57 7.75 100.00
sdn2 0.00 0.00 0.00 110.00 0.00 16.38 304.92 9.48 101.49 0.00 101.49 8.95 98.40
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 125.00 0.00 24.41 399.95 12.45 100.83 0.00 100.83 8.00 100.00
sde2 0.00 0.00 0.00 111.00 0.00 24.56 453.14 34.68 211.32 0.00 211.32 9.01 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 21.61 359.85 21.72 188.13 0.00 188.13 8.13 100.00
sdh2 0.00 0.00 0.00 122.00 0.00 22.82 383.05 17.68 128.62 0.00 128.62 8.20 100.00
sdj2 0.00 0.00 0.00 123.00 0.00 27.92 464.86 26.06 234.11 0.00 234.11 8.13 100.00
sdn2 0.00 0.00 0.00 135.00 0.00 24.82 376.47 20.24 148.44 0.00 148.44 7.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 124.00 0.00 23.03 380.44 15.48 100.84 0.00 100.84 8.06 100.00
sde2 0.00 0.00 0.00 136.00 0.00 25.10 377.97 20.78 227.38 0.00 227.38 7.35 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 21.86 363.90 11.01 87.38 0.00 87.38 8.13 100.00
sdh2 0.00 0.00 0.00 137.00 0.00 21.15 316.14 20.83 142.13 0.00 142.13 7.30 100.00
sdj2 0.00 0.00 0.00 128.00 0.00 21.84 349.41 14.96 96.22 0.00 96.22 7.81 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 21.13 309.10 26.87 161.26 0.00 161.26 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 21.28 318.07 14.45 124.38 0.00 124.38 7.30 100.00
sde2 0.00 0.00 0.00 142.00 0.00 24.50 353.42 24.04 174.06 0.00 174.06 7.04 100.00
sdg2 0.00 0.00 0.00 134.00 0.00 21.25 324.79 14.09 105.19 0.00 105.19 7.46 100.00
sdh2 0.00 0.00 0.00 135.00 0.00 21.76 330.16 20.23 156.53 0.00 156.53 7.41 100.00
sdj2 0.00 0.00 0.00 136.00 0.00 17.04 256.62 20.95 140.38 0.00 140.38 7.35 100.00
sdn2 0.00 0.00 0.00 140.00 0.00 22.59 330.44 26.16 177.31 0.00 177.31 7.14 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 127.00 0.00 19.50 314.40 17.28 128.28 0.00 128.28 7.87 100.00
sde2 0.00 0.00 0.00 128.00 0.00 22.05 352.80 18.32 138.91 0.00 138.91 7.81 100.00
sdg2 0.00 0.00 0.00 123.00 0.00 17.68 294.38 13.99 110.63 0.00 110.63 8.13 100.00
sdh2 0.00 0.00 0.00 133.00 0.00 21.40 329.51 18.62 134.83 0.00 134.83 7.52 100.00
sdj2 0.00 0.00 0.00 124.00 0.00 18.93 312.63 27.26 234.52 0.00 234.52 8.06 100.00
sdn2 0.00 0.00 0.00 138.00 0.00 22.64 335.97 23.25 201.54 0.00 201.54 7.25 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 204.00 0.00 9.99 100.26 69.12 178.49 0.00 178.49 4.90 100.00
sde2 0.00 0.00 0.00 220.00 0.00 8.15 75.89 122.70 257.64 0.00 257.64 4.55 100.00
sdg2 0.00 0.00 0.00 218.00 0.00 7.99 75.06 121.15 293.38 0.00 293.38 4.59 100.00
sdh2 0.00 0.00 0.00 182.00 0.00 12.36 139.05 52.77 172.99 0.00 172.99 5.38 98.00
sdj2 0.00 0.00 0.00 211.00 0.00 11.80 114.55 109.54 252.66 0.00 252.66 4.74 100.00
sdn2 0.00 0.00 0.00 205.00 0.00 9.56 95.46 86.32 145.35 0.00 145.35 4.88 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 181.00 0.00 1.37 15.50 146.03 608.35 0.00 608.35 5.52 100.00
sde2 0.00 0.00 0.00 151.00 0.00 1.79 24.33 145.48 946.65 0.00 946.65 6.62 100.00
sdg2 0.00 0.00 0.00 156.00 0.00 1.17 15.42 144.05 881.10 0.00 881.10 6.41 100.00
sdh2 0.00 0.00 0.00 217.00 0.00 1.91 18.03 143.89 486.60 0.00 486.60 4.61 100.00
sdj2 0.00 0.00 0.00 154.00 0.00 1.27 16.90 145.25 911.04 0.00 911.04 6.49 100.00
sdn2 0.00 0.00 0.00 156.00 0.00 1.21 15.87 144.93 857.82 0.00 857.82 6.41 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.34 16.85 144.15 937.72 0.00 937.72 6.13 100.00
sde2 0.00 0.00 0.00 172.00 0.00 1.41 16.78 144.55 888.81 0.00 888.81 5.81 100.00
sdg2 0.00 0.00 0.00 155.00 0.00 1.18 15.60 144.16 919.82 0.00 919.82 6.45 100.00
sdh2 0.00 0.00 0.00 173.00 0.00 1.33 15.75 144.08 854.59 0.00 854.59 5.78 100.00
sdj2 0.00 0.00 0.00 172.00 0.00 1.31 15.55 143.68 866.33 0.00 866.33 5.81 100.00
sdn2 0.00 0.00 0.00 182.00 0.00 1.54 17.35 143.65 862.95 0.00 862.95 5.49 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 153.00 0.00 1.18 15.86 144.06 901.73 0.00 901.73 6.54 100.00
sde2 0.00 0.00 0.00 171.00 0.00 1.27 15.21 143.72 854.99 0.00 854.99 5.85 100.00
sdg2 0.00 0.00 0.00 171.00 0.00 1.28 15.35 144.80 885.31 0.00 885.31 5.85 100.00
sdh2 0.00 0.00 0.00 177.00 0.00 1.38 15.98 143.38 821.42 0.00 821.42 5.65 100.00
sdj2 0.00 0.00 0.00 169.00 0.00 1.28 15.54 144.16 842.27 0.00 842.27 5.92 100.00
sdn2 0.00 0.00 0.00 161.00 0.00 1.20 15.30 144.40 859.43 0.00 859.43 6.21 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 167.00 0.00 1.25 15.31 142.84 907.19 0.00 907.19 5.99 100.00
sde2 0.00 0.00 0.00 167.00 0.00 1.29 15.85 143.50 864.62 0.00 864.62 5.99 100.00
sdg2 0.00 0.00 0.00 167.00 0.00 1.20 14.70 142.87 857.99 0.00 857.99 5.99 100.00
sdh2 0.00 0.00 0.00 167.00 0.00 1.26 15.43 144.14 841.89 0.00 841.89 5.99 100.00
sdj2 0.00 0.00 0.00 170.00 0.00 1.28 15.42 145.06 844.40 0.00 844.40 5.88 100.00
sdn2 0.00 0.00 0.00 176.00 0.00 1.31 15.30 144.23 841.00 0.00 841.00 5.68 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.25 15.77 143.69 849.08 0.00 849.08 6.13 100.00
sde2 0.00 0.00 0.00 167.00 0.00 1.25 15.34 144.18 853.39 0.00 853.39 5.99 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.26 15.35 143.69 866.17 0.00 866.17 5.95 100.00
sdh2 0.00 0.00 0.00 169.00 0.00 1.28 15.47 145.31 870.04 0.00 870.04 5.92 100.00
sdj2 0.00 0.00 0.00 165.00 0.00 1.24 15.42 144.10 875.93 0.00 875.93 6.06 100.00
sdn2 0.00 0.00 0.00 167.00 0.00 1.30 15.96 143.79 836.81 0.00 836.81 5.99 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 169.00 0.00 1.46 17.68 143.42 882.46 0.00 882.46 5.92 100.00
sde2 0.00 0.00 0.00 162.00 0.00 1.46 18.40 144.09 876.02 0.00 876.02 6.17 100.00
sdg2 0.00 0.00 0.00 168.00 0.00 1.32 16.14 144.18 838.62 0.00 838.62 5.95 100.00
sdh2 0.00 0.00 0.00 174.00 0.00 1.34 15.79 143.47 823.89 0.00 823.89 5.75 100.00
sdj2 0.00 0.00 0.00 170.00 0.00 1.38 16.56 144.05 858.59 0.00 858.59 5.88 100.00
sdn2 0.00 0.00 0.00 188.00 0.00 1.40 15.30 143.97 810.38 0.00 810.38 5.32 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 166.00 0.00 1.19 14.64 143.00 840.87 0.00 840.87 6.02 100.00
sde2 0.00 0.00 0.00 170.00 0.00 1.30 15.65 144.48 853.48 0.00 853.48 5.88 100.00
sdg2 0.00 0.00 0.00 169.00 0.00 1.29 15.59 144.62 851.03 0.00 851.03 5.92 100.00
sdh2 0.00 0.00 0.00 175.00 0.00 1.36 15.89 143.60 814.49 0.00 814.49 5.71 100.00
sdj2 0.00 0.00 0.00 174.00 0.00 1.36 16.05 142.86 847.75 0.00 847.75 5.75 100.00
sdn2 0.00 0.00 0.00 159.00 0.00 1.23 15.81 145.20 828.45 0.00 828.45 6.29 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 170.00 0.00 1.26 15.21 144.11 867.29 0.00 867.29 5.88 100.00
sde2 0.00 0.00 0.00 164.00 0.00 1.25 15.55 143.70 875.80 0.00 875.80 6.10 100.00
sdg2 0.00 0.00 0.00 166.00 0.00 1.33 16.39 143.40 867.18 0.00 867.18 6.02 100.00
sdh2 0.00 0.00 0.00 177.00 0.00 1.37 15.80 144.01 814.15 0.00 814.15 5.65 100.00
sdj2 0.00 0.00 0.00 168.00 0.00 1.29 15.79 143.82 824.90 0.00 824.90 5.95 100.00
sdn2 0.00 0.00 0.00 159.00 0.00 1.25 16.08 144.34 915.67 0.00 915.67 6.29 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 164.00 0.00 1.29 16.07 143.11 853.44 0.00 853.44 6.10 100.00
sde2 0.00 0.00 0.00 170.00 0.00 1.28 15.44 143.00 850.94 0.00 850.94 5.88 100.00
sdg2 0.00 0.00 0.00 164.00 0.00 1.25 15.61 143.65 839.93 0.00 839.93 6.10 100.00
sdh2 0.00 0.00 0.00 171.00 0.00 1.30 15.59 142.73 849.12 0.00 849.12 5.85 100.00
sdj2 0.00 0.00 0.00 171.00 0.00 1.41 16.87 143.78 831.46 0.00 831.46 5.85 100.00
sdn2 0.00 0.00 0.00 158.00 0.00 1.22 15.81 144.08 909.82 0.00 909.82 6.33 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 170.00 0.00 1.27 15.34 142.97 859.13 0.00 859.13 5.88 100.00
sde2 0.00 0.00 0.00 170.00 0.00 1.30 15.65 139.86 849.20 0.00 849.20 5.88 100.00
sdg2 0.00 0.00 0.00 170.00 0.00 1.42 17.13 143.72 872.59 0.00 872.59 5.88 100.00
sdh2 0.00 0.00 0.00 166.00 0.00 1.34 16.49 144.60 824.51 0.00 824.51 6.02 100.00
sdj2 0.00 0.00 0.00 157.00 0.00 1.35 17.65 143.76 895.16 0.00 895.16 6.37 100.00
sdn2 0.00 0.00 0.00 157.00 0.00 2.01 26.22 144.18 901.96 0.00 901.96 6.37 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 163.00 0.00 1.21 15.14 123.02 865.72 0.00 865.72 6.13 100.00
sde2 0.00 0.00 0.00 110.00 0.00 1.45 26.97 36.21 839.85 0.00 839.85 6.47 71.20
sdg2 0.00 0.00 0.00 165.00 0.00 1.28 15.93 127.84 886.13 0.00 886.13 6.06 100.00
sdh2 0.00 0.00 0.00 168.00 0.00 1.49 18.20 107.71 857.31 0.00 857.31 5.95 100.00
sdj2 0.00 0.00 0.00 160.00 0.00 1.27 16.23 128.00 892.02 0.00 892.02 6.25 100.00
sdn2 0.00 0.00 0.00 154.00 0.00 1.75 23.27 130.14 946.10 0.00 946.10 6.49 100.00
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd2 0.00 0.00 0.00 137.00 0.00 12.05 180.12 20.27 445.99 0.00 445.99 7.30 100.00
sde2 0.00 0.00 0.00 71.00 0.00 9.59 276.51 7.06 61.52 0.00 61.52 8.62 61.20
sdg2 0.00 0.00 0.00 135.00 0.00 9.18 139.30 28.84 507.11 0.00 507.11 7.41 100.00
sdh2 0.00 0.00 0.00 114.00 0.00 11.34 203.67 12.18 367.93 0.00 367.93 8.25 94.00
sdj2 0.00 0.00 0.00 124.00 0.00 7.97 131.59 36.90 691.97 0.00 691.97 8.06 100.00
sdn2 0.00 0.00 0.00 143.00 0.00 9.73 139.36 35.15 577.29 0.00 577.29 6.99 100.00
[-- Attachment #6: Type: text/plain, Size: 121 bytes --]
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: XFS Syncd
2015-06-03 23:18 ` Shrinand Javadekar
@ 2015-06-04 0:35 ` Dave Chinner
2015-06-04 0:58 ` Shrinand Javadekar
2015-06-04 1:25 ` Dave Chinner
0 siblings, 2 replies; 21+ messages in thread
From: Dave Chinner @ 2015-06-04 0:35 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Wed, Jun 03, 2015 at 04:18:20PM -0700, Shrinand Javadekar wrote:
> Here you go!
Thanks!
> /dev/mapper/35000c50062e6a12b-part2 /srv/node/r1 xfs
> rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
> 0 0
.....
> meta-data=/dev/mapper/35000c50062e6a7eb-part2 isize=256 agcount=64, agsize=11446344 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=732566016, imaxpct=5
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal bsize=4096 blocks=357698, version=2
> = sectsz=512 sunit=0 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
Ok, so agcount=64 is unusual, especially for a single disk
filesystem. What was the reason for doing this?
> - Workload causing the problem:
>
> Openstack Swift. This is what it's doing:
>
> 1. A path like /srv/node/r1/objects/1024/eef/tmp already exists.
> /srv/node/r1 is the mount point.
> 2. Creates a tmp file, say tmpfoo, in the path above. Path:
> /srv/node/r1/objects/1024/eef/tmp/tmpfoo.
> 3. Issues a 256KB write into this file.
> 4. Issues an fsync on the file.
> 5. Closes this file.
> 6. Creates another directory named "deadbeef" inside "eef" if it
> doesn't exist. Path /srv/node/r1/objects/1024/eef/deadbeef.
> 7. Moves file tmpfoo into the deadbeef directory using rename().
> /srv/node/r1/objects/1024/eef/tmp/tmpfoo -->
> /srv/node/r1/objects/1024/eef/deadbeef/foo.data
> 8. Does a readdir on /srv/node/r1/objects/1024/eef/deadbeef/
> 9. Iterates over all files obtained in #8 above. Usually #8 gives only one file.
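The nine steps above can be sketched as a short shell sequence (illustrative only; `BASE` is a stand-in for the real mount point, and `dd conv=fsync` stands in for the application's write-then-fsync):

```shell
# Simulate the Swift object-write sequence against a scratch directory.
BASE="${BASE:-/tmp/swift-sim}"                        # stand-in for /srv/node/r1
mkdir -p "$BASE/objects/1024/eef/tmp"                 # step 1: tmp dir already exists
tmpf="$BASE/objects/1024/eef/tmp/tmpfoo"
dd if=/dev/zero of="$tmpf" bs=256k count=1 \
   conv=fsync status=none                             # steps 2-5: 256KB write + fsync + close
mkdir -p "$BASE/objects/1024/eef/deadbeef"            # step 6: create the final dir
mv "$tmpf" "$BASE/objects/1024/eef/deadbeef/foo.data" # step 7: rename into place
ls "$BASE/objects/1024/eef/deadbeef/"                 # steps 8-9: readdir the final dir
```

The tmp-file-then-rename pattern is what generates the directory and allocation metadata traffic discussed in this thread.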
Oh. We've already discussed this problem in a previous thread:
http://oss.sgi.com/archives/xfs/2015-04/msg00256.html
Next time, please make sure you start with a reference to previous
discussions on the same topic.
Specifically, that discussion touched on problems your workload
induces in metadata layout and locality:
http://oss.sgi.com/archives/xfs/2015-04/msg00300.html
And you are using agcount=64 on these machines, so that's going to
cause you all sorts of locality problems, which will translate into
seek bound IO performance....
> - IOStat and vmstat output
> (attached)
I am assuming these are 1 second samples, based on your 18s fast/12s
slow description earlier.
The vmstat shows fast writeback at 150-200MB/s, with no idle time,
anything up to 200 processes in running or blocked state and 20-30%
iowait, followed by idle CPU time with maybe 10 running/blocked
processes, writeback at 15-20MB/s with 70% idle time and 30% iowait.
IOWs, the workload is cyclic - lots of incoming data with lots of
throughput, followed by zero incoming data processing on only small
amounts of writeback.
The iostat shows that when the system is running at 150MB/s, the IO
service time is ~7ms (running ~130 IOPS per drive) and the average
IO size is around 170kB, with a request queue depth of 20-30 IOs.
Device utilisation is 100%, so throughput is seek bound.
When the system is mostly idle, the throughput is essentially
running a random 4k IO write workload - 180 IOPS, request size 4k,
service time 5ms, request queue depth ~140, average wait ~800ms,
device utilisation 100%. Again, seek bound, the only difference is
the IO size.
The vmstat information implies that front end application processing
is stopping for some period of time, but it does not indicate why it
is doing so. When the disks are doing 4k writeback, can you grab
the output of 'echo w > /proc/sysrq-trigger' from dmesg and post the
output? That will tell us if the front end processing is blocked on
the filesystem at all...
> - Trace cmd report
> Too big to attach. Here's a link:
> https://www.dropbox.com/s/3xxe2chsv4fsrv8/trace_report.txt.zip?dl=0
Downloading now.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-06-04 0:35 ` Dave Chinner
@ 2015-06-04 0:58 ` Shrinand Javadekar
2015-06-04 1:55 ` Dave Chinner
2015-06-04 1:25 ` Dave Chinner
1 sibling, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-06-04 0:58 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
Thanks Dave. Please see my responses inline.
On Wed, Jun 3, 2015 at 5:35 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Wed, Jun 03, 2015 at 04:18:20PM -0700, Shrinand Javadekar wrote:
>> Here you go!
>
> Thanks!
>
>> /dev/mapper/35000c50062e6a12b-part2 /srv/node/r1 xfs
>> rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
>> 0 0
> .....
>> meta-data=/dev/mapper/35000c50062e6a7eb-part2 isize=256 agcount=64, agsize=11446344 blks
>> = sectsz=512 attr=2
>> data = bsize=4096 blocks=732566016, imaxpct=5
>> = sunit=0 swidth=0 blks
>> naming =version 2 bsize=4096 ascii-ci=0
>> log =internal bsize=4096 blocks=357698, version=2
>> = sectsz=512 sunit=0 blks, lazy-count=1
>> realtime =none extsz=4096 blocks=0, rtextents=0
>
> Ok, so agcount=64 is unusual, especially for a single disk
> filesystem. What was the reason for doing this?
I read a few articles that recommend increasing the number of AGs,
especially on large disks. I can use the default number of AGs
(4?) and try again.
>
>> - Workload causing the problem:
>>
>> Openstack Swift. This is what it's doing:
>>
>> 1. A path like /srv/node/r1/objects/1024/eef/tmp already exists.
>> /srv/node/r1 is the mount point.
>> 2. Creates a tmp file, say tmpfoo, in the path above. Path:
>> /srv/node/r1/objects/1024/eef/tmp/tmpfoo.
>> 3. Issues a 256KB write into this file.
>> 4. Issues an fsync on the file.
>> 5. Closes this file.
>> 6. Creates another directory named "deadbeef" inside "eef" if it
>> doesn't exist. Path /srv/node/r1/objects/1024/eef/deadbeef.
>> 7. Moves file tmpfoo into the deadbeef directory using rename().
>> /srv/node/r1/objects/1024/eef/tmp/tmpfoo -->
>> /srv/node/r1/objects/1024/eef/deadbeef/foo.data
>> 8. Does a readdir on /srv/node/r1/objects/1024/eef/deadbeef/
>> 9. Iterates over all files obtained in #8 above. Usually #8 gives only one file.
>
> Oh. We've already discussed this problem in a previous thread:
>
> http://oss.sgi.com/archives/xfs/2015-04/msg00256.html
Yes, we touched upon this earlier and found that all files were
getting created in the same AG. We fixed that, and my current
testing includes that fix.
Earlier the tmp file was /srv/node/r1/tmp. By moving it further down
the filesystem hierarchy to /srv/node/r1/objects/1024/eef/tmp, we make
sure there are several tmp directories. I'm told the ideal solution,
using O_TMPFILE and linkat(), will have to be rolled out later, when
there is support for it in Python.
>
> Next time, please make sure you start with a reference to previous
> discussions on the same topic.
Apologies, I will!
>
> Specifically, that discussion touched on problems your workload
> induces in metadata layout and locality:
>
> http://oss.sgi.com/archives/xfs/2015-04/msg00300.html
>
> And you are using agcount=64 on these machines, so that's going to
> cause you all sorts of locality problems, which will translate into
> seek bound IO performance....
>
>> - IOStat and vmstat output
>> (attached)
>
> I am assuming these are 1 second samples, based on your 18s fast/12s
> slow description earlier.
Yes, these are 1-second samples.
>
> The vmstat shows fast writeback at 150-200MB/s, with no idle time,
> anything up to 200 processes in running or blocked state and 20-30%
> iowait, followed by idle CPU time with maybe 10 running/blocked
> processes, writeback at 15-20MB/s with 70% idle time and 30% iowait.
>
> IOWs, the workload is cyclic - lots of incoming data with lots of
> throughput, followed by zero incoming data processing on only small
> amounts of writeback.
My understanding is that the workload is either
a) waiting for issued IOs to complete.
b) not able to issue more IOs because XFS is busy flushing the journal entries.
Is this not true?
>
> The iostat shows that when the system is running at 150MB/s, the IO
> service time is ~7ms (running ~130 IOPS per drive) and the average
> IO size is around 170kB, with a request queue depth of 20-30 IOs.
> Device utilisation is 100%, so throughput is seek bound.
>
> When the system is mostly idle, the throughput is essentially
> running a random 4k IO write workload - 180 IOPS, request size 4k,
> service time 5ms, request queue depth ~140, average wait ~800ms,
> device utilisation 100%. Again, seek bound, the only difference is
> the IO size.
Again, my understanding was that this idle time is because XFS is
busy writing metadata from the journal to its final locations on disk.
>
> The vmstat information implies that front end application processing
> is stopping for some period of time, but it does not indicate why it
> is doing so. When the disks are doing 4k writeback, can you grab
> the output of 'echo w > /proc/sysrq-trigger' from dmesg and post the
> output? That will tell us if the front end processing is blocked on
> the filesystem at all...
Aah.. ok. Will do and get back to you soon.
>
>> - Trace cmd report
>> Too big to attach. Here's a link:
>> https://www.dropbox.com/s/3xxe2chsv4fsrv8/trace_report.txt.zip?dl=0
>
-Shri
* Re: XFS Syncd
2015-06-04 0:35 ` Dave Chinner
2015-06-04 0:58 ` Shrinand Javadekar
@ 2015-06-04 1:25 ` Dave Chinner
2015-06-04 2:03 ` Dave Chinner
1 sibling, 1 reply; 21+ messages in thread
From: Dave Chinner @ 2015-06-04 1:25 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Thu, Jun 04, 2015 at 10:35:47AM +1000, Dave Chinner wrote:
> > - Trace cmd report
> > Too big to attach. Here's a link:
> > https://www.dropbox.com/s/3xxe2chsv4fsrv8/trace_report.txt.zip?dl=0
>
> Downloading now.
AIL pushing is occurring every 30s, yes. Across all filesystems,
roughly 23-25,000 metadata objects are being pushed on each 30s flush.
Think about that for a moment.
You have a write once workload, so inode metadata is journalled and
written only once. Hence if you are creating 1000 files/s, then you
have at least 30,000 inodes to push every 30s.
But that's not actually the big problem. Of the two AIL push events
in the trace, this is how many objects we attempt to push:
$ wc -l t.t
45149 t.t
And this many inodes:
$ grep INODE t.t | wc -l
11512
Now, XFS has inode clustering on writeback and that is active; it is
reducing the number of inode IOs by a factor of roughly 10. So that
means that every 30s, we've only got ~600 IOs across 8 disks
to write back dirty inodes. i.e. less than a second worth of random
IO. That's not the problem we are looking for.
Buffers, OTOH:
$ grep BUF t.t | wc -l
33637
So call it 17,000 every 30 seconds. That requires 17,000 4k IOs.
Across 8 disks at 170 IOPS, that is *exactly* 12.5 seconds worth of
IO.
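The arithmetic above can be sanity-checked with a one-liner (the 17,000 buffer IOs per push and the 8-disk/170-IOPS figures are taken from the analysis in this message):

```shell
# Back-of-envelope check: how long do ~17,000 4k buffer IOs take
# across 8 disks running ~170 IOPS each?
awk 'BEGIN {
    bufs = 17000            # buffer IOs per 30s AIL push (from the trace)
    iops = 8 * 170          # aggregate random-write IOPS across the 8 disks
    printf "seconds of IO per push: %.1f\n", bufs / iops
}'
```

That comes out to 12.5 seconds of IO for every 30-second push window.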
Looks to me like the buffers are mostly inode btree, free space
btree and directory buffers.
Directory buffers, well, that's where increasing the directory block
size might help (e.g. to 8k). That may well reduce the number of
directory buffers by more than a factor of 2 due to the structure of
the directories. Depends on how many files you have in each
directory....
The number of inode and alloc btree buffers can be reduced by
reducing the number of AGs - probably by a factor of 10 by bringing
the AG count down to 4. And, because the active inode and freespace
btree buffers will be hotter, they are more likely just to be
relogged than written back, further reducing IOs.
Indeed, this looks to me like the smoking gun. To allocate a block,
you have to lock the AGF buffer that the allocation is going to take
place in. Problem is, when the xfsaild pushes the AGF buffers to the
writeback queue, they sit there with the buffer locked until the IO
completes.
In the traces, the xfsailds all run at 509385s, and immediately I
see a ~10s gap in the trace where almost no xfs_read_agf() traces
occur. It's not until 509396s that the traces really start to appear
at normal speed again.
Again, reducing the number of AGs will help with this problem,
simply because the AG headers are more likely to be locked or
pinned when the xfsaild sweep runs because they are active rather
than sitting idle waiting for the next operation in that AG to
require allocation....
Remember, a single AG can sustain thousands of allocations every
second - if you are only creating a few thousand files every second,
you don't need tens of AGs to sustain that - the default of 4 AGs
will do that just fine...
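Putting the two suggestions above together, a hypothetical re-make of one of these filesystems might look like the following. This is a sketch only: the device name is a placeholder, the command destroys all data on the target, and `agcount=4` / `-n size=8192` are simply the values discussed above.

```shell
# Illustrative only -- destroys data on the target device.
# Default-style AG count plus an 8k directory block size.
mkfs.xfs -f -d agcount=4 -n size=8192 /dev/sdX2
```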
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-06-04 0:58 ` Shrinand Javadekar
@ 2015-06-04 1:55 ` Dave Chinner
0 siblings, 0 replies; 21+ messages in thread
From: Dave Chinner @ 2015-06-04 1:55 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Wed, Jun 03, 2015 at 05:58:07PM -0700, Shrinand Javadekar wrote:
> Thanks Dave. Please see my responses inline.
>
> On Wed, Jun 3, 2015 at 5:35 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Wed, Jun 03, 2015 at 04:18:20PM -0700, Shrinand Javadekar wrote:
> >> Here you go!
> >
> > Thanks!
> >
> >> /dev/mapper/35000c50062e6a12b-part2 /srv/node/r1 xfs
> >> rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota
> >> 0 0
> > .....
> >> meta-data=/dev/mapper/35000c50062e6a7eb-part2 isize=256 agcount=64, agsize=11446344 blks
> >> = sectsz=512 attr=2
> >> data = bsize=4096 blocks=732566016, imaxpct=5
> >> = sunit=0 swidth=0 blks
> >> naming =version 2 bsize=4096 ascii-ci=0
> >> log =internal bsize=4096 blocks=357698, version=2
> >> = sectsz=512 sunit=0 blks, lazy-count=1
> >> realtime =none extsz=4096 blocks=0, rtextents=0
> >
> > Ok, so agcount=64 is unusual, especially for a single disk
> > filesystem. What was the reason for doing this?
>
> I read a few articles that recommend increasing the number of AGs,
> especially on large disks. I can use the default number of AGs
> (4?) and try again.
<sigh>
The Google Fallacy strikes again.
http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
> >> Openstack Swift. This is what it's doing:
> >>
> >> 1. A path like /srv/node/r1/objects/1024/eef/tmp already exists.
> >> /srv/node/r1 is the mount point.
> >> 2. Creates a tmp file, say tmpfoo, in the path above. Path:
> >> /srv/node/r1/objects/1024/eef/tmp/tmpfoo.
> >> 3. Issues a 256KB write into this file.
> >> 4. Issues an fsync on the file.
> >> 5. Closes this file.
> >> 6. Creates another directory named "deadbeef" inside "eef" if it
> >> doesn't exist. Path /srv/node/r1/objects/1024/eef/deadbeef.
> >> 7. Moves file tmpfoo into the deadbeef directory using rename().
> >> /srv/node/r1/objects/1024/eef/tmp/tmpfoo -->
> >> /srv/node/r1/objects/1024/eef/deadbeef/foo.data
> >> 8. Does a readdir on /srv/node/r1/objects/1024/eef/deadbeef/
> >> 9. Iterates over all files obtained in #8 above. Usually #8 gives only one file.
> >
> > Oh. We've already discussed this problem in a previous thread:
> >
> > http://oss.sgi.com/archives/xfs/2015-04/msg00256.html
>
> Yes, we touched upon this earlier and found that all files were
> getting created in the same AG. We fixed that by and my current
> testing includes that fix.
Right, I noticed that looking at the inode allocation distribution.
It's pretty good (output is count, agno):
$ awk '/xfs_ialloc_read_agi:/ {print $8}' trace_report.txt | sort -n |uniq -c
1362 0
1351 1
1359 2
1354 3
1374 4
1345 5
1380 6
1371 7
1356 8
1354 9
1373 10
1364 11
1357 12
1363 13
1368 14
1386 15
1355 16
1384 17
1352 18
1377 19
1358 20
1371 21
1356 22
1367 23
1342 24
1383 25
1352 26
1354 27
1347 28
1382 29
1348 30
1347 31
1351 32
1346 33
1350 34
1365 35
1346 36
1361 37
1358 38
1337 39
1356 40
1371 41
1347 42
1335 43
1378 44
1370 45
1372 46
1334 47
1363 48
1355 49
1365 50
1353 51
1370 52
1346 53
1369 54
1356 55
1381 56
1349 57
1365 58
1356 59
1351 60
1345 61
1379 62
1351 63
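The per-AG counts above are nearly uniform; a small pipeline like the following computes the spread (illustrative, fed with four sample rows from the table, including its actual minimum and maximum):

```shell
# Compute min/max/spread of per-AG inode allocation counts
# ("count agno" pairs, as produced by the awk one-liner above).
printf '1362 0\n1351 1\n1386 15\n1334 47\n' |
awk '{ if (min == "" || $1 < min) min = $1; if ($1 > max) max = $1 }
     END { printf "min=%d max=%d spread=%.1f%%\n",
                  min, max, 100 * (max - min) / min }'
```

For the table above that works out to roughly a 4% spread between the busiest and quietest AG, i.e. allocation is well balanced.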
> > Specifically, that discussion touched on problems your workload
> > induces in metadata layout and locality:
> >
> > http://oss.sgi.com/archives/xfs/2015-04/msg00300.html
> >
> > And you are using agcount=64 on these machines, so that's going to
> > cause you all sorts of locality problems, which will translate into
> > seek bound IO performance....
> >
> >> - IOStat and vmstat output
> >> (attached)
> >
> > I am assuming these are 1 second samples, based on your 18s fast/12s
> > slow description earlier.
>
> Yes, these are 1 seconds samples.
>
> >
> > The vmstat shows fast writeback at 150-200MB/s, with no idle time,
> > anything up to 200 processes in running or blocked state and 20-30%
> > iowait, followed by idle CPU time with maybe 10 running/blocked
> > processes, writeback at 15-20MB/s with 70% idle time and 30% iowait.
> >
> > IOWs, the workload is cyclic - lots of incoming data with lots of
> > throughput, followed by zero incoming data processing on only small
> > amounts of writeback.
>
> My understanding is that the workload is either
>
> a) waiting for issued IOs to complete.
> b) not able to issue more IOs because XFS is busy flushing the journal entries.
>
> Is this not true?
From the data presented, it's just an *observation* that the incoming
processing has stopped; it doesn't speak to the cause of why
incoming data is not being processed. You're jumping to conclusions
again before there is supporting evidence to make such a statement.
> > The vmstat information implies that front end application processing
> > is stopping for some period of time, but it does not indicate why it
> > is doing so. When the disks are doing 4k writeback, can you grab
> > the output of 'echo w > /proc/sysrq-trigger' from dmesg and post the
> > output? That will tell us if the front end processing is blocked on
> > the filesystem at all...
>
> Aah.. ok. Will do and get back to you soon.
See? more information is required. ;)
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-06-04 1:25 ` Dave Chinner
@ 2015-06-04 2:03 ` Dave Chinner
2015-06-04 6:23 ` Dave Chinner
0 siblings, 1 reply; 21+ messages in thread
From: Dave Chinner @ 2015-06-04 2:03 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Thu, Jun 04, 2015 at 11:25:30AM +1000, Dave Chinner wrote:
> On Thu, Jun 04, 2015 at 10:35:47AM +1000, Dave Chinner wrote:
> > > - Trace cmd report
> > > Too big to attach. Here's a link:
> > > https://www.dropbox.com/s/3xxe2chsv4fsrv8/trace_report.txt.zip?dl=0
> >
> > Downloading now.
>
> AIL pushing is occurring every 30s, yes. Across all filesystems, there
> are roughly 23,000-25,000 metadata objects being pushed in each 30s flush.
...
> Indeed, this looks to me like the smoking gun. To allocate a block,
> you have to lock the AGF buffer that the allocation is going to take
> place in. The problem is that when the xfsaild pushes AGF buffers to
> the writeback queue, they sit there locked until the IO completes.
>
> In the traces, the xfsailds all run at 509385s, and immediately I
> see a ~10s gap in the trace where almost no xfs_read_agf() traces
> occur. It's not until 509396s that the traces really start to appear
> at normal speed again.
>
> Again, reducing the number of AGs will help with this problem,
> simply because with fewer AGs the headers are more likely to be
> locked or pinned when the xfsaild sweep runs: they are active,
> rather than sitting idle waiting for the next operation in that AG
> to require allocation....
>
> Remember, a single AG can sustain thousands of allocations every
> second - if you are only creating a few thousand files every second,
> you don't need tens of AGs to sustain that - the default of 4 AGs
> will do that just fine...
And in looking deeper into the issue, I think there are some code
changes we need to make to minimise it.
Allocation requires a locked AGF buffer, but AGF buffers also need
to be locked for IO. The underlying issue looks like we hold the
lock for too long during IO submission. i.e. a list gets passed to
the delayed write submission code, which walks the list locking the
buffers, then sorts the list and issues the IO. If the writeback
queue is long enough, submission gets blocked on the request queue
and we wait with the buffers locked, and hence don't allow
modifications to take place on those buffers while we are waiting
for submission to complete.
Fixing this requires a tweak to the algorithm in
__xfs_buf_delwri_submit() so that we don't lock an entire list of
thousands of IOs before starting submission. In the meantime,
reducing the number of AGs will reduce the impact of this, because
the delayed write submission code skips buffers that are already
locked or pinned in memory, and hence an AG under modification at
the time submission occurs will be skipped by the delwri code.
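The effect of that tweak can be modelled in userspace. This is a
hedged sketch, not the kernel code: peak_locked is a made-up helper
standing in for the trylock/submit loop, and "submit" here just means
the batch's locks stop being held; the real code uses xfs_buf locking
and xfs_buf_submit().

```python
# Model of the delwri submission change: instead of locking every
# buffer on the push list before issuing any IO, lock and submit in
# batches so that at most `batch` buffers are ever locked at once.

def peak_locked(num_buffers, batch=None):
    """Return the peak number of simultaneously locked buffers.

    batch=None models the old algorithm (lock the whole sorted list,
    then submit it); an integer models submitting in groups of that
    size, as the patch below does with groups of 50.
    """
    batch = batch or num_buffers
    peak = locked = 0
    for i in range(1, num_buffers + 1):
        locked += 1                      # trylock this buffer
        peak = max(peak, locked)
        if i % batch == 0:
            locked = 0                   # submit the batch; locks released
    return peak

# A push list the size seen in the traces (~20,000 objects):
print(peak_locked(20000))        # old behaviour: all 20,000 held at once
print(peak_locked(20000, 50))    # batched behaviour: never more than 50
```

The point of the model is only that the lock-hold window shrinks from
the whole list to one batch; it says nothing about the IO scheduling
effects of plugging around each batch.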
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: XFS Syncd
2015-06-04 2:03 ` Dave Chinner
@ 2015-06-04 6:23 ` Dave Chinner
2015-06-04 7:26 ` Shrinand Javadekar
2015-06-05 0:59 ` Shrinand Javadekar
0 siblings, 2 replies; 21+ messages in thread
From: Dave Chinner @ 2015-06-04 6:23 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Thu, Jun 04, 2015 at 12:03:39PM +1000, Dave Chinner wrote:
> Fixing this requires a tweak to the algorithm in
> __xfs_buf_delwri_submit() so that we don't lock an entire list of
> thousands of IOs before starting submission. In the mean time,
> reducing the number of AGs will reduce the impact of this because
> the delayed write submission code will skip buffers that are already
> locked or pinned in memory, and hence an AG under modification at
> the time submission occurs will be skipped by the delwri code.
You might like to try the patch below on a test machine to see if
it helps with the problem.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
xfs: reduce lock hold times in buffer writeback
From: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
fs/xfs/xfs_buf.c | 80 ++++++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 61 insertions(+), 19 deletions(-)
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index bbe4e9e..8d2cc36 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1768,15 +1768,63 @@ xfs_buf_cmp(
return 0;
}
+static void
+xfs_buf_delwri_submit_buffers(
+ struct list_head *buffer_list,
+ struct list_head *io_list,
+ bool wait)
+{
+ struct xfs_buf *bp, *n;
+ struct blk_plug plug;
+
+ blk_start_plug(&plug);
+ list_for_each_entry_safe(bp, n, buffer_list, b_list) {
+ bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC |
+ XBF_WRITE_FAIL);
+ bp->b_flags |= XBF_WRITE | XBF_ASYNC;
+
+ /*
+ * We do all IO submission async. This means if we need
+ * to wait for IO completion we need to take an extra
+ * reference so the buffer is still valid on the other
+ * side. We need to move the buffer onto the io_list
+ * at this point so the caller can still access it.
+ */
+ if (wait) {
+ xfs_buf_hold(bp);
+ list_move_tail(&bp->b_list, io_list);
+ } else
+ list_del_init(&bp->b_list);
+
+ xfs_buf_submit(bp);
+ }
+ blk_finish_plug(&plug);
+}
+
+/*
+ * submit buffers for write.
+ *
+ * When we have a large buffer list, we do not want to hold all the buffers
+ * locked while we block on the request queue waiting for IO dispatch. To avoid
+ * this problem, we lock and submit buffers in groups of 50, thereby minimising
+ * the lock hold times for lists which may contain thousands of objects.
+ *
+ * To do this, we sort the buffer list before we walk the list to lock and
+ * submit buffers, and we plug and unplug around each group of buffers we
+ * submit.
+ */
static int
__xfs_buf_delwri_submit(
struct list_head *buffer_list,
struct list_head *io_list,
bool wait)
{
- struct blk_plug plug;
struct xfs_buf *bp, *n;
+ LIST_HEAD (submit_list);
int pinned = 0;
+ int count = 0;
+
+ list_sort(NULL, buffer_list, xfs_buf_cmp);
list_for_each_entry_safe(bp, n, buffer_list, b_list) {
if (!wait) {
@@ -1802,30 +1850,24 @@ __xfs_buf_delwri_submit(
continue;
}
- list_move_tail(&bp->b_list, io_list);
+ list_move_tail(&bp->b_list, &submit_list);
trace_xfs_buf_delwri_split(bp, _RET_IP_);
- }
-
- list_sort(NULL, io_list, xfs_buf_cmp);
-
- blk_start_plug(&plug);
- list_for_each_entry_safe(bp, n, io_list, b_list) {
- bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL);
- bp->b_flags |= XBF_WRITE | XBF_ASYNC;
/*
- * we do all Io submission async. This means if we need to wait
- * for IO completion we need to take an extra reference so the
- * buffer is still valid on the other side.
+ * We do small batches of IO submission to minimise lock hold
+ * time and unnecessary writeback of buffers that are hot and
+ * would otherwise be relogged and hence not require immediate
+ * writeback.
*/
- if (wait)
- xfs_buf_hold(bp);
- else
- list_del_init(&bp->b_list);
+ if (count++ < 50)
+ continue;
- xfs_buf_submit(bp);
+ xfs_buf_delwri_submit_buffers(&submit_list, io_list, wait);
+ count = 0;
}
- blk_finish_plug(&plug);
+
+ if (!list_empty(&submit_list))
+ xfs_buf_delwri_submit_buffers(&submit_list, io_list, wait);
return pinned;
}
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: XFS Syncd
2015-06-04 6:23 ` Dave Chinner
@ 2015-06-04 7:26 ` Shrinand Javadekar
2015-06-04 22:08 ` Dave Chinner
2015-06-05 0:59 ` Shrinand Javadekar
1 sibling, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-06-04 7:26 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1: Type: text/plain, Size: 6178 bytes --]
I made two changes based on the suggestions above:
1. Reverted the agcount back to the default: 4.
2. Bumped the directory block size to 8k (-n size=8k)
This definitely has made things better. My throughput for one run of
my 40GB (5GB on each disk) test has gone up from ~70MB/s to 88MB/s.
The pauses started off very small: ~1 sec. Right now, with 20GB of
data on each disk, the pauses are ~4 seconds.
I ran echo w > /proc/sysrq-trigger as soon as the system went into
one of these pauses; the dmesg output after that is attached. I'm
going to run a test overnight to see how it performs, especially how
big the pauses get as more and more data is written into the
system.
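For anyone sifting through a capture like this: sysrq-w reports tasks
in uninterruptible (D) sleep, which is what would show blocking on the
filesystem. A small sketch for pulling the blocked task names out of
dmesg follows; blocked_tasks is a hypothetical helper, the sample
lines are illustrative rather than taken from the real attachment, and
the exact line format varies by kernel version.

```python
import re

def blocked_tasks(dmesg_text):
    """Extract (name, pid) pairs for tasks reported in D state by
    'echo w > /proc/sysrq-trigger', assuming the old-style format:
        [timestamp] taskname  D <addr>  0  <pid>  <ppid> ...
    Task names are truncated to 15 chars (TASK_COMM_LEN)."""
    pat = re.compile(r'^\[[^\]]*\]\s+(\S+)\s+D\s+\S+\s+\d+\s+(\d+)')
    found = []
    for line in dmesg_text.splitlines():
        m = pat.match(line)
        if m:
            found.append((m.group(1), int(m.group(2))))
    return found

# Illustrative sample, not the real attachment:
sample = """\
[541339.100000] SysRq : Show Blocked State
[541339.100001]   task                        PC stack   pid father
[541339.100002] swift-object-se D ffff88103fa93180     0  7262   6508 0x00000000
[541339.100003] xfsaild/dm-23   D ffff88103fa93180     0  6330      2 0x00000000
"""
print(blocked_tasks(sample))
```

The stack traces printed under each D-state task are the part that
actually identifies where in XFS the task is waiting; this only lists
who is blocked.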
Also, unfortunately, I don't have a kernel dev setup ready to try
out the patch immediately. I will try to set up the environment to
try it out.
-Shri
On Wed, Jun 3, 2015 at 11:23 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Thu, Jun 04, 2015 at 12:03:39PM +1000, Dave Chinner wrote:
>> Fixing this requires a tweak to the algorithm in
>> __xfs_buf_delwri_submit() so that we don't lock an entire list of
>> thousands of IOs before starting submission. In the mean time,
>> reducing the number of AGs will reduce the impact of this because
>> the delayed write submission code will skip buffers that are already
>> locked or pinned in memory, and hence an AG under modification at
>> the time submission occurs will be skipped by the delwri code.
>
> You might like to try the patch below on a test machine to see if
> it helps with the problem.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> [patch "xfs: reduce lock hold times in buffer writeback" quoted in
> full above -- snipped]
[-- Attachment #2: dmesg_sysrq_trigger --]
[-- Type: application/octet-stream, Size: 252361 bytes --]
[The attachment is a truncated kernel scheduler debug dump: per-CPU
cfs_rq and rt_rq statistics for cpu#8 (including per-autogroup load,
vruntime and wait statistics) and the start of cpu#9. The cpu#8
runnable-tasks table lists the swift-object-server and
swift-proxy-server workers, nginx, java and mysqld processes, the
xfs-data/xfs-conv/xfs-cil workqueue threads for several dm devices,
and assorted kworkers. No stack traces of blocked tasks appear in the
captured portion.]
[541339.880676] .ttwu_local : 22344535
[541339.880678]
[541339.880678] cfs_rq[9]:/autogroup-11436
[541339.880681] .exec_clock : 244.318432
[541339.880683] .MIN_vruntime : 0.000001
[541339.880684] .min_vruntime : 2518.931177
[541339.880686] .max_vruntime : 0.000001
[541339.880687] .spread : 0.000000
[541339.880689] .spread0 : -163200062.879221
[541339.880691] .nr_spread_over : 178
[541339.880692] .nr_running : 0
[541339.880694] .load : 0
[541339.880695] .runnable_load_avg : 0
[541339.880697] .blocked_load_avg : 0
[541339.880699] .tg_load_contrib : 0
[541339.880700] .tg_runnable_contrib : 10
[541339.880702] .tg_load_avg : 3894
[541339.880703] .tg->runnable_avg : 271
[541339.880705] .tg->cfs_bandwidth.timer_active: 0
[541339.880706] .throttled : 0
[541339.880708] .throttle_count : 0
[541339.880710] .se->exec_start : 541339867.755814
[541339.880711] .se->vruntime : 158720953.405190
[541339.880713] .se->sum_exec_runtime : 244.333768
[541339.880715] .se->statistics.wait_start : 0.000000
[541339.880716] .se->statistics.sleep_start : 0.000000
[541339.880718] .se->statistics.block_start : 0.000000
[541339.880719] .se->statistics.sleep_max : 0.000000
[541339.880721] .se->statistics.block_max : 0.000000
[541339.880722] .se->statistics.exec_max : 1.379101
[541339.880724] .se->statistics.slice_max : 0.854863
[541339.880725] .se->statistics.wait_max : 6.978075
[541339.880727] .se->statistics.wait_sum : 85.030984
[541339.880735] .se->statistics.wait_count : 875
[541339.880742] .se->load.weight : 2
[541339.880749] .se->avg.runnable_avg_sum : 478
[541339.880756] .se->avg.runnable_avg_period : 48240
[541339.880763] .se->avg.load_avg_contrib : 0
[541339.880770] .se->avg.decay_count : 516261929
[541339.880777]
[541339.880777] cfs_rq[9]:/autogroup-11424
[541339.880783] .exec_clock : 24335.068530
[541339.880791] .MIN_vruntime : 0.000001
[541339.880797] .min_vruntime : 11262.164501
[541339.880803] .max_vruntime : 0.000001
[541339.880810] .spread : 0.000000
[541339.880816] .spread0 : -163191319.645897
[541339.880822] .nr_spread_over : 9
[541339.880828] .nr_running : 0
[541339.880834] .load : 0
[541339.880840] .runnable_load_avg : 0
[541339.880847] .blocked_load_avg : 0
[541339.880854] .tg_load_contrib : 0
[541339.880859] .tg_runnable_contrib : 0
[541339.880860] .tg_load_avg : 180
[541339.880862] .tg->runnable_avg : 157
[541339.880863] .tg->cfs_bandwidth.timer_active: 0
[541339.880865] .throttled : 0
[541339.880866] .throttle_count : 0
[541339.880868] .se->exec_start : 541339753.998824
[541339.880869] .se->vruntime : 158720841.862822
[541339.880871] .se->sum_exec_runtime : 24337.367437
[541339.880872] .se->statistics.wait_start : 0.000000
[541339.880874] .se->statistics.sleep_start : 0.000000
[541339.880875] .se->statistics.block_start : 0.000000
[541339.880877] .se->statistics.sleep_max : 0.000000
[541339.880878] .se->statistics.block_max : 0.000000
[541339.880880] .se->statistics.exec_max : 3.998111
[541339.880882] .se->statistics.slice_max : 11.251892
[541339.880883] .se->statistics.wait_max : 19.075062
[541339.880885] .se->statistics.wait_sum : 6649.813989
[541339.880886] .se->statistics.wait_count : 66425
[541339.880888] .se->load.weight : 2
[541339.880889] .se->avg.runnable_avg_sum : 44
[541339.880890] .se->avg.runnable_avg_period : 47311
[541339.880892] .se->avg.load_avg_contrib : 0
[541339.880893] .se->avg.decay_count : 516261820
[541339.880895]
[541339.880895] cfs_rq[9]:/autogroup-11432
[541339.880897] .exec_clock : 170193.061295
[541339.880898] .MIN_vruntime : 0.000001
[541339.880900] .min_vruntime : 148231.312588
[541339.880901] .max_vruntime : 0.000001
[541339.880903] .spread : 0.000000
[541339.880904] .spread0 : -163054350.497810
[541339.880905] .nr_spread_over : 858
[541339.880907] .nr_running : 0
[541339.880908] .load : 0
[541339.880909] .runnable_load_avg : 0
[541339.880911] .blocked_load_avg : 46
[541339.880912] .tg_load_contrib : 46
[541339.880914] .tg_runnable_contrib : 44
[541339.880915] .tg_load_avg : 1989
[541339.880916] .tg->runnable_avg : 1812
[541339.880918] .tg->cfs_bandwidth.timer_active: 0
[541339.880919] .throttled : 0
[541339.880920] .throttle_count : 0
[541339.880922] .se->exec_start : 541339867.614363
[541339.880923] .se->vruntime : 158720953.068510
[541339.880925] .se->sum_exec_runtime : 170207.438409
[541339.880926] .se->statistics.wait_start : 0.000000
[541339.880928] .se->statistics.sleep_start : 0.000000
[541339.880929] .se->statistics.block_start : 0.000000
[541339.880930] .se->statistics.sleep_max : 0.000000
[541339.880931] .se->statistics.block_max : 0.000000
[541339.880933] .se->statistics.exec_max : 4.002397
[541339.880934] .se->statistics.slice_max : 67.133145
[541339.880936] .se->statistics.wait_max : 37.041305
[541339.880937] .se->statistics.wait_sum : 81494.912759
[541339.880939] .se->statistics.wait_count : 201153
[541339.880940] .se->load.weight : 2
[541339.880942] .se->avg.runnable_avg_sum : 2043
[541339.880943] .se->avg.runnable_avg_period : 47408
[541339.880944] .se->avg.load_avg_contrib : 23
[541339.880946] .se->avg.decay_count : 516261929
[541339.880948]
[541339.880948] cfs_rq[9]:/autogroup-11415
[541339.880950] .exec_clock : 44375.709503
[541339.880951] .MIN_vruntime : 0.000001
[541339.880953] .min_vruntime : 29577.838910
[541339.880954] .max_vruntime : 0.000001
[541339.880955] .spread : 0.000000
[541339.880957] .spread0 : -163173003.971488
[541339.880958] .nr_spread_over : 0
[541339.880960] .nr_running : 0
[541339.880961] .load : 0
[541339.880962] .runnable_load_avg : 0
[541339.880964] .blocked_load_avg : 24
[541339.880965] .tg_load_contrib : 24
[541339.880966] .tg_runnable_contrib : 27
[541339.880968] .tg_load_avg : 1476
[541339.880969] .tg->runnable_avg : 1054
[541339.880971] .tg->cfs_bandwidth.timer_active: 0
[541339.880972] .throttled : 0
[541339.880973] .throttle_count : 0
[541339.880975] .se->exec_start : 541339877.443124
[541339.880976] .se->vruntime : 158720971.604268
[541339.880980] .se->sum_exec_runtime : 44381.711580
[541339.880986] .se->statistics.wait_start : 0.000000
[541339.880993] .se->statistics.sleep_start : 0.000000
[541339.880999] .se->statistics.block_start : 0.000000
[541339.881005] .se->statistics.sleep_max : 0.000000
[541339.881010] .se->statistics.block_max : 0.000000
[541339.881016] .se->statistics.exec_max : 3.996888
[541339.881023] .se->statistics.slice_max : 11.986083
[541339.881029] .se->statistics.wait_max : 22.284702
[541339.881037] .se->statistics.wait_sum : 51255.796535
[541339.881043] .se->statistics.wait_count : 164348
[541339.881050] .se->load.weight : 2
[541339.881057] .se->avg.runnable_avg_sum : 1282
[541339.881064] .se->avg.runnable_avg_period : 47180
[541339.881070] .se->avg.load_avg_contrib : 17
[541339.881075] .se->avg.decay_count : 516261938
[541339.881082]
[541339.881082] cfs_rq[9]:/autogroup-11408
[541339.881088] .exec_clock : 236516.499829
[541339.881094] .MIN_vruntime : 0.000001
[541339.881101] .min_vruntime : 130446.765605
[541339.881107] .max_vruntime : 0.000001
[541339.881115] .spread : 0.000000
[541339.881121] .spread0 : -163072135.044793
[541339.881125] .nr_spread_over : 0
[541339.881126] .nr_running : 1
[541339.881127] .load : 1024
[541339.881128] .runnable_load_avg : 327
[541339.881129] .blocked_load_avg : 0
[541339.881131] .tg_load_contrib : 327
[541339.881132] .tg_runnable_contrib : 364
[541339.881133] .tg_load_avg : 5556
[541339.881135] .tg->runnable_avg : 4251
[541339.881136] .tg->cfs_bandwidth.timer_active: 0
[541339.881138] .throttled : 0
[541339.881139] .throttle_count : 0
[541339.881140] .se->exec_start : 541339879.934412
[541339.881142] .se->vruntime : 158720997.598372
[541339.881143] .se->sum_exec_runtime : 236533.909943
[541339.881145] .se->statistics.wait_start : 0.000000
[541339.881146] .se->statistics.sleep_start : 0.000000
[541339.881147] .se->statistics.block_start : 0.000000
[541339.881149] .se->statistics.sleep_max : 0.000000
[541339.881150] .se->statistics.block_max : 0.000000
[541339.881152] .se->statistics.exec_max : 3.999334
[541339.881153] .se->statistics.slice_max : 25.229480
[541339.881154] .se->statistics.wait_max : 55.944393
[541339.881156] .se->statistics.wait_sum : 154104.104003
[541339.881157] .se->statistics.wait_count : 203293
[541339.881159] .se->load.weight : 159
[541339.881160] .se->avg.runnable_avg_sum : 16969
[541339.881161] .se->avg.runnable_avg_period : 47666
[541339.881163] .se->avg.load_avg_contrib : 56
[541339.881164] .se->avg.decay_count : 0
[541339.881166]
[541339.881166] cfs_rq[9]:/autogroup-11406
[541339.881168] .exec_clock : 278464.410638
[541339.881169] .MIN_vruntime : 167398.391524
[541339.881171] .min_vruntime : 167410.391524
[541339.881172] .max_vruntime : 167398.391524
[541339.881174] .spread : 0.000000
[541339.881175] .spread0 : -163035171.418874
[541339.881176] .nr_spread_over : 5
[541339.881178] .nr_running : 3
[541339.881179] .load : 3072
[541339.881180] .runnable_load_avg : 8
[541339.881182] .blocked_load_avg : 767
[541339.881183] .tg_load_contrib : 708
[541339.881185] .tg_runnable_contrib : 359
[541339.881186] .tg_load_avg : 10834
[541339.881188] .tg->runnable_avg : 5912
[541339.881189] .tg->cfs_bandwidth.timer_active: 0
[541339.881191] .throttled : 0
[541339.881192] .throttle_count : 0
[541339.881194] .se->exec_start : 541339879.294257
[541339.881195] .se->vruntime : 158720996.548833
[541339.881197] .se->sum_exec_runtime : 278495.403415
[541339.881198] .se->statistics.wait_start : 541339879.753339
[541339.881199] .se->statistics.sleep_start : 0.000000
[541339.881201] .se->statistics.block_start : 0.000000
[541339.881202] .se->statistics.sleep_max : 0.000000
[541339.881203] .se->statistics.block_max : 0.000000
[541339.881205] .se->statistics.exec_max : 3.998018
[541339.881206] .se->statistics.slice_max : 15.985298
[541339.881208] .se->statistics.wait_max : 58.295253
[541339.881209] .se->statistics.wait_sum : 276695.291351
[541339.881210] .se->statistics.wait_count : 973443
[541339.881212] .se->load.weight : 230
[541339.881213] .se->avg.runnable_avg_sum : 16302
[541339.881214] .se->avg.runnable_avg_period : 46771
[541339.881216] .se->avg.load_avg_contrib : 70
[541339.881217] .se->avg.decay_count : 516261940
[541339.881219]
[541339.881219] cfs_rq[9]:/
[541339.881221] .exec_clock : 34775102.891329
[541339.881223] .MIN_vruntime : 158720996.548833
[541339.881225] .min_vruntime : 158720996.548833
[541339.881226] .max_vruntime : 158720996.548833
[541339.881227] .spread : 0.000000
[541339.881229] .spread0 : -4481585.261565
[541339.881230] .nr_spread_over : 20132
[541339.881231] .nr_running : 2
[541339.881233] .load : 389
[541339.881234] .runnable_load_avg : 126
[541339.881236] .blocked_load_avg : 19
[541339.881237] .tg_load_contrib : 145
[541339.881239] .tg_runnable_contrib : 534
[541339.881240] .tg_load_avg : 8366
[541339.881241] .tg->runnable_avg : 10030
[541339.881243] .tg->cfs_bandwidth.timer_active: 0
[541339.881244] .throttled : 0
[541339.881246] .throttle_count : 0
[541339.881247] .avg->runnable_avg_sum : 24838
[541339.881249] .avg->runnable_avg_period : 47039
[541339.881250]
[541339.881250] rt_rq[9]:
[541339.881254] .rt_nr_running : 0
[541339.881261] .rt_throttled : 0
[541339.881267] .rt_time : 0.000000
[541339.881274] .rt_runtime : 950.000000
[541339.881280]
[541339.881280] runnable tasks:
[541339.881280] task PID tree-key switches prio exec-runtime sum-exec sum-sleep
[541339.881280] ----------------------------------------------------------------------------------------------------------
[541339.881293] watchdog/9 83 -11.971969 135471 0 -11.971969 2810.752815 0.002288 0 /
[541339.881323] migration/9 84 0.000000 179550 0 0.000000 4581.803074 0.001388 0 /
[541339.881352] ksoftirqd/9 85 158720827.863664 419223 120 158720827.863664 6213.501163 541255196.662054 0 /
[541339.881376] kworker/9:0H 87 2155.038587 7 100 2155.038587 0.020822 13342.051246 0 /
[541339.881386] scsi_eh_5 275 832.042637 9 120 832.042637 0.200559 1027.666493 0 /
[541339.881395] kdmflush 404 883.868581 2 100 883.868581 0.009330 0.007075 0 /
[541339.881400] bioset 405 895.887232 2 100 895.887232 0.019687 0.006776 0 /
[541339.881405] kdmflush 406 907.896487 2 100 907.896487 0.010065 0.191804 0 /
[541339.881409] bioset 409 919.932203 2 100 919.932203 0.036582 0.003059 0 /
[541339.881417] kworker/9:1H 956 158332902.124641 2945 100 158332902.124641 40.439117 541246571.362807 0 /
[541339.881424] rsyslogd 1485 1137.283835 17 120 1137.283835 2.839350 539172687.340549 0 /autogroup-148
[541339.881432] bioset 1654 2607.252753 2 100 2607.252753 0.008415 0.004330 0 /
[541339.881437] kdmflush 1764 2777.625553 2 100 2777.625553 0.011823 0.011765 0 /
[541339.881444] kdmflush 1920 2900.027675 2 100 2900.027675 0.008256 0.003360 0 /
[541339.881451] ruby-timer-thr 2827 4.242529 1 120 4.242529 0.020573 0.000000 0 /autogroup-208
[541339.881472] java 3501 29426.957599 656 120 29426.957599 2379.234819 417715970.590843 0 /autogroup-264
[541339.881478] java 4260 179.223260 2 120 179.223260 0.119863 0.013314 0 /autogroup-264
[541339.881485] java 4338 288.388135 9 120 288.388135 0.292571 611.033959 0 /autogroup-264
[541339.881492] java 4438 312.473289 3 120 312.473289 0.289951 0.147624 0 /autogroup-264
[541339.881517] java 4442 326.747707 9 120 326.747707 0.336718 0.825660 0 /autogroup-264
[541339.881523] java 4444 339.012294 8 120 339.012294 0.264594 0.112377 0 /autogroup-264
[541339.881531] carbon-cache.py 4483 103733.183150 101154 120 103733.183150 341364.335967 540935852.688506 0 /autogroup-345
[541339.881539] multipathd 4670 0.000000 108363 0 0.000000 3321.167430 0.000000 0 /autogroup-348
[541339.881545] java 5212 161.561760 3 120 161.561760 2.425190 57164.445723 0 /autogroup-358
[541339.881550] java 5214 35156.954234 15092 120 35156.954234 10177.043743 541183351.376489 0 /autogroup-358
[541339.881556] java 5216 35157.363943 13138 120 35157.363943 9054.515066 541184554.339635 0 /autogroup-358
[541339.881565] java 5225 35157.031333 18805 120 35157.031333 10349.406508 541183180.067593 0 /autogroup-358
[541339.881576] java 5375 35176.270423 542001 120 35176.270423 21607.892372 541243978.406681 0 /autogroup-358
[541339.881584] java 5381 68.199563 2 120 68.199563 0.202843 0.035786 0 /autogroup-358
[541339.881593] java 5733 35168.132132 481 120 35168.132132 136.993457 541192195.215439 0 /autogroup-358
[541339.881598] java 5737 35136.150499 1831 120 35136.150499 268.415394 540966726.523372 0 /autogroup-358
[541339.881614] mysqld 19185 7147.303508 2 120 7147.303508 0.307867 46.103679 0 /autogroup-8936
[541339.881621] mysqld 7158 314899.365925 27254 120 314899.365925 10116.306148 1575411.519476 0 /autogroup-8936
[541339.881627] mysqld 13451 315015.878257 139 120 315015.878257 31.893032 899072.525105 0 /autogroup-8936
[541339.881633] mysqld 16341 315015.724488 82 120 315015.724488 16.305564 449477.774634 0 /autogroup-8936
[541339.881640] mysqld 19791 315026.184843 12 120 315026.184843 1.795340 1.730349 0 /autogroup-8936
[541339.881648] jfsCommit 23565 114401637.879146 226255 120 114401637.879146 4623.728911 7731157.683895 0 /
[541339.881654] bioset 505 111624289.561312 2 100 111624289.561312 0.019846 0.009194 0 /
[541339.881662] bash 29146 102.749763 60 120 102.749763 104.700949 2611.789969 0 /autogroup-11329
[541339.881669] PassengerLoggin 5385 11.103172 2 120 11.103172 0.151755 0.015299 0 /autogroup-11395
[541339.881679] xfs-cil/dm-25 6341 156767505.401286 10 100 156767505.401286 0.172022 1255000.400366 0 /
[541339.881686] xfs-data/dm-12 6363 153023764.563164 2 100 153023764.563164 0.044177 0.016973 0 /
[541339.881691] xfs-conv/dm-12 6364 153023776.628027 2 100 153023776.628027 0.067921 0.050940 0 /
[541339.881695] xfs-cil/dm-12 6365 153023788.692215 2 100 153023788.692215 0.067243 0.037733 0 /
[541339.881701] xfs-cil/dm-28 6371 156767505.401874 5 100 156767505.401874 0.114649 1254316.507401 0 /
[541339.881707] swift-account-s 6461 91.672096 344 120 91.672096 76.290951 1735609.008668 0 /autogroup-11407
[541339.881714] swift-object-se 7263 167388.325158 49603 120 167388.325158 1707.574168 1640018.771642 0 /autogroup-11406
[541339.881720] swift-object-se 8056 167394.414493 50670 120 167394.414493 1752.544419 1578684.297040 0 /autogroup-11406
[541339.881727] swift-object-se 7285 167395.346687 49586 120 167395.346687 1733.942024 1639254.562322 0 /autogroup-11406
[541339.881733] swift-object-se 7291 167391.601882 50143 120 167391.601882 1697.939920 1639005.549257 0 /autogroup-11406
[541339.881743] swift-object-se 7994 167352.352985 48683 120 167352.352985 1751.433449 1577823.855412 0 /autogroup-11406
[541339.881748] swift-object-se 8001 167394.889508 52305 120 167394.889508 1770.038298 1578035.313343 0 /autogroup-11406
[541339.881754] swift-object-se 7271 167395.333165 51355 120 167395.333165 1768.270670 1639054.260015 0 /autogroup-11406
[541339.881760] swift-object-se 7775 167396.550909 49304 120 167396.550909 1730.359243 1578567.383665 0 /autogroup-11406
[541339.881765] Rswift-object-se 7944 167398.465886 47826 120 167398.465886 1682.041220 1579173.900932 0 /autogroup-11406
[541339.881771] swift-object-se 8039 167395.277466 48606 120 167395.277466 1729.049755 1578564.584163 0 /autogroup-11406
[541339.881777] swift-object-se 6618 167364.480124 47926 120 167364.480124 1687.230974 1702844.269805 0 /autogroup-11406
[541339.881782] swift-object-se 6620 167364.431253 47978 120 167364.431253 1678.046489 1703290.079527 0 /autogroup-11406
[541339.881788] swift-object-se 7637 167394.886169 50184 120 167394.886169 1740.348223 1579034.901150 0 /autogroup-11406
[541339.881793] swift-object-se 7638 167394.852285 50923 120 167394.852285 1753.546555 1578578.191234 0 /autogroup-11406
[541339.881798] swift-object-se 7639 167395.151533 50082 120 167395.151533 1757.439244 1578823.401224 0 /autogroup-11406
[541339.881804] swift-object-se 7973 167386.972105 45968 120 167386.972105 1632.025373 1579561.092894 0 /autogroup-11406
[541339.881811] swift-object-se 7641 167394.999178 50260 120 167394.999178 1730.189052 1578852.057140 0 /autogroup-11406
[541339.881818] swift-object-se 7868 167388.457812 48729 120 167388.457812 1689.213651 1578247.248208 0 /autogroup-11406
[541339.881824] swift-object-se 6610 167386.865493 50888 120 167386.865493 1761.716748 1702424.244501 0 /autogroup-11406
[541339.881830] swift-object-se 7192 167388.671275 49795 120 167388.671275 1723.626737 1643218.530037 0 /autogroup-11406
[541339.881837] swift-object-se 7901 167391.946603 48044 120 167391.946603 1706.871063 1578451.924698 0 /autogroup-11406
[541339.881845] swift-object-se 7859 167391.928166 50191 120 167391.928166 1741.073573 1578548.603423 0 /autogroup-11406
[541339.881853] swift-object-se 7772 167391.616217 49671 120 167391.616217 1714.748262 1578410.966835 0 /autogroup-11406
[541339.881860] swift-object-se 7794 167386.918211 50206 120 167386.918211 1744.790320 1578475.010319 0 /autogroup-11406
[541339.881865] swift-object-se 7811 167395.359953 48774 120 167395.359953 1737.745949 1578520.561437 0 /autogroup-11406
[541339.881874] swift-object-se 8007 167394.893237 50506 120 167394.893237 1723.106204 1578620.338333 0 /autogroup-11406
[541339.881881] swift-object-se 7738 167395.268108 47839 120 167395.268108 1685.269243 1578762.756121 0 /autogroup-11406
[541339.881887] swift-object-se 7833 167394.896942 48646 120 167394.896942 1688.516414 1578561.226735 0 /autogroup-11406
[541339.881892] swift-object-se 7834 167394.892154 48401 120 167394.892154 1697.017418 1578680.781238 0 /autogroup-11406
[541339.881898] swift-object-se 7885 167338.336516 47077 120 167338.336516 1675.251630 1579036.708073 0 /autogroup-11406
[541339.881906] swift-object-se 6552 167394.793759 52743 120 167394.793759 1763.466402 1732331.023265 0 /autogroup-11406
[541339.881912] swift-object-se 7714 167364.495776 47344 120 167364.495776 1652.653444 1579205.003163 0 /autogroup-11406
[541339.881925] swift-object-se 7300 167395.223557 50258 120 167395.223557 1725.723175 1639142.506215 0 /autogroup-11406
[541339.881930] swift-object-se 7302 167394.889750 50827 120 167394.889750 1736.550212 1638834.825742 0 /autogroup-11406
[541339.881936] swift-object-se 7910 167398.518321 50461 120 167398.518321 1741.936998 1578367.719127 0 /autogroup-11406
[541339.881943] swift-object-se 7852 167311.664048 48451 120 167311.664048 1672.178768 1579454.696330 0 /autogroup-11406
[541339.881954] swift-object-se 7785 167398.413599 49034 120 167398.413599 1724.904180 1578971.829736 0 /autogroup-11406
[541339.881959] swift-object-se 7931 167357.280077 46543 120 167357.280077 1669.046368 1578457.866556 0 /autogroup-11406
[541339.881965] swift-container 6487 0.735776 146 120 0.735776 53.295972 1735632.457301 0 /autogroup-11409
[541339.881970] swift-container 6489 -6.845354 144 120 -6.845354 48.232242 1735641.630685 0 /autogroup-11409
[541339.881976] swift-container 6492 -3.554074 162 120 -3.554074 63.407808 1736367.978180 0 /autogroup-11409
[541339.881981] swift-container 6493 0.233086 156 120 0.233086 53.079910 1735685.250605 0 /autogroup-11409
[541339.881986] swift-container 6499 0.227533 143 120 0.227533 44.265175 1735646.864919 0 /autogroup-11409
[541339.881991] swift-container 6500 -4.849897 145 120 -4.849897 43.185293 1735623.909085 0 /autogroup-11409
[541339.881996] swift-proxy-ser 6501 130448.190145 160911 120 130448.190145 157989.492575 1357770.557773 0 /autogroup-11408
[541339.882002] swift-proxy-ser 6516 129465.297120 142837 120 129465.297120 137848.935578 1394148.501594 0 /autogroup-11408
[541339.882009] swift-proxy-ser 6531 129465.258315 122898 120 129465.258315 104393.932030 1467119.599477 0 /autogroup-11408
[541339.882015] nginx 6712 29577.838910 240562 120 29577.838910 48426.975985 1563094.765083 0 /autogroup-11415
[541339.882023] java 6774 372.681588 39 120 372.681588 31.282330 822334.290241 0 /autogroup-11418
[541339.882029] java 6775 362.475969 44 120 362.475969 14.992226 822361.511318 0 /autogroup-11418
[541339.882035] java 6781 362.374683 40 120 362.374683 13.673422 822364.028192 0 /autogroup-11418
[541339.882045] java 7016 569.534983 270 120 569.534983 415.725424 1650284.137027 0 /autogroup-11418
[541339.882051] java 7028 99.368300 7 120 99.368300 0.448345 45306.558833 0 /autogroup-11418
[541339.882060] java 6901 11251.052967 3120 120 11251.052967 2330.078147 1660742.448917 0 /autogroup-11424
[541339.882066] java 6907 11244.838661 3537 120 11244.838661 2196.813845 1660914.027905 0 /autogroup-11424
[541339.882075] java 7070 11244.850306 1671 120 11244.850306 43.655119 1661723.149160 0 /autogroup-11424
[541339.882081] java 7074 11249.242644 8158 120 11249.242644 7979.945906 1652582.779796 0 /autogroup-11424
[541339.882088] java 7211 11147.101029 95 120 11147.101029 10.610862 1638644.338600 0 /autogroup-11424
[541339.882094] java 7245 11245.267785 7181 120 11245.267785 6118.213756 1647581.323623 0 /autogroup-11424
[541339.882099] java 7246 11246.038454 7515 120 11246.038454 6916.176741 1646897.197305 0 /autogroup-11424
[541339.882105] java 7251 11245.835086 6732 120 11245.835086 5509.465185 1648113.921919 0 /autogroup-11424
[541339.882113] java 7609 11260.694609 8008 120 11260.694609 5927.226891 1586506.305438 0 /autogroup-11424
[541339.882119] java 7624 11245.045999 7182 120 11245.045999 5802.721387 1586195.686719 0 /autogroup-11424
[541339.882125] magfsd 7105 23.011907 1 120 23.011907 0.014899 0.000000 0 /autogroup-11432
[541339.882131] magfsd 7110 71.353986 2 120 71.353986 0.070096 0.008098 0 /autogroup-11432
[541339.882136] magfsd 7115 107.635783 1 120 107.635783 0.046832 0.000000 0 /autogroup-11432
[541339.882141] magfsd 7118 131.844767 1 120 131.844767 0.084140 0.000000 0 /autogroup-11432
[541339.882147] magfsd 19689 148184.902054 6668 120 148184.902054 4775.552284 68335.598113 0 /autogroup-11432
[541339.882153] magfsd 19866 148231.312588 1642 120 148231.312588 364.092463 19998.687932 0 /autogroup-11432
[541339.882162] kworker/9:2 8560 158646630.782135 19459 120 158646630.782135 373.731756 1450214.169709 0 /
[541339.882168] kworker/9:0 11292 158720987.184920 5380 120 158720987.184920 106.872773 1141330.284117 0 /
[541339.882174] kworker/u32:3 13544 158720622.337221 1324 120 158720622.337221 133.142996 922197.188520 0 /
[541339.882180] kworker/9:1 13925 158716851.328779 26147 120 158716851.328779 477.060436 797698.567725 0 /
[541339.882186] kworker/9:3 16320 158280020.520824 8551 120 158280020.520824 166.851107 408844.204077 0 /
[541339.882194] kworker/9:4 19659 158720573.931933 2748 120 158720573.931933 50.478517 87961.553053 0 /
[541339.882202]
[541339.882205] cpu#10, 2199.987 MHz
[541339.882207] .nr_running : 2
[541339.882209] .load : 256
[541339.882211] .nr_switches : 272609268
[541339.882212] .nr_load_updates : 21835005
[541339.882214] .nr_uninterruptible : 307520
[541339.882216] .next_balance : 4430.360681
[541339.882217] .curr->pid : 6473
[541339.882219] .clock : 541339881.936420
[541339.882221] .cpu_load[0] : 44
[541339.882222] .cpu_load[1] : 77
[541339.882224] .cpu_load[2] : 96
[541339.882226] .cpu_load[3] : 88
[541339.882227] .cpu_load[4] : 89
[541339.882229] .yld_count : 14063530
[541339.882230] .sched_count : 286711144
[541339.882232] .sched_goidle : 102200694
[541339.882233] .avg_idle : 113101
[541339.882235] .max_idle_balance_cost : 500000
[541339.882237] .ttwu_count : 132350516
[541339.882238] .ttwu_local : 22089582
[541339.882240]
[541339.882240] cfs_rq[10]:/autogroup-11415
[541339.882243] .exec_clock : 45499.333251
[541339.882245] .MIN_vruntime : 0.000001
[541339.882246] .min_vruntime : 30271.042536
[541339.882248] .max_vruntime : 0.000001
[541339.882249] .spread : 0.000000
[541339.882251] .spread0 : -163172310.767862
[541339.882252] .nr_spread_over : 0
[541339.882254] .nr_running : 0
[541339.882255] .load : 0
[541339.882257] .runnable_load_avg : 0
[541339.882258] .blocked_load_avg : 0
[541339.882260] .tg_load_contrib : 0
[541339.882261] .tg_runnable_contrib : 1
[541339.882263] .tg_load_avg : 1461
[541339.882264] .tg->runnable_avg : 1058
[541339.882266] .tg->cfs_bandwidth.timer_active: 0
[541339.882268] .throttled : 0
[541339.882269] .throttle_count : 0
[541339.882271] .se->exec_start : 541339758.720989
[541339.882273] .se->vruntime : 158483722.920921
[541339.882275] .se->sum_exec_runtime : 45505.655893
[541339.882276] .se->statistics.wait_start : 0.000000
[541339.882278] .se->statistics.sleep_start : 0.000000
[541339.882279] .se->statistics.block_start : 0.000000
[541339.882281] .se->statistics.sleep_max : 0.000000
[541339.882282] .se->statistics.block_max : 0.000000
[541339.882284] .se->statistics.exec_max : 3.997566
[541339.882285] .se->statistics.slice_max : 10.133004
[541339.882287] .se->statistics.wait_max : 18.680735
[541339.882288] .se->statistics.wait_sum : 51556.843715
[541339.882290] .se->statistics.wait_count : 167574
[541339.882291] .se->load.weight : 2
[541339.882292] .se->avg.runnable_avg_sum : 74
[541339.882294] .se->avg.runnable_avg_period : 46935
[541339.882295] .se->avg.load_avg_contrib : 0
[541339.882296] .se->avg.decay_count : 516261825
[541339.882298]
[541339.882298] cfs_rq[10]:/autogroup-11408
[541339.882301] .exec_clock : 235727.766046
[541339.882302] .MIN_vruntime : 129048.007669
[541339.882304] .min_vruntime : 129048.719744
[541339.882305] .max_vruntime : 129048.007669
[541339.882307] .spread : 0.000000
[541339.882308] .spread0 : -163073533.090654
[541339.882310] .nr_spread_over : 0
[541339.882312] .nr_running : 1
[541339.882313] .load : 1024
[541339.882315] .runnable_load_avg : 114
[541339.882316] .blocked_load_avg : 270
[541339.882318] .tg_load_contrib : 384
[541339.882319] .tg_runnable_contrib : 377
[541339.882321] .tg_load_avg : 6217
[541339.882322] .tg->runnable_avg : 4338
[541339.882324] .tg->cfs_bandwidth.timer_active: 0
[541339.882325] .throttled : 0
[541339.882327] .throttle_count : 0
[541339.882329] .se->exec_start : 541339878.721285
[541339.882330] .se->vruntime : 158483941.300398
[541339.882332] .se->sum_exec_runtime : 235744.930501
[541339.882333] .se->statistics.wait_start : 541339881.625873
[541339.882335] .se->statistics.sleep_start : 0.000000
[541339.882336] .se->statistics.block_start : 0.000000
[541339.882338] .se->statistics.sleep_max : 0.000000
[541339.882339] .se->statistics.block_max : 0.000000
[541339.882341] .se->statistics.exec_max : 4.000728
[541339.882342] .se->statistics.slice_max : 21.003205
[541339.882344] .se->statistics.wait_max : 63.928540
[541339.882345] .se->statistics.wait_sum : 153298.186119
[541339.882347] .se->statistics.wait_count : 203076
[541339.882348] .se->load.weight : 165
[541339.882349] .se->avg.runnable_avg_sum : 17027
[541339.882351] .se->avg.runnable_avg_period : 46247
[541339.882352] .se->avg.load_avg_contrib : 69
[541339.882353] .se->avg.decay_count : 0
[541339.882355]
[541339.882355] cfs_rq[10]:/autogroup-11432
[541339.882357] .exec_clock : 172601.545027
[541339.882359] .MIN_vruntime : 0.000001
[541339.882361] .min_vruntime : 150774.897418
[541339.882362] .max_vruntime : 0.000001
[541339.882364] .spread : 0.000000
[541339.882365] .spread0 : -163051806.912980
[541339.882367] .nr_spread_over : 975
[541339.882368] .nr_running : 0
[541339.882370] .load : 0
[541339.882371] .runnable_load_avg : 0
[541339.882372] .blocked_load_avg : 115
[541339.882374] .tg_load_contrib : 115
[541339.882375] .tg_runnable_contrib : 134
[541339.882377] .tg_load_avg : 1961
[541339.882379] .tg->runnable_avg : 1801
[541339.882380] .tg->cfs_bandwidth.timer_active: 0
[541339.882382] .throttled : 0
[541339.882383] .throttle_count : 0
[541339.882385] .se->exec_start : 541339792.920209
[541339.882386] .se->vruntime : 158483851.911208
[541339.882388] .se->sum_exec_runtime : 172614.940461
[541339.882389] .se->statistics.wait_start : 0.000000
[541339.882390] .se->statistics.sleep_start : 0.000000
[541339.882392] .se->statistics.block_start : 0.000000
[541339.882393] .se->statistics.sleep_max : 0.000000
[541339.882394] .se->statistics.block_max : 0.000000
[541339.882396] .se->statistics.exec_max : 4.007383
[541339.882397] .se->statistics.slice_max : 84.666179
[541339.882399] .se->statistics.wait_max : 58.528211
[541339.882400] .se->statistics.wait_sum : 80913.308532
[541339.882402] .se->statistics.wait_count : 202776
[541339.882403] .se->load.weight : 2
[541339.882405] .se->avg.runnable_avg_sum : 6145
[541339.882406] .se->avg.runnable_avg_period : 46830
[541339.882407] .se->avg.load_avg_contrib : 39
[541339.882409] .se->avg.decay_count : 516261857
[541339.882411]
[541339.882411] cfs_rq[10]:/autogroup-11406
[541339.882413] .exec_clock : 272108.933984
[541339.882415] .MIN_vruntime : 0.000001
[541339.882417] .min_vruntime : 162173.668921
[541339.882418] .max_vruntime : 0.000001
[541339.882419] .spread : 0.000000
[541339.882421] .spread0 : -163040408.141477
[541339.882422] .nr_spread_over : 2
[541339.882424] .nr_running : 1
[541339.882425] .load : 1024
[541339.882427] .runnable_load_avg : 386
[541339.882428] .blocked_load_avg : 586
[541339.882429] .tg_load_contrib : 907
[541339.882431] .tg_runnable_contrib : 448
[541339.882432] .tg_load_avg : 11846
[541339.882433] .tg->runnable_avg : 6098
[541339.882434] .tg->cfs_bandwidth.timer_active: 0
[541339.882436] .throttled : 0
[541339.882437] .throttle_count : 0
[541339.882439] .se->exec_start : 541339881.936420
[541339.882440] .se->vruntime : 158483941.336152
[541339.882442] .se->sum_exec_runtime : 272139.625346
[541339.882443] .se->statistics.wait_start : 0.000000
[541339.882445] .se->statistics.sleep_start : 0.000000
[541339.882446] .se->statistics.block_start : 0.000000
[541339.882447] .se->statistics.sleep_max : 0.000000
[541339.882449] .se->statistics.block_max : 0.000000
[541339.882450] .se->statistics.exec_max : 3.997632
[541339.882451] .se->statistics.slice_max : 15.109987
[541339.882453] .se->statistics.wait_max : 67.963285
[541339.882454] .se->statistics.wait_sum : 278445.517337
[541339.882456] .se->statistics.wait_count : 955418
[541339.882457] .se->load.weight : 91
[541339.882458] .se->avg.runnable_avg_sum : 20531
[541339.882460] .se->avg.runnable_avg_period : 46744
[541339.882461] .se->avg.load_avg_contrib : 79
[541339.882462] .se->avg.decay_count : 516261940
[541339.882464]
[541339.882464] cfs_rq[10]:/
[541339.882466] .exec_clock : 34748008.251814
[541339.882468] .MIN_vruntime : 158483941.300398
[541339.882469] .min_vruntime : 158483941.300398
[541339.882471] .max_vruntime : 158483941.300398
[541339.882472] .spread : 0.000000
[541339.882474] .spread0 : -4718640.510000
[541339.882475] .nr_spread_over : 19978
[541339.882476] .nr_running : 2
[541339.882478] .load : 256
[541339.882479] .runnable_load_avg : 148
[541339.882481] .blocked_load_avg : 0
[541339.882482] .tg_load_contrib : 131
[541339.882484] .tg_runnable_contrib : 668
[541339.882485] .tg_load_avg : 10277
[541339.882487] .tg->runnable_avg : 10190
[541339.882488] .tg->cfs_bandwidth.timer_active: 0
[541339.882490] .throttled : 0
[541339.882491] .throttle_count : 0
[541339.882493] .avg->runnable_avg_sum : 31376
[541339.882494] .avg->runnable_avg_period : 47892
[541339.882496]
[541339.882496] rt_rq[10]:
[541339.882498] .rt_nr_running : 0
[541339.882499] .rt_throttled : 0
[541339.882501] .rt_time : 0.000000
[541339.882502] .rt_runtime : 950.000000
[541339.882504]
[541339.882504] runnable tasks:
[541339.882504] task PID tree-key switches prio exec-runtime sum-exec sum-sleep
[541339.882504] ----------------------------------------------------------------------------------------------------------
[541339.882511] watchdog/10 88 -11.971439 135471 0 -11.971439 2792.497639 0.002230 0 /
[541339.882517] migration/10 89 0.000000 174276 0 0.000000 4195.567793 0.001404 0 /
[541339.882522] ksoftirqd/10 90 158483550.444805 417713 120 158483550.444805 5645.334857 541256294.278143 0 /
[541339.882527] kworker/10:0H 92 21413.486806 8 100 21413.486806 0.058600 694000.132478 0 /
[541339.882538] bioset 402 585.475540 2 100 585.475540 0.026764 0.004519 0 /
[541339.882544] ext4-rsv-conver 431 641.406463 2 100 641.406463 0.032541 0.015151 0 /
[541339.882550] ext4-rsv-conver 594 737.774670 2 100 737.774670 0.006444 0.040004 0 /
[541339.882555] ext4-rsv-conver 621 761.864531 2 100 761.864531 0.041839 0.005117 0 /
[541339.882562] bioset 1468 2089.583895 2 100 2089.583895 0.007472 0.004359 0 /
[541339.882568] kdmflush 1525 2114.192038 2 100 2114.192038 0.013238 0.005632 0 /
[541339.882574] bioset 1526 2124.717565 2 100 2124.717565 0.010442 0.004359 0 /
[541339.882579] kdmflush 1540 2136.737253 2 100 2136.737253 0.010380 0.034346 0 /
[541339.882584] kdmflush 1544 2148.746622 2 100 2148.746622 0.010275 0.002741 0 /
[541339.882589] bioset 1545 2160.755000 2 100 2160.755000 0.009269 0.002646 0 /
[541339.882595] kdmflush 1610 2225.515926 2 100 2225.515926 0.007521 0.002468 0 /
[541339.882600] bioset 1766 2452.869581 2 100 2452.869581 0.024654 0.005978 0 /
[541339.882605] bioset 1843 2549.149835 2 100 2549.149835 0.010212 0.004592 0 /
[541339.882610] bioset 1921 2596.993299 2 100 2596.993299 0.008059 0.004256 0 /
[541339.882615] kdmflush 1965 2609.002132 2 100 2609.002132 0.009819 0.004709 0 /
[541339.882620] bioset 1966 2621.009721 2 100 2621.009721 0.008496 0.004042 0 /
[541339.882626] kauditd 2429 2775.774797 2 120 2775.774797 0.048007 0.021777 0 /
[541339.882633] ruby-timer-thr 2793 3087633.142589 23804 120 3087633.142589 1093.246132 541152037.476695 0 /autogroup-242
[541339.882641] java 3217 35235.869774 537 120 35235.869774 1498.967704 469922308.013238 0 /autogroup-264
[541339.882651] java 3258 41126.070713 518 120 41126.070713 208.131957 538548086.302412 0 /autogroup-264
[541339.882659] java 4440 120.475332 3 120 120.475332 0.246837 0.022177 0 /autogroup-264
[541339.882664] java 4443 132.759160 14 120 132.759160 0.283835 0.455011 0 /autogroup-264
[541339.882669] java 4445 144.970117 6 120 144.970117 0.210964 0.025183 0 /autogroup-264
[541339.882675] java 4448 157.491431 5 120 157.491431 0.521321 5.615955 0 /autogroup-264
[541339.882682] multipathd 4671 0.000000 1 0 0.000000 0.462808 0.000000 0 /autogroup-348
[541339.882688] java 5223 39392.905276 18826 120 39392.905276 9170.402702 541184183.845164 0 /autogroup-358
[541339.882698] kworker/10:1H 8260 158059560.809279 2981 100 158059560.809279 42.539022 540559773.786078 0 /
[541339.882712] mysqld 19797 336083.396028 15 120 336083.396028 1.975102 1.865305 0 /autogroup-8936
[541339.882718] mysqld 19799 336096.156794 16 120 336096.156794 2.114945 2.201090 0 /autogroup-8936
[541339.882723] mysqld 19800 336110.813652 13 120 336110.813652 2.656865 0.649202 0 /autogroup-8936
[541339.882728] mysqld 19801 336113.391035 14 120 336113.391035 2.674437 0.425770 0 /autogroup-8936
[541339.882733] logger 19142 7895.689063 15 120 7895.689063 2.304070 5493.205451 0 /autogroup-8936
[541339.882738] jfsCommit 23550 114592991.307643 227507 120 114592991.307643 5162.361988 7730597.747793 0 /
[541339.882750] apache2 5394 1707.912789 217 120 1707.912789 24.949702 2149753.087928 0 /autogroup-356
[541339.882756] apache2 5395 1715.477827 4 120 1715.477827 26.431805 0.825441 0 /autogroup-356
[541339.882762] xfs-cil/dm-16 6335 152879118.417634 2 100 152879118.417634 0.062108 0.023675 0 /
[541339.882768] xfs-cil/dm-14 6359 152879142.465213 2 100 152879142.465213 0.021621 0.055722 0 /
[541339.882776] swift-object-se 7267 162154.030581 50226 120 162154.030581 1730.388981 1639591.589328 0 /autogroup-11406
[541339.882787] swift-object-se 7985 162146.198120 50704 120 162146.198120 1786.022939 1578038.709918 0 /autogroup-11406
[541339.882793] swift-object-se 7989 162112.279328 49405 120 162112.279328 1719.428979 1577959.055759 0 /autogroup-11406
[541339.882800] swift-object-se 7765 162161.979704 49392 120 162161.979704 1726.742515 1578706.578074 0 /autogroup-11406
[541339.882806] swift-object-se 7951 162161.745845 45629 120 162161.745845 1630.577383 1579737.994996 0 /autogroup-11406
[541339.882811] swift-object-se 7953 162161.780812 47438 120 162161.780812 1659.823143 1579994.679385 0 /autogroup-11406
[541339.882817] Rswift-object-se 6473 162173.668921 895819 120 162173.668921 210980.209364 1104000.463516 0 /autogroup-11406
[541339.882822] swift-object-se 6615 162146.179775 48469 120 162146.179775 1680.893526 1703414.242708 0 /autogroup-11406
[541339.882828] swift-object-se 7535 162152.781471 48439 120 162152.781471 1687.558249 1587899.126180 0 /autogroup-11406
[541339.882836] swift-object-se 7713 162153.668300 51558 120 162153.668300 1753.863437 1578196.039826 0 /autogroup-11406
[541339.882842] swift-object-se 8069 162154.867855 48221 120 162154.867855 1722.341760 1578085.938860 0 /autogroup-11406
[541339.882848] swift-object-se 8070 162096.807490 48722 120 162096.807490 1716.344185 1577675.341081 0 /autogroup-11406
[541339.882856] swift-object-se 7855 162146.205423 50657 120 162146.205423 1744.550281 1578294.859381 0 /autogroup-11406
[541339.882861] swift-object-se 7866 162161.718951 48811 120 162161.718951 1753.191727 1578464.431143 0 /autogroup-11406
[541339.882867] swift-object-se 7870 162161.775359 49678 120 162161.775359 1761.303552 1578184.726958 0 /autogroup-11406
[541339.882873] swift-object-se 8044 162155.267652 48285 120 162155.267652 1730.506526 1578306.495323 0 /autogroup-11406
[541339.882879] swift-object-se 6676 162160.343334 49838 120 162160.343334 1721.005192 1673215.748778 0 /autogroup-11406
[541339.882885] swift-object-se 6679 162146.239202 50160 120 162146.239202 1718.063995 1672338.029225 0 /autogroup-11406
[541339.882891] swift-object-se 8028 162154.838805 48098 120 162154.838805 1731.498410 1578433.268963 0 /autogroup-11406
[541339.882898] swift-object-se 7650 162155.123187 48906 120 162155.123187 1739.252891 1579163.924152 0 /autogroup-11406
[541339.882903] swift-object-se 7654 162154.066355 50138 120 162154.066355 1735.681951 1578794.190428 0 /autogroup-11406
[541339.882910] swift-object-se 7921 162129.415029 48649 120 162129.415029 1739.554679 1578242.036999 0 /autogroup-11406
[541339.882916] swift-object-se 7540 162153.572638 50171 120 162153.572638 1720.281067 1587401.108444 0 /autogroup-11406
[541339.882922] swift-object-se 7740 162153.645283 51197 120 162153.645283 1730.820318 1578301.990174 0 /autogroup-11406
[541339.882928] swift-object-se 7840 162129.702656 51543 120 162129.702656 1793.930008 1577459.633393 0 /autogroup-11406
[541339.882934] swift-object-se 8011 162137.282741 47884 120 162137.282741 1710.787178 1577958.329557 0 /autogroup-11406
[541339.882942] swift-object-se 7802 162153.709094 48933 120 162153.709094 1702.821915 1578876.722546 0 /autogroup-11406
[541339.882947] swift-object-se 8071 162161.906744 46756 120 162161.906744 1675.479389 1578380.018264 0 /autogroup-11406
[541339.882955] swift-object-se 7952 162162.030684 48035 120 162162.030684 1708.735647 1578781.901956 0 /autogroup-11406
[541339.882962] swift-object-se 7882 162155.264216 50473 120 162155.264216 1751.000043 1577742.373241 0 /autogroup-11406
[541339.882971] swift-object-se 8022 162152.674712 48367 120 162152.674712 1688.573708 1578895.585052 0 /autogroup-11406
[541339.882976] swift-object-se 8024 162152.671742 47062 120 162152.671742 1663.377449 1578975.963595 0 /autogroup-11406
[541339.882981] swift-object-se 8025 162152.701370 47140 120 162152.701370 1684.064126 1578981.690561 0 /autogroup-11406
[541339.882990] swift-proxy-ser 6503 129048.007669 137075 120 129048.007669 110552.491251 1462056.315506 0 /autogroup-11408
[541339.882996] swift-proxy-ser 6507 129048.314930 125534 120 129048.314930 104675.056314 1479793.999373 0 /autogroup-11408
[541339.883001] swift-proxy-ser 6513 129048.719744 118989 120 129048.719744 109491.043666 1470515.829127 0 /autogroup-11408
[541339.883012] java 6970 654.321237 1789 120 654.321237 100.783726 1648224.141596 0 /autogroup-11418
[541339.883018] java 7000 -12.379691 13 120 -12.379691 0.737409 7.047020 0 /autogroup-11418
[541339.883026] java 7079 617.312020 151 120 617.312020 59.704142 1558501.996889 0 /autogroup-11418
[541339.883033] java 6905 11257.872096 3022 120 11257.872096 2392.072604 1660576.177903 0 /autogroup-11424
[541339.883041] java 7060 11254.623878 1699 120 11254.623878 43.073127 1661866.739006 0 /autogroup-11424
[541339.883051] java 7577 11263.195271 7763 120 11263.195271 6412.234458 1586321.988164 0 /autogroup-11424
[541339.883060] magfsd 19897 150774.897418 980 120 150774.897418 666.360383 12293.231506 0 /autogroup-11432
[541339.883068] kworker/10:1 8825 158202888.040154 27789 120 158202888.040154 518.838321 1326060.767683 0 /
[541339.883074] kworker/10:3 13421 158477101.862346 21692 120 158477101.862346 429.276438 952036.343252 0 /
[541339.883080] kworker/10:2 16407 158483861.891047 12056 120 158483861.891047 246.062499 472838.969198 0 /
[541339.883086] kworker/10:4 16968 158483849.986525 2740 120 158483849.986525 55.980626 326344.995840 0 /
[541339.883094]
[541339.883096] cpu#11, 2199.987 MHz
[541339.883098] .nr_running : 1
[541339.883099] .load : 253
[541339.883101] .nr_switches : 272335264
[541339.883102] .nr_load_updates : 21723398
[541339.883104] .nr_uninterruptible : 308119
[541339.883105] .next_balance : 4430.360664
[541339.883107] .curr->pid : 19940
[541339.883108] .clock : 541339882.764108
[541339.883110] .cpu_load[0] : 193
[541339.883111] .cpu_load[1] : 155
[541339.883112] .cpu_load[2] : 121
[541339.883114] .cpu_load[3] : 88
[541339.883115] .cpu_load[4] : 63
[541339.883117] .yld_count : 9964524
[541339.883118] .sched_count : 282348809
[541339.883120] .sched_goidle : 101829442
[541339.883121] .avg_idle : 71767
[541339.883122] .max_idle_balance_cost : 500000
[541339.883124] .ttwu_count : 132416074
[541339.883125] .ttwu_local : 22253291
[541339.883127]
[541339.883127] cfs_rq[11]:/autogroup-264
[541339.883129] .exec_clock : 36198.783427
[541339.883131] .MIN_vruntime : 0.000001
[541339.883133] .min_vruntime : 37410.155118
[541339.883134] .max_vruntime : 0.000001
[541339.883136] .spread : 0.000000
[541339.883137] .spread0 : -163165171.655280
[541339.883139] .nr_spread_over : 806
[541339.883140] .nr_running : 0
[541339.883142] .load : 0
[541339.883143] .runnable_load_avg : 0
[541339.883145] .blocked_load_avg : 0
[541339.883147] .tg_load_contrib : 0
[541339.883148] .tg_runnable_contrib : 0
[541339.883149] .tg_load_avg : 13
[541339.883151] .tg->runnable_avg : 0
[541339.883152] .tg->cfs_bandwidth.timer_active: 0
[541339.883154] .throttled : 0
[541339.883155] .throttle_count : 0
[541339.883157] .se->exec_start : 541339867.963058
[541339.883159] .se->vruntime : 158399210.487787
[541339.883160] .se->sum_exec_runtime : 36215.003680
[541339.883162] .se->statistics.wait_start : 0.000000
[541339.883164] .se->statistics.sleep_start : 0.000000
[541339.883165] .se->statistics.block_start : 0.000000
[541339.883167] .se->statistics.sleep_max : 0.000000
[541339.883169] .se->statistics.block_max : 0.000000
[541339.883170] .se->statistics.exec_max : 3.998893
[541339.883172] .se->statistics.slice_max : 27.649075
[541339.883173] .se->statistics.wait_max : 319.060401
[541339.883175] .se->statistics.wait_sum : 6432.546644
[541339.883177] .se->statistics.wait_count : 586146
[541339.883178] .se->load.weight : 2
[541339.883180] .se->avg.runnable_avg_sum : 36
[541339.883181] .se->avg.runnable_avg_period : 47963
[541339.883183] .se->avg.load_avg_contrib : 0
[541339.883184] .se->avg.decay_count : 516261929
[541339.883187]
[541339.883187] cfs_rq[11]:/autogroup-11436
[541339.883189] .exec_clock : 313.615675
[541339.883191] .MIN_vruntime : 0.000001
[541339.883193] .min_vruntime : 2653.797277
[541339.883194] .max_vruntime : 0.000001
[541339.883196] .spread : 0.000000
[541339.883198] .spread0 : -163199928.013121
[541339.883199] .nr_spread_over : 190
[541339.883201] .nr_running : 1
[541339.883202] .load : 1024
[541339.883204] .runnable_load_avg : 1022
[541339.883205] .blocked_load_avg : 0
[541339.883207] .tg_load_contrib : 1022
[541339.883209] .tg_runnable_contrib : 92
[541339.883210] .tg_load_avg : 4131
[541339.883211] .tg->runnable_avg : 289
[541339.883213] .tg->cfs_bandwidth.timer_active: 0
[541339.883214] .throttled : 0
[541339.883216] .throttle_count : 0
[541339.883218] .se->exec_start : 541339882.764108
[541339.883220] .se->vruntime : 158399244.614574
[541339.883221] .se->sum_exec_runtime : 313.633451
[541339.883223] .se->statistics.wait_start : 0.000000
[541339.883224] .se->statistics.sleep_start : 0.000000
[541339.883226] .se->statistics.block_start : 0.000000
[541339.883227] .se->statistics.sleep_max : 0.000000
[541339.883228] .se->statistics.block_max : 0.000000
[541339.883229] .se->statistics.exec_max : 2.440071
[541339.883231] .se->statistics.slice_max : 1.322612
[541339.883232] .se->statistics.wait_max : 13.718429
[541339.883234] .se->statistics.wait_sum : 91.974083
[541339.883236] .se->statistics.wait_count : 976
[541339.883237] .se->load.weight : 253
[541339.883238] .se->avg.runnable_avg_sum : 4155
[541339.883240] .se->avg.runnable_avg_period : 46131
[541339.883241] .se->avg.load_avg_contrib : 71
[541339.883242] .se->avg.decay_count : 516261943
[541339.883244]
[541339.883244] cfs_rq[11]:/autogroup-358
[541339.883246] .exec_clock : 42211.745596
[541339.883248] .MIN_vruntime : 0.000001
[541339.883249] .min_vruntime : 37353.569300
[541339.883250] .max_vruntime : 0.000001
[541339.883252] .spread : 0.000000
[541339.883253] .spread0 : -163165228.241098
[541339.883254] .nr_spread_over : 11
[541339.883256] .nr_running : 0
[541339.883257] .load : 0
[541339.883258] .runnable_load_avg : 0
[541339.883259] .blocked_load_avg : 0
[541339.883261] .tg_load_contrib : 0
[541339.883262] .tg_runnable_contrib : 0
[541339.883263] .tg_load_avg : 0
[541339.883265] .tg->runnable_avg : 0
[541339.883266] .tg->cfs_bandwidth.timer_active: 0
[541339.883267] .throttled : 0
[541339.883268] .throttle_count : 0
[541339.883270] .se->exec_start : 541339856.915162
[541339.883272] .se->vruntime : 158399195.565660
[541339.883273] .se->sum_exec_runtime : 42219.283818
[541339.883274] .se->statistics.wait_start : 0.000000
[541339.883276] .se->statistics.sleep_start : 0.000000
[541339.883277] .se->statistics.block_start : 0.000000
[541339.883278] .se->statistics.sleep_max : 0.000000
[541339.883280] .se->statistics.block_max : 0.000000
[541339.883281] .se->statistics.exec_max : 131.154224
[541339.883283] .se->statistics.slice_max : 5.759236
[541339.883284] .se->statistics.wait_max : 38.738001
[541339.883286] .se->statistics.wait_sum : 4344.219804
[541339.883287] .se->statistics.wait_count : 393529
[541339.883288] .se->load.weight : 2
[541339.883290] .se->avg.runnable_avg_sum : 23
[541339.883292] .se->avg.runnable_avg_period : 47153
[541339.883293] .se->avg.load_avg_contrib : 0
[541339.883294] .se->avg.decay_count : 516261918
[541339.883297]
[541339.883297] cfs_rq[11]:/autogroup-347
[541339.883299] .exec_clock : 88215.301341
[541339.883301] .MIN_vruntime : 0.000001
[541339.883302] .min_vruntime : 70881.548320
[541339.883304] .max_vruntime : 0.000001
[541339.883305] .spread : 0.000000
[541339.883307] .spread0 : -163131700.262078
[541339.883308] .nr_spread_over : 0
[541339.883310] .nr_running : 0
[541339.883311] .load : 0
[541339.883313] .runnable_load_avg : 0
[541339.883314] .blocked_load_avg : 23
[541339.883316] .tg_load_contrib : 23
[541339.883318] .tg_runnable_contrib : 3
[541339.883319] .tg_load_avg : 86
[541339.883320] .tg->runnable_avg : 61
[541339.883322] .tg->cfs_bandwidth.timer_active: 0
[541339.883324] .throttled : 0
[541339.883325] .throttle_count : 0
[541339.883327] .se->exec_start : 541339868.162550
[541339.883329] .se->vruntime : 158399210.544178
[541339.883330] .se->sum_exec_runtime : 88387.395608
[541339.883332] .se->statistics.wait_start : 0.000000
[541339.883333] .se->statistics.sleep_start : 0.000000
[541339.883335] .se->statistics.block_start : 0.000000
[541339.883336] .se->statistics.sleep_max : 0.000000
[541339.883337] .se->statistics.block_max : 0.000000
[541339.883339] .se->statistics.exec_max : 11.258032
[541339.883341] .se->statistics.slice_max : 2.295264
[541339.883343] .se->statistics.wait_max : 16.131926
[541339.883344] .se->statistics.wait_sum : 137095.798626
[541339.883346] .se->statistics.wait_count : 1865964
[541339.883347] .se->load.weight : 2
[541339.883349] .se->avg.runnable_avg_sum : 157
[541339.883350] .se->avg.runnable_avg_period : 47642
[541339.883352] .se->avg.load_avg_contrib : 16
[541339.883353] .se->avg.decay_count : 516261929
[541339.883356]
[541339.883356] cfs_rq[11]:/autogroup-11415
[541339.883358] .exec_clock : 42237.381216
[541339.883359] .MIN_vruntime : 0.000001
[541339.883361] .min_vruntime : 27866.450600
[541339.883363] .max_vruntime : 0.000001
[541339.883364] .spread : 0.000000
[541339.883365] .spread0 : -163174715.359798
[541339.883367] .nr_spread_over : 0
[541339.883368] .nr_running : 0
[541339.883370] .load : 0
[541339.883371] .runnable_load_avg : 0
[541339.883373] .blocked_load_avg : 171
[541339.883374] .tg_load_contrib : 167
[541339.883375] .tg_runnable_contrib : 148
[541339.883377] .tg_load_avg : 1461
[541339.883379] .tg->runnable_avg : 1097
[541339.883380] .tg->cfs_bandwidth.timer_active: 0
[541339.883381] .throttled : 0
[541339.883382] .throttle_count : 0
[541339.883384] .se->exec_start : 541339875.164399
[541339.883386] .se->vruntime : 158399220.433329
[541339.883387] .se->sum_exec_runtime : 42243.512816
[541339.883389] .se->statistics.wait_start : 0.000000
[541339.883390] .se->statistics.sleep_start : 0.000000
[541339.883392] .se->statistics.block_start : 0.000000
[541339.883393] .se->statistics.sleep_max : 0.000000
[541339.883394] .se->statistics.block_max : 0.000000
[541339.883396] .se->statistics.exec_max : 3.997046
[541339.883397] .se->statistics.slice_max : 6.793017
[541339.883399] .se->statistics.wait_max : 19.359947
[541339.883400] .se->statistics.wait_sum : 49399.894732
[541339.883402] .se->statistics.wait_count : 154935
[541339.883403] .se->load.weight : 2
[541339.883405] .se->avg.runnable_avg_sum : 6696
[541339.883406] .se->avg.runnable_avg_period : 46099
[541339.883407] .se->avg.load_avg_contrib : 129
[541339.883409] .se->avg.decay_count : 516261936
[541339.883411]
[541339.883411] cfs_rq[11]:/autogroup-11432
[541339.883413] .exec_clock : 172609.713291
[541339.883415] .MIN_vruntime : 0.000001
[541339.883416] .min_vruntime : 149473.481988
[541339.883417] .max_vruntime : 0.000001
[541339.883419] .spread : 0.000000
[541339.883420] .spread0 : -163053108.328410
[541339.883422] .nr_spread_over : 841
[541339.883423] .nr_running : 0
[541339.883424] .load : 0
[541339.883425] .runnable_load_avg : 0
[541339.883427] .blocked_load_avg : 0
[541339.883428] .tg_load_contrib : 0
[541339.883430] .tg_runnable_contrib : 51
[541339.883431] .tg_load_avg : 1961
[541339.883433] .tg->runnable_avg : 1801
[541339.883434] .tg->cfs_bandwidth.timer_active: 0
[541339.883436] .throttled : 0
[541339.883437] .throttle_count : 0
[541339.883439] .se->exec_start : 541339854.793523
[541339.883440] .se->vruntime : 158399196.419004
[541339.883442] .se->sum_exec_runtime : 172624.690299
[541339.883444] .se->statistics.wait_start : 0.000000
[541339.883445] .se->statistics.sleep_start : 0.000000
[541339.883447] .se->statistics.block_start : 0.000000
[541339.883448] .se->statistics.sleep_max : 0.000000
[541339.883449] .se->statistics.block_max : 0.000000
[541339.883451] .se->statistics.exec_max : 4.002824
[541339.883452] .se->statistics.slice_max : 60.072988
[541339.883454] .se->statistics.wait_max : 42.027786
[541339.883455] .se->statistics.wait_sum : 80432.081523
[541339.883457] .se->statistics.wait_count : 204584
[541339.883458] .se->load.weight : 2
[541339.883460] .se->avg.runnable_avg_sum : 2398
[541339.883461] .se->avg.runnable_avg_period : 47293
[541339.883462] .se->avg.load_avg_contrib : 0
[541339.883464] .se->avg.decay_count : 516261916
[541339.883466]
[541339.883466] cfs_rq[11]:/autogroup-11424
[541339.883468] .exec_clock : 24405.368001
[541339.883469] .MIN_vruntime : 0.000001
[541339.883471] .min_vruntime : 11025.096398
[541339.883473] .max_vruntime : 0.000001
[541339.883474] .spread : 0.000000
[541339.883475] .spread0 : -163191556.714000
[541339.883477] .nr_spread_over : 12
[541339.883478] .nr_running : 0
[541339.883480] .load : 0
[541339.883481] .runnable_load_avg : 0
[541339.883483] .blocked_load_avg : 0
[541339.883484] .tg_load_contrib : 0
[541339.883486] .tg_runnable_contrib : 0
[541339.883488] .tg_load_avg : 179
[541339.883489] .tg->runnable_avg : 157
[541339.883491] .tg->cfs_bandwidth.timer_active: 0
[541339.883492] .throttled : 0
[541339.883494] .throttle_count : 0
[541339.883496] .se->exec_start : 541339793.351767
[541339.883497] .se->vruntime : 158399172.286554
[541339.883499] .se->sum_exec_runtime : 24407.797967
[541339.883500] .se->statistics.wait_start : 0.000000
[541339.883502] .se->statistics.sleep_start : 0.000000
[541339.883503] .se->statistics.block_start : 0.000000
[541339.883505] .se->statistics.sleep_max : 0.000000
[541339.883506] .se->statistics.block_max : 0.000000
[541339.883508] .se->statistics.exec_max : 4.007454
[541339.883509] .se->statistics.slice_max : 32.980180
[541339.883511] .se->statistics.wait_max : 16.250153
[541339.883513] .se->statistics.wait_sum : 6865.611656
[541339.883514] .se->statistics.wait_count : 62092
[541339.883516] .se->load.weight : 2
[541339.883517] .se->avg.runnable_avg_sum : 34
[541339.883519] .se->avg.runnable_avg_period : 46334
[541339.883520] .se->avg.load_avg_contrib : 0
[541339.883522] .se->avg.decay_count : 516261858
[541339.883524]
[541339.883524] cfs_rq[11]:/autogroup-11406
[541339.883526] .exec_clock : 275544.825470
[541339.883527] .MIN_vruntime : 0.000001
[541339.883528] .min_vruntime : 164600.417032
[541339.883530] .max_vruntime : 0.000001
[541339.883531] .spread : 0.000000
[541339.883533] .spread0 : -163037981.393366
[541339.883534] .nr_spread_over : 7
[541339.883535] .nr_running : 0
[541339.883537] .load : 0
[541339.883538] .runnable_load_avg : 0
[541339.883539] .blocked_load_avg : 525
[541339.883541] .tg_load_contrib : 493
[541339.883542] .tg_runnable_contrib : 381
[541339.883544] .tg_load_avg : 12553
[541339.883545] .tg->runnable_avg : 6210
[541339.883546] .tg->cfs_bandwidth.timer_active: 0
[541339.883548] .throttled : 0
[541339.883549] .throttle_count : 0
[541339.883551] .se->exec_start : 541339882.649226
[541339.883553] .se->vruntime : 158399255.939352
[541339.883554] .se->sum_exec_runtime : 275576.987789
[541339.883556] .se->statistics.wait_start : 0.000000
[541339.883557] .se->statistics.sleep_start : 0.000000
[541339.883559] .se->statistics.block_start : 0.000000
[541339.883560] .se->statistics.sleep_max : 0.000000
[541339.883561] .se->statistics.block_max : 0.000000
[541339.883563] .se->statistics.exec_max : 3.998461
[541339.883564] .se->statistics.slice_max : 15.433618
[541339.883566] .se->statistics.wait_max : 62.196066
[541339.883568] .se->statistics.wait_sum : 276190.689555
[541339.883569] .se->statistics.wait_count : 950196
[541339.883570] .se->load.weight : 2
[541339.883572] .se->avg.runnable_avg_sum : 17855
[541339.883573] .se->avg.runnable_avg_period : 46967
[541339.883574] .se->avg.load_avg_contrib : 44
[541339.883576] .se->avg.decay_count : 516261943
[541339.883578]
[541339.883578] cfs_rq[11]:/autogroup-11408
[541339.883580] .exec_clock : 234478.973815
[541339.883582] .MIN_vruntime : 0.000001
[541339.883583] .min_vruntime : 128332.219025
[541339.883585] .max_vruntime : 0.000001
[541339.883586] .spread : 0.000000
[541339.883588] .spread0 : -163074249.591373
[541339.883589] .nr_spread_over : 0
[541339.883591] .nr_running : 0
[541339.883592] .load : 0
[541339.883593] .runnable_load_avg : 0
[541339.883595] .blocked_load_avg : 351
[541339.883596] .tg_load_contrib : 351
[541339.883597] .tg_runnable_contrib : 241
[541339.883599] .tg_load_avg : 6554
[541339.883600] .tg->runnable_avg : 4441
[541339.883602] .tg->cfs_bandwidth.timer_active: 0
[541339.883603] .throttled : 0
[541339.883605] .throttle_count : 0
[541339.883607] .se->exec_start : 541339880.440771
[541339.883608] .se->vruntime : 158399243.064817
[541339.883610] .se->sum_exec_runtime : 234496.576886
[541339.883611] .se->statistics.wait_start : 0.000000
[541339.883613] .se->statistics.sleep_start : 0.000000
[541339.883614] .se->statistics.block_start : 0.000000
[541339.883616] .se->statistics.sleep_max : 0.000000
[541339.883617] .se->statistics.block_max : 0.000000
[541339.883619] .se->statistics.exec_max : 3.997531
[541339.883620] .se->statistics.slice_max : 12.972785
[541339.883622] .se->statistics.wait_max : 59.975528
[541339.883623] .se->statistics.wait_sum : 148988.894220
[541339.883625] .se->statistics.wait_count : 200175
[541339.883626] .se->load.weight : 2
[541339.883628] .se->avg.runnable_avg_sum : 11692
[541339.883629] .se->avg.runnable_avg_period : 47894
[541339.883631] .se->avg.load_avg_contrib : 61
[541339.883632] .se->avg.decay_count : 516261941
[541339.883634]
[541339.883634] cfs_rq[11]:/
[541339.883636] .exec_clock : 34573648.410766
[541339.883638] .MIN_vruntime : 0.000001
[541339.883639] .min_vruntime : 158399255.939352
[541339.883641] .max_vruntime : 0.000001
[541339.883642] .spread : 0.000000
[541339.883644] .spread0 : -4803325.871046
[541339.883645] .nr_spread_over : 20539
[541339.883646] .nr_running : 0
[541339.883648] .load : 0
[541339.883649] .runnable_load_avg : 0
[541339.883651] .blocked_load_avg : 3256
[541339.883652] .tg_load_contrib : 3168
[541339.883654] .tg_runnable_contrib : 565
[541339.883656] .tg_load_avg : 10813
[541339.883657] .tg->runnable_avg : 10323
[541339.883659] .tg->cfs_bandwidth.timer_active: 0
[541339.883660] .throttled : 0
[541339.883662] .throttle_count : 0
[541339.883664] .avg->runnable_avg_sum : 25568
[541339.883665] .avg->runnable_avg_period : 46392
[541339.883667]
[541339.883667] rt_rq[11]:
[541339.883668] .rt_nr_running : 0
[541339.883670] .rt_throttled : 0
[541339.883672] .rt_time : 0.000000
[541339.883673] .rt_runtime : 950.000000
[541339.883675]
[541339.883675] runnable tasks:
[541339.883675] task PID tree-key switches prio exec-runtime sum-exec sum-sleep
[541339.883675] ----------------------------------------------------------------------------------------------------------
[541339.883679] rcuos/10 18 158399174.970610 4862247 120 158399174.970610 109018.040066 541067428.660186 0 /
[541339.883685] rcuos/11 19 158399174.971413 4846168 120 158399174.971413 108625.788360 541068232.545212 0 /
[541339.883693] watchdog/11 93 -11.972091 135471 0 -11.972091 3058.654530 0.002185 0 /
[541339.883699] migration/11 94 0.000000 174448 0 0.000000 3129.423457 0.001453 0 /
[541339.883704] ksoftirqd/11 95 158399169.077910 415240 120 158399169.077910 5671.684998 541255859.677126 0 /
[541339.883709] kworker/11:0H 97 2261.965576 7 100 2261.965576 0.033913 13147.739658 0 /
[541339.883715] khelper 118 10.961681 2 100 10.961681 0.011039 0.004822 0 /
[541339.883720] netns 120 35.041519 2 100 35.041519 0.007276 0.003399 0 /
[541339.883726] kswapd0 136 150057361.350718 362124 120 150057361.350718 335908.730697 516191984.779958 0 /
[541339.883732] ipv6_addrconf 158 72.237500 2 100 72.237500 0.013227 0.005480 0 /
[541339.883742] kworker/11:1H 725 158094867.621050 2698 100 158094867.621050 35.442834 541265168.995202 0 /
[541339.883748] kdmflush 1416 3171.850748 2 100 3171.850748 0.009078 0.004090 0 /
[541339.883755] kdmflush 1598 3319.319754 2 100 3319.319754 0.009993 0.003983 0 /
[541339.883760] kdmflush 1599 3331.328166 2 100 3331.328166 0.009226 0.005311 0 /
[541339.883764] bioset 1611 3343.339062 2 100 3343.339062 0.006790 0.004010 0 /
[541339.883769] kdmflush 1842 3610.142277 2 100 3610.142277 0.008435 0.003317 0 /
[541339.883774] kdmflush 1901 3646.041130 2 100 3646.041130 0.010951 0.004720 0 /
[541339.883778] xfsalloc 2575 3859.628908 2 100 3859.628908 0.008072 0.003289 0 /
[541339.883783] xfs_mru_cache 2576 3871.633765 2 100 3871.633765 0.005324 0.002551 0 /
[541339.883788] xfslogd 2578 3883.638432 2 100 3883.638432 0.005159 0.002506 0 /
[541339.883794] SignalSender 2909 2281.726664 133 120 2281.726664 1.030578 56.967959 0 /autogroup-242
[541339.883801] java 3115 37262.897279 930 120 37262.897279 2104.635845 539899914.804257 0 /autogroup-264
[541339.883807] java 422 37399.869786 663621 120 37399.869786 30950.195041 123537410.374056 0 /autogroup-264
[541339.883817] java 3234 85.194255 1506 120 85.194255 3464.962976 1634.183233 0 /autogroup-264
[541339.883822] java 3259 37200.464822 505 120 37200.464822 203.806989 538548086.086618 0 /autogroup-264
[541339.883829] java 3431 37398.189557 271754 120 37398.189557 11388.280148 541275314.244498 0 /autogroup-264
[541339.883835] java 4334 37404.591062 5415238 120 37404.591062 220723.770018 541037644.418661 0 /autogroup-264
[541339.883840] java 4336 47.403797 6 120 47.403797 0.545121 610.999431 0 /autogroup-264
[541339.883846] java 4341 47.273497 5 120 47.273497 0.355178 610.592522 0 /autogroup-264
[541339.883852] java 4439 71.546515 4 120 71.546515 0.467512 18.930238 0 /autogroup-264
[541339.883857] java 4441 85.694625 7 120 85.694625 0.285274 0.286069 0 /autogroup-264
[541339.883863] java 4446 123.435635 14 120 123.435635 1.024724 6.268224 0 /autogroup-264
[541339.883869] java 4449 122.862399 5 120 122.862399 0.401767 5.587719 0 /autogroup-264
[541339.883876] memcached 4565 70881.548320 8598081 120 70881.548320 389590.807183 540271262.352904 0 /autogroup-347
[541339.883882] memcached 4567 10.963913 1 120 10.963913 0.012496 0.000000 0 /autogroup-347
[541339.883890] java 5227 1821.712746 36 120 1821.712746 36.355797 7860.947656 0 /autogroup-358
[541339.883896] java 5235 1809.742700 8 120 1809.742700 0.287661 7852.282327 0 /autogroup-358
[541339.883902] java 5236 1809.933450 4 120 1809.933450 0.271414 57111.336637 0 /autogroup-358
[541339.883908] java 5240 37353.569300 10820625 120 37353.569300 438288.035076 540787378.215907 0 /autogroup-358
[541339.883916] java 418 37345.222254 1576330 120 37345.222254 80077.626573 123487217.659244 0 /autogroup-358
[541339.883923] java 373 195571.933955 9023491 120 195571.933955 1275046.348442 122247546.824683 0 /autogroup-8620
[541339.883932] java 7339 194894.056235 87 120 194894.056235 6.799343 1618636.079470 0 /autogroup-8620
[541339.883938] mysqld 19152 299057.259097 877596 120 299057.259097 18852.370828 111075709.402654 0 /autogroup-8936
[541339.883945] mysqld 19167 298968.588506 22249 120 298968.588506 736.496039 111091952.054936 0 /autogroup-8936
[541339.883952] mysqld 19508 298727.931321 26 120 298727.931321 4.728775 89848.631170 0 /autogroup-8936
[541339.883959] jfsCommit 23554 114566989.349713 226857 120 114566989.349713 4573.068072 7731222.245806 0 /
[541339.883967] kworker/11:2 13285 158356869.762214 47086 120 158356869.762214 934.151705 10187737.694124 0 /
[541339.883973] sudo 29175 107.664589 3 120 107.664589 8.448837 0.811823 0 /autogroup-11329
[541339.883983] apache2 5397 1526.377340 2 120 1526.377340 26.385212 0.000000 0 /autogroup-356
[541339.883989] xfs-data/dm-25 6339 152758261.173079 2 100 152758261.173079 0.024191 0.055654 0 /
[541339.883994] xfs-data/dm-21 6345 152758301.335902 2 100 152758301.335902 0.021961 0.057056 0 /
[541339.883999] xfs-cil/dm-21 6347 152758325.464213 2 100 152758325.464213 0.067413 0.054131 0 /
[541339.884004] xfs-data/dm-15 6351 152758349.550776 2 100 152758349.550776 0.055416 0.052676 0 /
[541339.884009] xfs-cil/dm-15 6353 152758373.685146 2 100 152758373.685146 0.074516 0.053091 0 /
[541339.884014] xfs-data/dm-14 6357 152758397.725768 2 100 152758397.725768 0.028145 0.067780 0 /
[541339.884019] xfs-data/dm-28 6369 152758421.828385 2 100 152758421.828385 0.049251 0.014919 0 /
[541339.884024] xfsaild/dm-28 6372 158399191.289115 15333 120 158399191.289115 1664.743382 1749130.898706 0 /
[541339.884030] swift-container 6402 20.160753 117 120 20.160753 208.563600 163.380559 0 /autogroup-11409
[541339.884043] swift-object-se 7194 164578.167575 50016 120 164578.167575 1734.260802 1642327.925131 0 /autogroup-11406
[541339.884048] swift-object-se 7674 164587.091616 48812 120 164587.091616 1727.848699 1578650.417637 0 /autogroup-11406
[541339.884055] swift-object-se 8003 164567.633148 51482 120 164567.633148 1739.143141 1578329.138986 0 /autogroup-11406
[541339.884061] swift-object-se 7296 164539.289106 50245 120 164539.289106 1781.908830 1638430.935713 0 /autogroup-11406
[541339.884068] swift-object-se 8042 164582.765313 49064 120 164582.765313 1724.355731 1578178.051677 0 /autogroup-11406
[541339.884073] swift-object-se 6614 164568.009169 46749 120 164568.009169 1650.865910 1703188.048614 0 /autogroup-11406
[541339.884080] swift-object-se 7980 164588.541725 47719 120 164588.541725 1724.061018 1579091.414750 0 /autogroup-11406
[541339.884087] swift-object-se 7719 164588.534014 49243 120 164588.534014 1718.939938 1578970.215654 0 /autogroup-11406
[541339.884092] swift-object-se 7732 164580.363957 48565 120 164580.363957 1734.475546 1578969.150969 0 /autogroup-11406
[541339.884097] swift-object-se 8064 164582.670453 47346 120 164582.670453 1694.949167 1578289.499345 0 /autogroup-11406
[541339.884103] swift-object-se 7188 164584.075242 49716 120 164584.075242 1705.587357 1642889.512178 0 /autogroup-11406
[541339.884108] swift-object-se 7193 164559.962459 49327 120 164559.962459 1705.713843 1643188.053451 0 /autogroup-11406
[541339.884116] swift-object-se 7861 164496.441183 49339 120 164496.441183 1739.913852 1578116.204333 0 /autogroup-11406
[541339.884121] swift-object-se 7844 164588.417032 50437 120 164588.417032 1716.193467 1578775.820250 0 /autogroup-11406
[541339.884126] swift-object-se 7857 164567.587159 50306 120 164567.587159 1763.812970 1578401.130368 0 /autogroup-11406
[541339.884132] swift-object-se 7872 164580.284295 49213 120 164580.284295 1744.541571 1578259.569775 0 /autogroup-11406
[541339.884137] swift-object-se 6677 164583.628599 48307 120 164583.628599 1698.216318 1672802.039527 0 /autogroup-11406
[541339.884142] swift-object-se 6680 164564.367804 49782 120 164564.367804 1722.503586 1672927.081182 0 /autogroup-11406
[541339.884149] swift-object-se 8013 164564.356842 48036 120 164564.356842 1702.738169 1579166.793621 0 /autogroup-11406
[541339.884156] swift-object-se 7815 164588.549175 49799 120 164588.549175 1716.535257 1578065.289908 0 /autogroup-11406
[541339.884165] swift-object-se 8009 164511.084615 47933 120 164511.084615 1700.864058 1578288.111409 0 /autogroup-11406
[541339.884170] swift-object-se 6538 164569.827465 51640 120 164569.827465 1756.033889 1731786.985777 0 /autogroup-11406
[541339.884175] swift-object-se 6546 164588.819637 52101 120 164588.819637 1785.593859 1732176.772265 0 /autogroup-11406
[541339.884181] swift-object-se 7699 164546.037630 51359 120 164546.037630 1747.784604 1577790.491598 0 /autogroup-11406
[541339.884187] swift-object-se 8074 164588.712835 47158 120 164588.712835 1675.411878 1578487.559357 0 /autogroup-11406
[541339.884193] swift-object-se 7933 164513.125997 48339 120 164513.125997 1714.107338 1578118.259145 0 /autogroup-11406
[541339.884199] swift-object-se 8019 164588.532826 47609 120 164588.532826 1685.440918 1579009.030068 0 /autogroup-11406
[541339.884207] swift-object-se 7854 164454.022188 47821 120 164454.022188 1663.223296 1578959.296360 0 /autogroup-11406
[541339.884214] swift-object-se 8050 164588.529286 48886 120 164588.529286 1742.423048 1577962.632841 0 /autogroup-11406
[541339.884221] swift-object-se 7786 164588.504738 50461 120 164588.504738 1744.255514 1578199.616041 0 /autogroup-11406
[541339.884227] swift-object-se 7938 164580.180458 46584 120 164580.180458 1683.165286 1578577.023997 0 /autogroup-11406
[541339.884232] swift-object-se 7941 164587.786216 47561 120 164587.786216 1698.434485 1578565.365271 0 /autogroup-11406
[541339.884239] swift-proxy-ser 6529 128332.219025 144247 120 128332.219025 132427.077321 1417987.837512 0 /autogroup-11408
[541339.884245] nginx 6722 27866.450600 239646 120 27866.450600 60348.338817 1552511.760835 0 /autogroup-11415
[541339.884254] java 7005 193.587889 50 120 193.587889 35.349635 20274.240245 0 /autogroup-11418
[541339.884259] java 7009 132.026468 6 120 132.026468 0.252733 0.571744 0 /autogroup-11418
[541339.884268] java 7375 931.690798 23 120 931.690798 3.216805 1563853.798683 0 /autogroup-11418
[541339.884274] java 19920 979.798428 2 120 979.798428 0.283015 0.081003 0 /autogroup-11418
[541339.884281] java 6966 10944.839056 5104 120 10944.839056 16761.927197 1633961.831161 0 /autogroup-11424
[541339.884290] java 7161 10792.858846 96 120 10792.858846 95.122275 1620293.113124 0 /autogroup-11424
[541339.884295] java 7207 9849.830923 24 120 9849.830923 2.846493 1498523.333109 0 /autogroup-11424
[541339.884301] java 7247 11013.131915 7258 120 11013.131915 5758.231089 1648038.402381 0 /autogroup-11424
[541339.884306] java 7317 11013.294050 6970 120 11013.294050 6067.020978 1644684.426645 0 /autogroup-11424
[541339.884313] java 7344 11003.929374 24925 120 11003.929374 5631.006957 1644032.861009 0 /autogroup-11424
[541339.884319] java 7563 11021.775037 7372 120 11021.775037 6324.137721 1590658.398142 0 /autogroup-11424
[541339.884324] java 7564 11013.110797 7204 120 11013.110797 5626.841920 1591579.848763 0 /autogroup-11424
[541339.884332] magfsd 16735 149460.569010 23110 120 149460.569010 4959.558166 389962.744152 0 /autogroup-11432
[541339.884340] magfsd 19890 149473.500685 1058 120 149473.500685 946.491823 14758.285934 0 /autogroup-11432
[541339.884345] magfsd 19918 149470.799588 348 120 149470.799588 365.289620 8720.063007 0 /autogroup-11432
[541339.884352] sh 7221 2642.753107 986 120 2642.753107 112.820410 1656403.534631 0 /autogroup-11436
[541339.884357] kworker/11:0 8093 158399246.783946 42088 120 158399246.783946 796.443064 1589618.503609 0 /
[541339.884363] sudo 8299 18.881096 5 120 18.881096 6.109964 13.116591 0 /autogroup-11441
[541339.884369] kworker/11:1 8754 158036564.344367 9760 120 158036564.344367 180.389223 1330242.272917 0 /
[541339.884377] kworker/11:3 16295 158399166.133415 17232 120 158399166.133415 235.147416 504498.520217 0 /
[541339.884383] kworker/11:4 16963 158215643.937419 1771 120 158215643.937419 33.464129 287075.646243 0 /
[541339.884391] timeout 19940 2654.484204 2 120 2654.484204 0.757880 0.000000 0 /autogroup-11436
[541339.884408]
[541339.884417] cpu#12, 2199.987 MHz
[541339.884424] .nr_running : 3
[541339.884431] .load : 223
[541339.884438] .nr_switches : 271138210
[541339.884445] .nr_load_updates : 21692901
[541339.884453] .nr_uninterruptible : 308118
[541339.884460] .next_balance : 4430.360681
[541339.884467] .curr->pid : 7773
[541339.884473] .clock : 541339884.271948
[541339.884480] .cpu_load[0] : 244
[541339.884486] .cpu_load[1] : 195
[541339.884493] .cpu_load[2] : 148
[541339.884500] .cpu_load[3] : 111
[541339.884502] .cpu_load[4] : 100
[541339.884503] .yld_count : 13144911
[541339.884505] .sched_count : 284337948
[541339.884506] .sched_goidle : 101788495
[541339.884507] .avg_idle : 102954
[541339.884509] .max_idle_balance_cost : 500000
[541339.884510] .ttwu_count : 131803727
[541339.884512] .ttwu_local : 21812437
[541339.884514]
[541339.884514] cfs_rq[12]:/autogroup-11436
[541339.884516] .exec_clock : 354.610760
[541339.884518] .MIN_vruntime : 0.000001
[541339.884519] .min_vruntime : 2430.849073
[541339.884521] .max_vruntime : 0.000001
[541339.884522] .spread : 0.000000
[541339.884524] .spread0 : -163200150.961325
[541339.884525] .nr_spread_over : 217
[541339.884526] .nr_running : 0
[541339.884527] .load : 0
[541339.884528] .runnable_load_avg : 0
[541339.884529] .blocked_load_avg : 0
[541339.884531] .tg_load_contrib : 0
[541339.884532] .tg_runnable_contrib : 5
[541339.884533] .tg_load_avg : 3112
[541339.884535] .tg->runnable_avg : 313
[541339.884536] .tg->cfs_bandwidth.timer_active: 0
[541339.884537] .throttled : 0
[541339.884539] .throttle_count : 0
[541339.884540] .se->exec_start : 541339882.758653
[541339.884542] .se->vruntime : 158090586.195253
[541339.884543] .se->sum_exec_runtime : 354.610760
[541339.884545] .se->statistics.wait_start : 0.000000
[541339.884546] .se->statistics.sleep_start : 0.000000
[541339.884547] .se->statistics.block_start : 0.000000
[541339.884549] .se->statistics.sleep_max : 0.000000
[541339.884550] .se->statistics.block_max : 0.000000
[541339.884551] .se->statistics.exec_max : 3.208161
[541339.884553] .se->statistics.slice_max : 1.629528
[541339.884554] .se->statistics.wait_max : 11.943866
[541339.884556] .se->statistics.wait_sum : 96.781572
[541339.884557] .se->statistics.wait_count : 1010
[541339.884558] .se->load.weight : 2
[541339.884560] .se->avg.runnable_avg_sum : 252
[541339.884562] .se->avg.runnable_avg_period : 48999
[541339.884563] .se->avg.load_avg_contrib : 0
[541339.884565] .se->avg.decay_count : 516261943
[541339.884567]
[541339.884567] cfs_rq[12]:/autogroup-11424
[541339.884569] .exec_clock : 23647.149068
[541339.884571] .MIN_vruntime : 0.000001
[541339.884572] .min_vruntime : 10875.653493
[541339.884574] .max_vruntime : 0.000001
[541339.884576] .spread : 0.000000
[541339.884577] .spread0 : -163191706.156905
[541339.884579] .nr_spread_over : 11
[541339.884581] .nr_running : 0
[541339.884582] .load : 0
[541339.884584] .runnable_load_avg : 0
[541339.884585] .blocked_load_avg : 13
[541339.884587] .tg_load_contrib : 13
[541339.884589] .tg_runnable_contrib : 1
[541339.884590] .tg_load_avg : 108
[541339.884592] .tg->runnable_avg : 150
[541339.884593] .tg->cfs_bandwidth.timer_active: 0
[541339.884599] .throttled : 0
[541339.884606] .throttle_count : 0
[541339.884614] .se->exec_start : 541339783.297539
[541339.884620] .se->vruntime : 158090477.681049
[541339.884627] .se->sum_exec_runtime : 23649.490570
[541339.884635] .se->statistics.wait_start : 0.000000
[541339.884642] .se->statistics.sleep_start : 0.000000
[541339.884650] .se->statistics.block_start : 0.000000
[541339.884657] .se->statistics.sleep_max : 0.000000
[541339.884663] .se->statistics.block_max : 0.000000
[541339.884670] .se->statistics.exec_max : 3.999882
[541339.884676] .se->statistics.slice_max : 11.990052
[541339.884681] .se->statistics.wait_max : 18.267997
[541339.884688] .se->statistics.wait_sum : 6944.230363
[541339.884695] .se->statistics.wait_count : 64728
[541339.884701] .se->load.weight : 2
[541339.884707] .se->avg.runnable_avg_sum : 56
[541339.884714] .se->avg.runnable_avg_period : 47570
[541339.884718] .se->avg.load_avg_contrib : 11
[541339.884720] .se->avg.decay_count : 516261848
[541339.884722]
[541339.884722] cfs_rq[12]:/autogroup-347
[541339.884724] .exec_clock : 87784.276615
[541339.884725] .MIN_vruntime : 0.000001
[541339.884727] .min_vruntime : 70670.495849
[541339.884728] .max_vruntime : 0.000001
[541339.884729] .spread : 0.000000
[541339.884731] .spread0 : -163131911.314549
[541339.884732] .nr_spread_over : 0
[541339.884734] .nr_running : 0
[541339.884735] .load : 0
[541339.884737] .runnable_load_avg : 0
[541339.884738] .blocked_load_avg : 23
[541339.884739] .tg_load_contrib : 23
[541339.884741] .tg_runnable_contrib : 25
[541339.884742] .tg_load_avg : 69
[541339.884744] .tg->runnable_avg : 60
[541339.884745] .tg->cfs_bandwidth.timer_active: 0
[541339.884747] .throttled : 0
[541339.884748] .throttle_count : 0
[541339.884750] .se->exec_start : 541339882.687700
[541339.884752] .se->vruntime : 158090585.964347
[541339.884753] .se->sum_exec_runtime : 87955.634838
[541339.884755] .se->statistics.wait_start : 0.000000
[541339.884757] .se->statistics.sleep_start : 0.000000
[541339.884758] .se->statistics.block_start : 0.000000
[541339.884759] .se->statistics.sleep_max : 0.000000
[541339.884761] .se->statistics.block_max : 0.000000
[541339.884762] .se->statistics.exec_max : 5.629581
[541339.884763] .se->statistics.slice_max : 1.649379
[541339.884765] .se->statistics.wait_max : 16.721066
[541339.884766] .se->statistics.wait_sum : 136102.454912
[541339.884767] .se->statistics.wait_count : 1854250
[541339.884769] .se->load.weight : 2
[541339.884770] .se->avg.runnable_avg_sum : 1172
[541339.884771] .se->avg.runnable_avg_period : 46938
[541339.884772] .se->avg.load_avg_contrib : 16
[541339.884773] .se->avg.decay_count : 516261943
[541339.884775]
[541339.884775] cfs_rq[12]:/autogroup-11432
[541339.884777] .exec_clock : 173958.094123
[541339.884778] .MIN_vruntime : 0.000001
[541339.884780] .min_vruntime : 151611.478996
[541339.884781] .max_vruntime : 0.000001
[541339.884782] .spread : 0.000000
[541339.884783] .spread0 : -163050970.331402
[541339.884785] .nr_spread_over : 968
[541339.884786] .nr_running : 1
[541339.884788] .load : 1024
[541339.884789] .runnable_load_avg : 414
[541339.884790] .blocked_load_avg : 467
[541339.884792] .tg_load_contrib : 465
[541339.884793] .tg_runnable_contrib : 87
[541339.884794] .tg_load_avg : 2162
[541339.884796] .tg->runnable_avg : 1795
[541339.884797] .tg->cfs_bandwidth.timer_active: 0
[541339.884798] .throttled : 0
[541339.884799] .throttle_count : 0
[541339.884801] .se->exec_start : 541339884.789728
[541339.884803] .se->vruntime : 158090594.122727
[541339.884804] .se->sum_exec_runtime : 173971.557543
[541339.884806] .se->statistics.wait_start : 0.000000
[541339.884808] .se->statistics.sleep_start : 0.000000
[541339.884809] .se->statistics.block_start : 0.000000
[541339.884811] .se->statistics.sleep_max : 0.000000
[541339.884812] .se->statistics.block_max : 0.000000
[541339.884814] .se->statistics.exec_max : 3.999175
[541339.884816] .se->statistics.slice_max : 63.760375
[541339.884818] .se->statistics.wait_max : 38.855641
[541339.884819] .se->statistics.wait_sum : 78595.530305
[541339.884821] .se->statistics.wait_count : 194863
[541339.884823] .se->load.weight : 2
[541339.884824] .se->avg.runnable_avg_sum : 4062
[541339.884826] .se->avg.runnable_avg_period : 46655
[541339.884827] .se->avg.load_avg_contrib : 220
[541339.884835] .se->avg.decay_count : 516261945
[541339.884842]
[541339.884842] cfs_rq[12]:/autogroup-11415
[541339.884851] .exec_clock : 44607.570093
[541339.884857] .MIN_vruntime : 0.000001
[541339.884863] .min_vruntime : 28787.161508
[541339.884869] .max_vruntime : 0.000001
[541339.884876] .spread : 0.000000
[541339.884882] .spread0 : -163173794.648890
[541339.884889] .nr_spread_over : 0
[541339.884895] .nr_running : 0
[541339.884902] .load : 0
[541339.884909] .runnable_load_avg : 0
[541339.884915] .blocked_load_avg : 311
[541339.884922] .tg_load_contrib : 308
[541339.884929] .tg_runnable_contrib : 15
[541339.884935] .tg_load_avg : 1368
[541339.884942] .tg->runnable_avg : 1045
[541339.884949] .tg->cfs_bandwidth.timer_active: 0
[541339.884957] .throttled : 0
[541339.884964] .throttle_count : 0
[541339.884971] .se->exec_start : 541339884.142570
[541339.884978] .se->vruntime : 158090592.198502
[541339.884985] .se->sum_exec_runtime : 44614.463938
[541339.884992] .se->statistics.wait_start : 0.000000
[541339.884999] .se->statistics.sleep_start : 0.000000
[541339.885006] .se->statistics.block_start : 0.000000
[541339.885013] .se->statistics.sleep_max : 0.000000
[541339.885016] .se->statistics.block_max : 0.000000
[541339.885018] .se->statistics.exec_max : 3.997132
[541339.885020] .se->statistics.slice_max : 15.183382
[541339.885021] .se->statistics.wait_max : 19.098137
[541339.885023] .se->statistics.wait_sum : 50689.450367
[541339.885025] .se->statistics.wait_count : 163726
[541339.885027] .se->load.weight : 2
[541339.885028] .se->avg.runnable_avg_sum : 955
[541339.885029] .se->avg.runnable_avg_period : 47681
[541339.885031] .se->avg.load_avg_contrib : 223
[541339.885033] .se->avg.decay_count : 516261944
[541339.885034]
[541339.885034] cfs_rq[12]:/autogroup-11408
[541339.885036] .exec_clock : 238007.124971
[541339.885038] .MIN_vruntime : 0.000001
[541339.885039] .min_vruntime : 130438.134789
[541339.885040] .max_vruntime : 0.000001
[541339.885042] .spread : 0.000000
[541339.885043] .spread0 : -163072143.675609
[541339.885044] .nr_spread_over : 0
[541339.885046] .nr_running : 0
[541339.885048] .load : 0
[541339.885049] .runnable_load_avg : 0
[541339.885050] .blocked_load_avg : 274
[541339.885052] .tg_load_contrib : 274
[541339.885053] .tg_runnable_contrib : 259
[541339.885054] .tg_load_avg : 5816
[541339.885056] .tg->runnable_avg : 4430
[541339.885057] .tg->cfs_bandwidth.timer_active: 0
[541339.885058] .throttled : 0
[541339.885059] .throttle_count : 0
[541339.885061] .se->exec_start : 541339881.637378
[541339.885062] .se->vruntime : 158090603.671029
[541339.885064] .se->sum_exec_runtime : 238025.198674
[541339.885065] .se->statistics.wait_start : 0.000000
[541339.885066] .se->statistics.sleep_start : 0.000000
[541339.885068] .se->statistics.block_start : 0.000000
[541339.885069] .se->statistics.sleep_max : 0.000000
[541339.885070] .se->statistics.block_max : 0.000000
[541339.885072] .se->statistics.exec_max : 3.997634
[541339.885073] .se->statistics.slice_max : 13.058530
[541339.885075] .se->statistics.wait_max : 52.747597
[541339.885076] .se->statistics.wait_sum : 153474.400029
[541339.885078] .se->statistics.wait_count : 205370
[541339.885079] .se->load.weight : 2
[541339.885080] .se->avg.runnable_avg_sum : 12082
[541339.885081] .se->avg.runnable_avg_period : 47649
[541339.885083] .se->avg.load_avg_contrib : 47
[541339.885084] .se->avg.decay_count : 516261942
[541339.885086]
[541339.885086] cfs_rq[12]:/autogroup-11406
[541339.885088] .exec_clock : 275185.469798
[541339.885089] .MIN_vruntime : 0.000001
[541339.885091] .min_vruntime : 164708.450702
[541339.885092] .max_vruntime : 0.000001
[541339.885093] .spread : 0.000000
[541339.885094] .spread0 : -163037873.359696
[541339.885096] .nr_spread_over : 21
[541339.885097] .nr_running : 0
[541339.885098] .load : 0
[541339.885100] .runnable_load_avg : 0
[541339.885101] .blocked_load_avg : 374
[541339.885102] .tg_load_contrib : 298
[541339.885104] .tg_runnable_contrib : 398
[541339.885105] .tg_load_avg : 11402
[541339.885106] .tg->runnable_avg : 6463
[541339.885107] .tg->cfs_bandwidth.timer_active: 0
[541339.885109] .throttled : 0
[541339.885110] .throttle_count : 0
[541339.885112] .se->exec_start : 541339885.023076
[541339.885113] .se->vruntime : 158090606.654350
[541339.885115] .se->sum_exec_runtime : 275216.799626
[541339.885116] .se->statistics.wait_start : 0.000000
[541339.885118] .se->statistics.sleep_start : 0.000000
[541339.885119] .se->statistics.block_start : 0.000000
[541339.885120] .se->statistics.sleep_max : 0.000000
[541339.885122] .se->statistics.block_max : 0.000000
[541339.885123] .se->statistics.exec_max : 4.008976
[541339.885124] .se->statistics.slice_max : 13.407067
[541339.885126] .se->statistics.wait_max : 60.096418
[541339.885127] .se->statistics.wait_sum : 276868.842883
[541339.885129] .se->statistics.wait_count : 951472
[541339.885130] .se->load.weight : 2
[541339.885131] .se->avg.runnable_avg_sum : 18213
[541339.885133] .se->avg.runnable_avg_period : 46441
[541339.885134] .se->avg.load_avg_contrib : 26
[541339.885135] .se->avg.decay_count : 516261945
[541339.885137]
[541339.885137] cfs_rq[12]:/
[541339.885140] .exec_clock : 34468665.148551
[541339.885142] .MIN_vruntime : 0.000001
[541339.885143] .min_vruntime : 158090606.654350
[541339.885144] .max_vruntime : 0.000001
[541339.885146] .spread : 0.000000
[541339.885147] .spread0 : -5111975.156048
[541339.885149] .nr_spread_over : 20514
[541339.885150] .nr_running : 0
[541339.885151] .load : 0
[541339.885152] .runnable_load_avg : 0
[541339.885154] .blocked_load_avg : 465
[541339.885155] .tg_load_contrib : 459
[541339.885156] .tg_runnable_contrib : 609
[541339.885158] .tg_load_avg : 7398
[541339.885159] .tg->runnable_avg : 10426
[541339.885161] .tg->cfs_bandwidth.timer_active: 0
[541339.885162] .throttled : 0
[541339.885164] .throttle_count : 0
[541339.885165] .avg->runnable_avg_sum : 27925
[541339.885166] .avg->runnable_avg_period : 46467
[541339.885168]
[541339.885168] rt_rq[12]:
[541339.885169] .rt_nr_running : 0
[541339.885171] .rt_throttled : 0
[541339.885172] .rt_time : 0.000000
[541339.885174] .rt_runtime : 950.000000
[541339.885179]
[541339.885179] runnable tasks:
[541339.885179] task PID tree-key switches prio exec-runtime sum-exec sum-sleep
[541339.885179] ----------------------------------------------------------------------------------------------------------
[541339.885187] rcuos/3 11 158090473.938721 6538769 120 158090473.938721 179042.894098 540998916.391561 0 /
[541339.885222] rcuos/8 16 158090472.390354 4969604 120 158090472.390354 112591.456715 541062793.745063 0 /
[541339.885260] watchdog/12 98 -11.973146 135471 0 -11.973146 3005.683293 0.002218 0 /
[541339.885278] migration/12 99 0.000000 173382 0 0.000000 5944.123764 0.001359 0 /
[541339.885283] ksoftirqd/12 100 158090473.930683 415438 120 158090473.930683 5970.945615 541256373.636714 0 /
[541339.885288] kworker/12:0H 102 17450.901310 8 100 17450.901310 0.047371 70353.476272 0 /
[541339.885294] writeback 122 22.966425 2 100 22.966425 0.006831 0.003140 0 /
[541339.885299] ksmd 123 34.977568 2 125 34.977568 0.008018 0.003056 0 /
[541339.885304] crypto 125 59.065002 2 100 59.065002 0.006089 0.002768 0 /
[541339.885308] kintegrityd 126 71.070879 2 100 71.070879 0.006415 0.002430 0 /
[541339.885313] bioset 127 83.076734 2 100 83.076734 0.006371 0.002417 0 /
[541339.885318] kblockd 128 95.082306 2 100 95.082306 0.006061 0.002493 0 /
[541339.885322] ata_sff 129 107.088255 2 100 107.088255 0.006646 0.003189 0 /
[541339.885327] md 131 131.100421 2 100 131.100421 0.006406 0.002802 0 /
[541339.885332] devfreq_wq 132 143.106161 2 100 143.106161 0.006244 0.003232 0 /
[541339.885337] ecryptfs-kthrea 138 215.191010 2 120 215.191010 0.009942 0.003932 0 /
[541339.885342] kthrotld 150 359.389524 2 100 359.389524 0.012418 0.003577 0 /
[541339.885347] acpi_thermal_pm 151 372.520288 2 100 372.520288 0.012243 0.004969 0 /
[541339.885354] scsi_eh_6 283 571.820738 2 120 571.820738 0.012346 0.006679 0 /
[541339.885358] scsi_tmf_6 284 583.830981 2 100 583.830981 0.011068 0.006537 0 /
[541339.885363] fw_event0 285 595.842479 2 100 595.842479 0.012326 0.005411 0 /
[541339.885368] bnx2x_iov 287 669.190329 2 100 669.190329 0.012306 0.006184 0 /
[541339.885382] raid5wq 292 1302.780670 2 100 1302.780670 0.018099 0.005994 0 /
[541339.885416] scsi_eh_7 334 1439.587046 2 120 1439.587046 0.066858 0.035632 0 /
[541339.885446] scsi_eh_8 337 1463.617813 2 120 1463.617813 0.018176 0.006059 0 /
[541339.885476] fw_event1 340 1487.672091 2 100 1487.672091 0.038008 0.004018 0 /
[541339.885484] scsi_eh_9 342 1511.811657 2 120 1511.811657 0.034072 0.004184 0 /
[541339.885488] scsi_tmf_9 343 1523.841146 2 100 1523.841146 0.030220 0.015019 0 /
[541339.885493] fw_event2 344 1535.872046 2 100 1535.872046 0.031576 0.015518 0 /
[541339.885499] poll_2_status 345 1547.905625 2 100 1547.905625 0.034273 0.014915 0 /
[541339.885503] poll_0_status 348 1584.026578 2 100 1584.026578 0.064662 0.029026 0 /
[541339.885509] poll_1_status 352 1608.134235 2 100 1608.134235 0.104719 0.028295 0 /
[541339.885517] dm_bufio_cache 625 1859.945965 2 100 1859.945965 0.032925 0.005492 0 /
[541339.885524] kdmflush 1458 4129.905963 2 100 4129.905963 0.035608 0.004990 0 /
[541339.885530] kdmflush 1498 4144.977786 2 100 4144.977786 0.007375 0.004226 0 /
[541339.885535] bioset 1541 4157.025294 2 100 4157.025294 0.014181 0.004933 0 /
[541339.885543] kdmflush 1853 4603.435715 2 100 4603.435715 0.010107 0.004850 0 /
[541339.885548] bioset 1854 4615.455397 2 100 4615.455397 0.025103 0.004320 0 /
[541339.885558] ruby-timer-thr 9607 512.634123 168 120 512.634123 5.593679 283051854.421815 0 /autogroup-227
[541339.885564] ruby 2737 5703.955960 255289 120 5703.955960 234060.022830 541055636.792526 0 /autogroup-230
[541339.885574] java 3116 39133.346906 829 120 39133.346906 2175.132325 539899904.419417 0 /autogroup-264
[541339.885585] java 3462 16996.082598 256 120 16996.082598 39.353655 520767211.772675 0 /autogroup-257
[541339.885624] memcached 4564 70670.495849 8302299 120 70670.495849 376587.584432 540295842.083343 0 /autogroup-347
[541339.885658] java 5239 73.035253 2 120 73.035253 0.037522 0.006024 0 /autogroup-358
[541339.885690] collectdmon 5310 0.446519 1 120 0.446519 1.627895 0.000000 0 /autogroup-363
[541339.885722] login 5351 4.870836 25 120 4.870836 7.141184 78.303602 0 /autogroup-368
[541339.885751] kworker/12:1H 5372 157976927.028080 2943 100 157976927.028080 39.316869 541244781.134184 0 /
[541339.885758] java 381 186468.734683 13980 120 186468.734683 2972.416602 123557615.721206 0 /autogroup-8620
[541339.885764] java 386 186468.623747 13878 120 186468.623747 2977.392924 123557598.365032 0 /autogroup-8620
[541339.885769] java 390 186468.644196 14257 120 186468.644196 2945.303128 123557633.903032 0 /autogroup-8620
[541339.885776] java 423 186221.307456 7075 120 186221.307456 688.942789 123545395.869043 0 /autogroup-8620
[541339.885784] mysqld 19158 292800.686610 111547 120 292800.686610 9059.303927 111086579.016989 0 /autogroup-8936
[541339.885790] mysqld 19164 292801.050092 123271 120 292801.050092 5381.950039 111090679.051948 0 /autogroup-8936
[541339.885800] mysqld 19793 292228.626286 12 120 292228.626286 1.936168 1.519647 0 /autogroup-8936
[541339.885808] jfsCommit 23558 114390695.030050 227518 120 114390695.030050 4589.059674 7731187.360338 0 /
[541339.885814] kworker/12:2 7306 158083746.414700 68679 120 158083746.414700 1257.506055 31949541.746397 0 /
[541339.885822] PassengerWatchd 5356 18.061622 10 120 18.061622 7.096283 138.443404 0 /autogroup-11395
[541339.885829] apache2 5389 1985.969825 2 120 1985.969825 18.829299 9.402500 0 /autogroup-356
[541339.885835] apache2 5408 2010.009011 1 120 2010.009011 0.012194 0.000000 0 /autogroup-356
[541339.885839] apache2 5409 2022.033156 1 120 2022.033156 0.024152 0.000000 0 /autogroup-356
[541339.885843] apache2 5411 2034.042777 1 120 2034.042777 0.011264 0.000000 0 /autogroup-356
[541339.885848] apache2 5413 2046.067666 1 120 2046.067666 0.024896 0.000000 0 /autogroup-356
[541339.885854] apache2 5423 2082.118634 1 120 2082.118634 0.012805 0.000000 0 /autogroup-356
[541339.885860] apache2 5427 2106.156303 1 120 2106.156303 0.024021 0.000000 0 /autogroup-356
[541339.885865] apache2 5425 2094.132289 1 120 2094.132289 0.013662 0.000000 0 /autogroup-356
[541339.885870] apache2 5428 2106.141685 1 120 2106.141685 0.009403 0.000000 0 /autogroup-356
[541339.885879] xfs-conv/dm-28 6370 152438085.906348 2 100 152438085.906348 0.023686 0.060636 0 /
[541339.885888] swift-object-se 7825 164690.298991 49617 120 164690.298991 1702.712033 1578310.948282 0 /autogroup-11406
[541339.885894] swift-object-se 8058 164676.828455 50555 120 164676.828455 1732.138794 1578386.820766 0 /autogroup-11406
[541339.885903] swift-object-se 7762 164696.728090 49949 120 164696.728090 1704.122403 1578724.995156 0 /autogroup-11406
[541339.885931] swift-object-se 7769 164696.885563 51155 120 164696.885563 1738.271275 1578685.805438 0 /autogroup-11406
[541339.885961] swift-object-se 7978 164671.372064 46598 120 164671.372064 1705.543020 1577977.645669 0 /autogroup-11406
[541339.885998] swift-object-se 7766 164696.471049 49657 120 164696.471049 1731.807320 1578967.796304 0 /autogroup-11406
[541339.886031] swift-object-se 6616 164695.342563 47506 120 164695.342563 1646.343249 1703807.836291 0 /autogroup-11406
[541339.886063] swift-object-se 6617 164695.356133 46687 120 164695.356133 1611.523045 1703635.794536 0 /autogroup-11406
[541339.886077] swift-object-se 7534 164691.406710 48794 120 164691.406710 1698.471705 1587709.398443 0 /autogroup-11406
[541339.886089] swift-object-se 6612 164686.703940 50795 120 164686.703940 1751.056536 1702508.933081 0 /autogroup-11406
[541339.886096] swift-object-se 7739 164690.228411 48671 120 164690.228411 1685.310928 1578983.795024 0 /autogroup-11406
[541339.886102] swift-object-se 7905 164687.467275 48041 120 164687.467275 1703.244262 1578795.175540 0 /autogroup-11406
[541339.886110] swift-object-se 6674 164696.761669 50727 120 164696.761669 1717.935752 1673321.848389 0 /autogroup-11406
[541339.886116] swift-object-se 7773 164696.812627 48893 120 164696.812627 1695.466186 1578427.240104 0 /autogroup-11406
[541339.886122] swift-object-se 7793 164695.437831 48894 120 164695.437831 1718.533743 1578965.476496 0 /autogroup-11406
[541339.886130] swift-object-se 7917 164685.798967 49077 120 164685.798967 1713.258162 1578508.071400 0 /autogroup-11406
[541339.886137] swift-object-se 8005 164695.384935 50947 120 164695.384935 1749.660854 1577862.304362 0 /autogroup-11406
[541339.886143] swift-object-se 7722 164695.537878 48864 120 164695.537878 1712.720964 1578634.487261 0 /autogroup-11406
[541339.886151] swift-object-se 7864 164695.526739 48005 120 164695.526739 1720.635007 1578605.451219 0 /autogroup-11406
[541339.886162] swift-object-se 7957 164689.051815 49202 120 164689.051815 1713.376470 1578753.730788 0 /autogroup-11406
[541339.886168] swift-object-se 7959 164687.388327 48196 120 164687.388327 1703.630490 1578803.702734 0 /autogroup-11406
[541339.886174] swift-object-se 7274 164695.697632 49436 120 164695.697632 1694.883987 1639450.622972 0 /autogroup-11406
[541339.886184] swift-object-se 8002 164694.207989 48712 120 164694.207989 1696.129612 1578303.600625 0 /autogroup-11406
[541339.886192] swift-object-se 7733 164687.634160 51206 120 164687.634160 1745.104721 1578285.793906 0 /autogroup-11406
[541339.886197] swift-object-se 7749 164687.508033 52606 120 164687.508033 1768.700137 1578335.690245 0 /autogroup-11406
[541339.886207] swift-proxy-ser 6506 130429.645193 153041 120 130429.645193 149307.157708 1373690.482935 0 /autogroup-11408
[541339.886212] swift-proxy-ser 6510 130423.462382 127840 120 130423.462382 117405.080627 1452493.472754 0 /autogroup-11408
[541339.886217] swift-proxy-ser 6514 130430.680645 105417 120 130430.680645 81045.718351 1536397.285832 0 /autogroup-11408
[541339.886226] nginx 6730 28785.638073 218134 120 28785.638073 41559.624630 1572634.382222 0 /autogroup-11415
[541339.886233] java 6790 144.326563 4 120 144.326563 0.128755 0.014677 0 /autogroup-11418
[541339.886238] java 6792 156.406850 2 120 156.406850 0.080294 0.003746 0 /autogroup-11418
[541339.886258] java 6793 168.455821 3 120 168.455821 0.048978 0.008233 0 /autogroup-11418
[541339.886277] java 6810 218.668601 3 120 218.668601 0.173604 0.058342 0 /autogroup-11418
[541339.886298] java 7006 1663.903140 19 120 1663.903140 9.170510 50381.792718 0 /autogroup-11418
[541339.886321] java 7010 1494.283979 6 120 1494.283979 0.324657 0.451197 0 /autogroup-11418
[541339.886331] java 7022 1592.448745 6 120 1592.448745 0.497793 734.656127 0 /autogroup-11418
[541339.886337] java 7333 1619.554074 6 120 1619.554074 3.515074 170.097237 0 /autogroup-11418
[541339.886341] java 7337 2493.558125 9 120 2493.558125 1.175970 1498521.723457 0 /autogroup-11418
[541339.886351] java 7162 283.855966 5 120 283.855966 2.897774 84.930457 0 /autogroup-11424
[541339.886355] java 7208 10649.860202 357 120 10649.860202 33.698766 1634365.590480 0 /autogroup-11424
[541339.886359] java 7243 10863.494523 7182 120 10863.494523 6011.629965 1647690.831316 0 /autogroup-11424
[541339.886363] java 7249 10863.704328 7185 120 10863.704328 6285.171148 1646972.029900 0 /autogroup-11424
[541339.886367] java 7320 10873.484227 7279 120 10873.484227 5862.471436 1644427.380004 0 /autogroup-11424
[541339.886372] java 7570 10863.707495 7568 120 10863.707495 6424.335325 1586179.414326 0 /autogroup-11424
[541339.886377] java 7628 10863.691532 7704 120 10863.691532 6242.458117 1585795.503409 0 /autogroup-11424
[541339.886380] java 7635 10875.653493 194513 120 10875.653493 13775.582717 1570795.665914 0 /autogroup-11424
[541339.886384] magfsd 7112 151611.919697 810944 120 151611.919697 484292.347639 999694.221159 0 /autogroup-11432
[541339.886389] magfsd 19693 151610.888105 6138 120 151610.888105 4695.641173 66902.509009 0 /autogroup-11432
[541339.886395] magfsd 19898 151599.964761 940 120 151599.964761 597.577602 12434.439257 0 /autogroup-11432
[541339.886398] magfsd 19900 151600.938187 886 120 151600.938187 616.548252 11667.566508 0 /autogroup-11432
[541339.886402] magfsd 19926 151397.617983 259 120 151397.617983 164.733676 2380.574621 0 /autogroup-11432
[541339.886415] magfsd 19927 151599.793382 188 120 151599.793382 183.283678 2023.148678 0 /autogroup-11432
[541339.886436] magfsd 19929 151600.454184 117 120 151600.454184 8.152218 1641.604774 0 /autogroup-11432
[541339.886457] kworker/12:0 8224 158090491.339002 32853 120 158090491.339002 618.095319 1557070.411912 0 /
[541339.886462] kworker/12:1 11275 158090594.534186 17775 120 158090594.534186 339.905074 1141717.360523 0 /
[541339.886467] kworker/12:3 16322 157489007.279574 4232 120 157489007.279574 77.568461 375125.049810 0 /
[541339.886471] kworker/u32:7 16688 157477787.955234 830 120 157477787.955234 82.909878 287537.088468 0 /
[541339.886478]
[541339.886479] cpu#13, 2199.987 MHz
[541339.886481] .nr_running : 4
[541339.886482] .load : 862
[541339.886483] .nr_switches : 272085792
[541339.886484] .nr_load_updates : 22134583
[541339.886485] .nr_uninterruptible : 316535
[541339.886486] .next_balance : 4430.360697
[541339.886487] .curr->pid : 1490
[541339.886489] .clock : 541339884.147948
[541339.886491] .cpu_load[0] : 271
[541339.886492] .cpu_load[1] : 275
[541339.886493] .cpu_load[2] : 231
[541339.886494] .cpu_load[3] : 161
[541339.886495] .cpu_load[4] : 104
[541339.886496] .yld_count : 3381242
[541339.886497] .sched_count : 275505095
[541339.886497] .sched_goidle : 102179770
[541339.886498] .avg_idle : 477803
[541339.886500] .max_idle_balance_cost : 500000
[541339.886501] .ttwu_count : 132357663
[541339.886503] .ttwu_local : 22277283
[541339.886507]
[541339.886507] cfs_rq[13]:/autogroup-2
[541339.886513] .exec_clock : 156.366412
[541339.886518] .MIN_vruntime : 0.000001
[541339.886523] .min_vruntime : 880.450813
[541339.886529] .max_vruntime : 0.000001
[541339.886535] .spread : 0.000000
[541339.886540] .spread0 : -163201701.359585
[541339.886545] .nr_spread_over : 72
[541339.886550] .nr_running : 0
[541339.886554] .load : 0
[541339.886560] .runnable_load_avg : 0
[541339.886566] .blocked_load_avg : 0
[541339.886573] .tg_load_contrib : 0
[541339.886579] .tg_runnable_contrib : 3
[541339.886584] .tg_load_avg : 0
[541339.886590] .tg->runnable_avg : 3
[541339.886596] .tg->cfs_bandwidth.timer_active: 0
[541339.886601] .throttled : 0
[541339.886602] .throttle_count : 0
[541339.886604] .se->exec_start : 541339874.176094
[541339.886605] .se->vruntime : 158050800.974720
[541339.886606] .se->sum_exec_runtime : 156.439093
[541339.886607] .se->statistics.wait_start : 0.000000
[541339.886608] .se->statistics.sleep_start : 0.000000
[541339.886610] .se->statistics.block_start : 0.000000
[541339.886611] .se->statistics.sleep_max : 0.000000
[541339.886613] .se->statistics.block_max : 0.000000
[541339.886614] .se->statistics.exec_max : 13.367440
[541339.886615] .se->statistics.slice_max : 0.748359
[541339.886617] .se->statistics.wait_max : 2.136796
[541339.886618] .se->statistics.wait_sum : 5.401447
[541339.886619] .se->statistics.wait_count : 673
[541339.886621] .se->load.weight : 2
[541339.886622] .se->avg.runnable_avg_sum : 186
[541339.886623] .se->avg.runnable_avg_period : 48116
[541339.886624] .se->avg.load_avg_contrib : 0
[541339.886626] .se->avg.decay_count : 516261935
[541339.886628]
[541339.886628] cfs_rq[13]:/autogroup-11436
[541339.886630] .exec_clock : 275.524764
[541339.886631] .MIN_vruntime : 0.000001
[541339.886633] .min_vruntime : 2109.726786
[541339.886635] .max_vruntime : 0.000001
[541339.886636] .spread : 0.000000
[541339.886638] .spread0 : -163200472.083612
[541339.886639] .nr_spread_over : 193
[541339.886641] .nr_running : 0
[541339.886642] .load : 0
[541339.886643] .runnable_load_avg : 0
[541339.886644] .blocked_load_avg : 43
[541339.886646] .tg_load_contrib : 43
[541339.886647] .tg_runnable_contrib : 57
[541339.886649] .tg_load_avg : 4132
[541339.886650] .tg->runnable_avg : 363
[541339.886651] .tg->cfs_bandwidth.timer_active: 0
[541339.886652] .throttled : 0
[541339.886654] .throttle_count : 0
[541339.886655] .se->exec_start : 541339877.026638
[541339.886657] .se->vruntime : 158050805.535535
[541339.886658] .se->sum_exec_runtime : 275.531215
[541339.886660] .se->statistics.wait_start : 0.000000
[541339.886661] .se->statistics.sleep_start : 0.000000
[541339.886663] .se->statistics.block_start : 0.000000
[541339.886664] .se->statistics.sleep_max : 0.000000
[541339.886665] .se->statistics.block_max : 0.000000
[541339.886667] .se->statistics.exec_max : 2.298414
[541339.886668] .se->statistics.slice_max : 1.693837
[541339.886670] .se->statistics.wait_max : 6.641895
[541339.886671] .se->statistics.wait_sum : 100.946168
[541339.886673] .se->statistics.wait_count : 776
[541339.886674] .se->load.weight : 2
[541339.886675] .se->avg.runnable_avg_sum : 2683
[541339.886676] .se->avg.runnable_avg_period : 47990
[541339.886678] .se->avg.load_avg_contrib : 2
[541339.886679] .se->avg.decay_count : 516261938
[541339.886681]
[541339.886681] cfs_rq[13]:/autogroup-148
[541339.886683] .exec_clock : 921.648947
[541339.886684] .MIN_vruntime : 0.000001
[541339.886686] .min_vruntime : 836.435360
[541339.886687] .max_vruntime : 0.000001
[541339.886689] .spread : 0.000000
[541339.886690] .spread0 : -163201745.375038
[541339.886692] .nr_spread_over : 0
[541339.886693] .nr_running : 1
[541339.886694] .load : 1024
[541339.886696] .runnable_load_avg : 421
[541339.886700] .blocked_load_avg : 0
[541339.886705] .tg_load_contrib : 421
[541339.886710] .tg_runnable_contrib : 425
[541339.886715] .tg_load_avg : 1048
[541339.886720] .tg->runnable_avg : 608
[541339.886725] .tg->cfs_bandwidth.timer_active: 0
[541339.886729] .throttled : 0
[541339.886734] .throttle_count : 0
[541339.886739] .se->exec_start : 541339884.147948
[541339.886744] .se->vruntime : 158050823.176241
[541339.886750] .se->sum_exec_runtime : 922.278788
[541339.886755] .se->statistics.wait_start : 0.000000
[541339.886759] .se->statistics.sleep_start : 0.000000
[541339.886764] .se->statistics.block_start : 0.000000
[541339.886769] .se->statistics.sleep_max : 0.000000
[541339.886774] .se->statistics.block_max : 0.000000
[541339.886778] .se->statistics.exec_max : 2.827033
[541339.886784] .se->statistics.slice_max : 2.933941
[541339.886789] .se->statistics.wait_max : 13.384981
[541339.886794] .se->statistics.wait_sum : 309.994317
[541339.886798] .se->statistics.wait_count : 15227
[541339.886803] .se->load.weight : 637
[541339.886808] .se->avg.runnable_avg_sum : 19732
[541339.886812] .se->avg.runnable_avg_period : 47507
[541339.886817] .se->avg.load_avg_contrib : 233
[541339.886822] .se->avg.decay_count : 516261925
[541339.886827]
[541339.886827] cfs_rq[13]:/autogroup-11329
[541339.886829] .exec_clock : 3752.179032
[541339.886831] .MIN_vruntime : 0.000001
[541339.886833] .min_vruntime : 3233.467575
[541339.886834] .max_vruntime : 0.000001
[541339.886835] .spread : 0.000000
[541339.886837] .spread0 : -163199348.342823
[541339.886838] .nr_spread_over : 15
[541339.886839] .nr_running : 0
[541339.886841] .load : 0
[541339.886842] .runnable_load_avg : 0
[541339.886843] .blocked_load_avg : 9
[541339.886845] .tg_load_contrib : 9
[541339.886846] .tg_runnable_contrib : 9
[541339.886848] .tg_load_avg : 395
[541339.886849] .tg->runnable_avg : 410
[541339.886850] .tg->cfs_bandwidth.timer_active: 0
[541339.886852] .throttled : 0
[541339.886853] .throttle_count : 0
[541339.886855] .se->exec_start : 541339877.043956
[541339.886857] .se->vruntime : 158050801.747864
[541339.886858] .se->sum_exec_runtime : 3752.920322
[541339.886860] .se->statistics.wait_start : 0.000000
[541339.886861] .se->statistics.sleep_start : 0.000000
[541339.886863] .se->statistics.block_start : 0.000000
[541339.886864] .se->statistics.sleep_max : 0.000000
[541339.886865] .se->statistics.block_max : 0.000000
[541339.886867] .se->statistics.exec_max : 3.996885
[541339.886868] .se->statistics.slice_max : 0.394022
[541339.886870] .se->statistics.wait_max : 12.929524
[541339.886871] .se->statistics.wait_sum : 1768.387333
[541339.886873] .se->statistics.wait_count : 54724
[541339.886874] .se->load.weight : 2
[541339.886876] .se->avg.runnable_avg_sum : 538
[541339.886877] .se->avg.runnable_avg_period : 46401
[541339.886879] .se->avg.load_avg_contrib : 9
[541339.886880] .se->avg.decay_count : 516261938
[541339.886882]
[541339.886882] cfs_rq[13]:/autogroup-11432
[541339.886884] .exec_clock : 173568.623385
[541339.886886] .MIN_vruntime : 0.000001
[541339.886887] .min_vruntime : 148533.219476
[541339.886889] .max_vruntime : 0.000001
[541339.886890] .spread : 0.000000
[541339.886892] .spread0 : -163054048.590922
[541339.886893] .nr_spread_over : 733
[541339.886894] .nr_running : 0
[541339.886896] .load : 0
[541339.886897] .runnable_load_avg : 0
[541339.886898] .blocked_load_avg : 0
[541339.886900] .tg_load_contrib : 0
[541339.886902] .tg_runnable_contrib : 10
[541339.886903] .tg_load_avg : 2273
[541339.886905] .tg->runnable_avg : 1884
[541339.886906] .tg->cfs_bandwidth.timer_active: 0
[541339.886908] .throttled : 0
[541339.886909] .throttle_count : 0
[541339.886911] .se->exec_start : 541339782.753695
[541339.886912] .se->vruntime : 158050752.033128
[541339.886918] .se->sum_exec_runtime : 173582.617287
[541339.886922] .se->statistics.wait_start : 0.000000
[541339.886926] .se->statistics.sleep_start : 0.000000
[541339.886931] .se->statistics.block_start : 0.000000
[541339.886935] .se->statistics.sleep_max : 0.000000
[541339.886941] .se->statistics.block_max : 0.000000
[541339.886946] .se->statistics.exec_max : 3.998782
[541339.886951] .se->statistics.slice_max : 107.250965
[541339.886957] .se->statistics.wait_max : 51.571550
[541339.886964] .se->statistics.wait_sum : 80645.761599
[541339.886971] .se->statistics.wait_count : 203508
[541339.886977] .se->load.weight : 2
[541339.886982] .se->avg.runnable_avg_sum : 506
[541339.886989] .se->avg.runnable_avg_period : 47615
[541339.886994] .se->avg.load_avg_contrib : 0
[541339.887001] .se->avg.decay_count : 516261848
[541339.887006]
[541339.887006] cfs_rq[13]:/autogroup-11415
[541339.887014] .exec_clock : 43846.242988
[541339.887020] .MIN_vruntime : 0.000001
[541339.887025] .min_vruntime : 28742.180270
[541339.887032] .max_vruntime : 0.000001
[541339.887039] .spread : 0.000000
[541339.887046] .spread0 : -163173839.630128
[541339.887052] .nr_spread_over : 0
[541339.887058] .nr_running : 0
[541339.887064] .load : 0
[541339.887067] .runnable_load_avg : 0
[541339.887072] .blocked_load_avg : 0
[541339.887079] .tg_load_contrib : 0
[541339.887081] .tg_runnable_contrib : 8
[541339.887082] .tg_load_avg : 1390
[541339.887083] .tg->runnable_avg : 1077
[541339.887085] .tg->cfs_bandwidth.timer_active: 0
[541339.887086] .throttled : 0
[541339.887087] .throttle_count : 0
[541339.887089] .se->exec_start : 541339700.416689
[541339.887090] .se->vruntime : 158050537.222254
[541339.887092] .se->sum_exec_runtime : 43852.544298
[541339.887093] .se->statistics.wait_start : 0.000000
[541339.887095] .se->statistics.sleep_start : 0.000000
[541339.887096] .se->statistics.block_start : 0.000000
[541339.887098] .se->statistics.sleep_max : 0.000000
[541339.887099] .se->statistics.block_max : 0.000000
[541339.887100] .se->statistics.exec_max : 3.997203
[541339.887102] .se->statistics.slice_max : 14.128217
[541339.887103] .se->statistics.wait_max : 18.946442
[541339.887105] .se->statistics.wait_sum : 50258.400510
[541339.887106] .se->statistics.wait_count : 160337
[541339.887108] .se->load.weight : 2
[541339.887109] .se->avg.runnable_avg_sum : 382
[541339.887111] .se->avg.runnable_avg_period : 47007
[541339.887112] .se->avg.load_avg_contrib : 0
[541339.887113] .se->avg.decay_count : 516261769
[541339.887115]
[541339.887115] cfs_rq[13]:/autogroup-11408
[541339.887117] .exec_clock : 235745.929204
[541339.887119] .MIN_vruntime : 0.000001
[541339.887121] .min_vruntime : 129878.524634
[541339.887122] .max_vruntime : 0.000001
[541339.887124] .spread : 0.000000
[541339.887125] .spread0 : -163072703.285764
[541339.887126] .nr_spread_over : 0
[541339.887128] .nr_running : 0
[541339.887129] .load : 0
[541339.887130] .runnable_load_avg : 0
[541339.887131] .blocked_load_avg : 86
[541339.887133] .tg_load_contrib : 180
[541339.887134] .tg_runnable_contrib : 138
[541339.887135] .tg_load_avg : 6080
[541339.887137] .tg->runnable_avg : 4655
[541339.887138] .tg->cfs_bandwidth.timer_active: 0
[541339.887139] .throttled : 0
[541339.887141] .throttle_count : 0
[541339.887142] .se->exec_start : 541339856.086871
[541339.887144] .se->vruntime : 158050798.760432
[541339.887145] .se->sum_exec_runtime : 235762.733183
[541339.887147] .se->statistics.wait_start : 0.000000
[541339.887148] .se->statistics.sleep_start : 0.000000
[541339.887149] .se->statistics.block_start : 0.000000
[541339.887151] .se->statistics.sleep_max : 0.000000
[541339.887152] .se->statistics.block_max : 0.000000
[541339.887154] .se->statistics.exec_max : 3.998417
[541339.887155] .se->statistics.slice_max : 21.246268
[541339.887156] .se->statistics.wait_max : 71.963993
[541339.887158] .se->statistics.wait_sum : 151628.098522
[541339.887159] .se->statistics.wait_count : 199414
[541339.887160] .se->load.weight : 2
[541339.887162] .se->avg.runnable_avg_sum : 6351
[541339.887163] .se->avg.runnable_avg_period : 46841
[541339.887164] .se->avg.load_avg_contrib : 35
[541339.887166] .se->avg.decay_count : 516261933
[541339.887168]
[541339.887168] cfs_rq[13]:/autogroup-11406
[541339.887170] .exec_clock : 275749.365146
[541339.887171] .MIN_vruntime : 164642.026750
[541339.887173] .min_vruntime : 164653.939427
[541339.887174] .max_vruntime : 164642.026750
[541339.887176] .spread : 0.000000
[541339.887183] .spread0 : -163037927.870971
[541339.887189] .nr_spread_over : 4
[541339.887195] .nr_running : 2
[541339.887201] .load : 2048
[541339.887206] .runnable_load_avg : 211
[541339.887212] .blocked_load_avg : 378
[541339.887216] .tg_load_contrib : 524
[541339.887224] .tg_runnable_contrib : 557
[541339.887230] .tg_load_avg : 12378
[541339.887237] .tg->runnable_avg : 6721
[541339.887242] .tg->cfs_bandwidth.timer_active: 0
[541339.887248] .throttled : 0
[541339.887255] .throttle_count : 0
[541339.887262] .se->exec_start : 541339887.156076
[541339.887263] .se->vruntime : 158050815.615673
[541339.887264] .se->sum_exec_runtime : 275780.855985
[541339.887265] .se->statistics.wait_start : 0.000000
[541339.887266] .se->statistics.sleep_start : 0.000000
[541339.887267] .se->statistics.block_start : 0.000000
[541339.887268] .se->statistics.sleep_max : 0.000000
[541339.887269] .se->statistics.block_max : 0.000000
[541339.887270] .se->statistics.exec_max : 3.998459
[541339.887271] .se->statistics.slice_max : 13.498072
[541339.887272] .se->statistics.wait_max : 81.832207
[541339.887273] .se->statistics.wait_sum : 277914.809957
[541339.887274] .se->statistics.wait_count : 958086
[541339.887275] .se->load.weight : 152
[541339.887276] .se->avg.runnable_avg_sum : 25521
[541339.887276] .se->avg.runnable_avg_period : 46817
[541339.887277] .se->avg.load_avg_contrib : 43
[541339.887278] .se->avg.decay_count : 516261944
[541339.887279]
[541339.887279] cfs_rq[13]:/
[541339.887281] .exec_clock : 34584368.315586
[541339.887282] .MIN_vruntime : 158050827.899253
[541339.887283] .min_vruntime : 158050822.895007
[541339.887284] .max_vruntime : 158050827.899253
[541339.887285] .spread : 0.000000
[541339.887286] .spread0 : -5151758.915391
[541339.887287] .nr_spread_over : 20419
[541339.887288] .nr_running : 2
[541339.887288] .load : 789
[541339.887289] .runnable_load_avg : 240
[541339.887291] .blocked_load_avg : 728
[541339.887292] .tg_load_contrib : 1045
[541339.887294] .tg_runnable_contrib : 652
[541339.887295] .tg_load_avg : 8059
[541339.887296] .tg->runnable_avg : 10641
[541339.887297] .tg->cfs_bandwidth.timer_active: 0
[541339.887298] .throttled : 0
[541339.887299] .throttle_count : 0
[541339.887300] .avg->runnable_avg_sum : 29634
[541339.887301] .avg->runnable_avg_period : 46488
[541339.887302]
[541339.887302] rt_rq[13]:
[541339.887303] .rt_nr_running : 0
[541339.887304] .rt_throttled : 0
[541339.887305] .rt_time : 0.000000
[541339.887306] .rt_runtime : 950.000000
[541339.887307]
[541339.887307] runnable tasks:
[541339.887307] task PID tree-key switches prio exec-runtime sum-exec sum-sleep
[541339.887307] ----------------------------------------------------------------------------------------------------------
[541339.887310] init 1 880.450813 67558 120 880.450813 16330.121542 541322791.543850 0 /autogroup-2
[541339.887332] rcu_sched 7 158050806.681463 49012485 120 158050806.681463 935725.203061 538128904.338083 0 /
[541339.887358] watchdog/13 103 -11.972801 135471 0 -11.972801 2824.114483 0.002132 0 /
[541339.887369] migration/13 104 0.000000 182961 0 0.000000 3554.076298 0.001337 0 /
[541339.887372] ksoftirqd/13 105 158050747.693605 417508 120 158050747.693605 5563.136520 541255831.558117 0 /
[541339.887376] kworker/13:0H 107 4411.517571 8 100 4411.517571 0.049532 48358.053961 0 /
[541339.887380] fsnotify_mark 137 135757775.482937 106 120 135757775.482937 2.100515 496215969.123325 0 /
[541339.887384] deferwq 178 96.585084 2 100 96.585084 0.038848 0.008283 0 /
[541339.887388] charger_manager 179 108.625132 2 100 108.625132 0.041294 0.007224 0 /
[541339.887392] bnx2x 286 469.527446 2 100 469.527446 0.016381 0.006636 0 /
[541339.887395] scsi_tmf_7 335 510.795555 2 100 510.795555 0.015818 0.006545 0 /
[541339.887400] scsi_tmf_8 339 522.827327 2 100 522.827327 0.032476 0.004714 0 /
[541339.887406] md127_raid1 378 158050781.947081 1621509 120 158050781.947081 57317.214069 541253160.051937 0 /
[541339.887411] md126_raid1 383 158045981.880284 1059650 120 158045981.880284 53204.271622 541269326.996029 0 /
[541339.887419] bioset 1417 1926.109089 2 100 1926.109089 0.005701 0.004861 0 /
[541339.887424] rs:main Q:Reg 1490 839.552443 334583 120 839.552443 25572.210566 541297864.996261 0 /autogroup-148
[541339.887427] bioset 1499 2004.272570 2 100 2004.272570 0.008333 0.004607 0 /
[541339.887433] bioset 1732 2320.877589 2 100 2320.877589 0.007803 0.020683 0 /
[541339.887463] kdmflush 1752 2341.955062 2 100 2341.955062 0.007754 0.004271 0 /
[541339.887492] bioset 1753 2353.962342 2 100 2353.962342 0.008126 0.003526 0 /
[541339.887522] kdmflush 1775 2369.331878 2 100 2369.331878 0.007825 0.003379 0 /
[541339.887547] bioset 1776 2381.336665 2 100 2381.336665 0.005303 0.003039 0 /
[541339.887577] kworker/13:1H 2907 157877922.030796 7513 100 157877922.030796 128.814465 541258459.149967 0 /
[541339.887583] ruby-timer-thr 3022 78968.893835 79741 120 78968.893835 2339.813919 541272184.252438 0 /autogroup-249
[541339.887590] java 3112 39008.387933 1303 120 39008.387933 2100.383853 539899928.626414 0 /autogroup-264
[541339.887598] sshd 3087 63.763936 212 120 63.763936 49.417658 539738675.986319 0 /autogroup-296
[541339.887607] java 4342 341.953369 4 120 341.953369 0.209039 610.573489 0 /autogroup-264
[541339.887611] java 4450 366.372240 5 120 366.372240 0.413643 5.459450 0 /autogroup-264
[541339.887615] java 4453 378.345507 5 120 378.345507 0.375902 5.195258 0 /autogroup-264
[541339.887622] java 5229 89.947926 37 120 89.947926 35.767593 7861.368593 0 /autogroup-358
[541339.887631] collectd 5317 7651.615288 72066 120 7651.615288 18198.631814 541247215.503963 0 /autogroup-363
[541339.887639] java 403 175415.370967 821 120 175415.370967 18.207808 121925645.265742 0 /autogroup-8620
[541339.887644] java 416 177734.338121 887 120 177734.338121 26.105095 122280675.761070 0 /autogroup-8620
[541339.887650] java 427 185327.846140 7052 120 185327.846140 701.765729 123545341.714392 0 /autogroup-8620
[541339.887656] mysqld 19148 310746.031888 222419 120 310746.031888 5876.975939 111089466.066290 0 /autogroup-8936
[541339.887662] mysqld 19153 310747.624648 1039827 120 310747.624648 32406.411881 111061059.980462 0 /autogroup-8936
[541339.887668] mysqld 7148 310197.779761 44801 120 310197.779761 17249.138532 1596236.157013 0 /autogroup-8936
[541339.887673] mysqld 7153 310197.651690 33930 120 310197.651690 12408.029039 1602594.181034 0 /autogroup-8936
[541339.887683] mysqld 19887 310708.717629 12 120 310708.717629 2.504200 2.600210 0 /autogroup-8936
[541339.887689] atop 26080 0.001100 1812 100 0.001100 9668.239467 25768951.964630 0 /autogroup-10824
[541339.887697] su 29176 75.534034 8 120 75.534034 8.141243 1.809229 0 /autogroup-11329
[541339.887717] PassengerWatchd 5358 11.113685 2 120 11.113685 0.162270 0.014615 0 /autogroup-11395
[541339.887746] PassengerWatchd 5386 48.659120 3 120 48.659120 0.167316 0.029000 0 /autogroup-11395
[541339.887770] apache2 5400 1809.207552 2154 120 1809.207552 73.662129 2152932.689383 0 /autogroup-356
[541339.887791] apache2 5390 1740.809806 2 120 1740.809806 18.561956 9.432801 0 /autogroup-356
[541339.887814] apache2 5426 1776.863350 1 120 1776.863350 0.022590 0.000000 0 /autogroup-356
[541339.887826] apache2 5429 1788.875671 1 120 1788.875671 0.012328 0.000000 0 /autogroup-356
[541339.887832] apache2 5418 1752.827403 1 120 1752.827403 0.017604 0.000000 0 /autogroup-356
[541339.887837] apache2 5421 1764.840767 1 120 1764.840767 0.013371 0.000000 0 /autogroup-356
[541339.887843] apache2 5430 1800.887440 1 120 1800.887440 0.013539 0.000000 0 /autogroup-356
[541339.887853] swift-object-se 7253 164642.089969 49073 120 164642.089969 1685.340715 1640373.520114 0 /autogroup-11406
[541339.887864] swift-object-se 7799 164638.932738 48075 120 164638.932738 1672.806236 1579355.596655 0 /autogroup-11406
[541339.887870] swift-object-se 7875 164642.031983 48741 120 164642.031983 1722.557400 1578933.664889 0 /autogroup-11406
[541339.887879] swift-object-se 7270 164628.559978 51893 120 164628.559978 1745.185920 1639571.899584 0 /autogroup-11406
[541339.887885] swift-object-se 7297 164627.953045 51580 120 164627.953045 1739.250460 1638830.806908 0 /autogroup-11406
[541339.887890] swift-object-se 7304 164637.802133 49938 120 164637.802133 1738.701787 1638990.756517 0 /autogroup-11406
[541339.887897] swift-object-se 6608 164641.955193 48027 120 164641.955193 1672.068017 1703594.110677 0 /autogroup-11406
[541339.887902] swift-object-se 6619 164591.454815 48051 120 164591.454815 1678.241063 1703175.844663 0 /autogroup-11406
[541339.887907] swift-object-se 7532 164638.960529 48577 120 164638.960529 1707.776838 1587876.438687 0 /autogroup-11406
[541339.887917] swift-object-se 8067 164628.675848 48307 120 164628.675848 1716.190136 1578355.607171 0 /autogroup-11406
[541339.887926] swift-object-se 7743 164638.380994 50608 120 164638.380994 1731.096392 1578262.805947 0 /autogroup-11406
[541339.887931] swift-object-se 7848 164642.238143 48989 120 164642.238143 1711.141791 1578398.164904 0 /autogroup-11406
[541339.887939] swift-object-se 6675 164638.688822 49350 120 164638.688822 1713.955455 1673160.425082 0 /autogroup-11406
[541339.887968] swift-object-se 6678 164639.737185 49599 120 164639.737185 1721.366546 1672969.935074 0 /autogroup-11406
[541339.887997] swift-object-se 7783 164558.555548 50217 120 164558.555548 1729.966411 1578482.640660 0 /autogroup-11406
[541339.888024] swift-object-se 7847 164603.116346 48705 120 164603.116346 1694.249256 1578716.964060 0 /autogroup-11406
[541339.888052] swift-object-se 6544 164631.797144 50721 120 164631.797144 1731.400260 1732086.567404 0 /autogroup-11406
[541339.888081] swift-object-se 7684 164631.809221 51209 120 164631.809221 1758.380415 1578502.544476 0 /autogroup-11406
[541339.888090] swift-object-se 7968 164635.577924 48583 120 164635.577924 1691.631351 1578583.637186 0 /autogroup-11406
[541339.888097] swift-object-se 7881 164635.548967 50705 120 164635.548967 1742.588208 1578259.465053 0 /autogroup-11406
[541339.888103] swift-object-se 8033 164634.049869 49407 120 164634.049869 1723.620830 1578335.090585 0 /autogroup-11406
[541339.888108] swift-object-se 8037 164616.318265 49304 120 164616.318265 1727.847626 1578448.297535 0 /autogroup-11406
[541339.888114] swift-object-se 7806 164603.358269 50381 120 164603.358269 1719.276838 1578796.493241 0 /autogroup-11406
[541339.888119] swift-object-se 7808 164619.861602 50198 120 164619.861602 1694.334432 1578299.258398 0 /autogroup-11406
[541339.888124] swift-object-se 7812 164642.188856 49642 120 164642.188856 1697.616218 1579230.380201 0 /autogroup-11406
[541339.888132] swift-object-se 7752 164635.522389 49725 120 164635.522389 1721.347104 1579291.484471 0 /autogroup-11406
[541339.888142] swift-proxy-ser 6530 129842.877552 124103 120 129842.877552 113803.958021 1466598.392131 0 /autogroup-11408
[541339.888150] java 6779 498.354872 54 120 498.354872 13.528336 822344.522556 0 /autogroup-11418
[541339.888156] java 6791 11.876384 4 120 11.876384 0.068477 0.014238 0 /autogroup-11418
[541339.888161] java 6811 128.011173 4 120 128.011173 0.374127 15864.338586 0 /autogroup-11418
[541339.888166] java 6814 60.479277 2 120 60.479277 0.108625 0.007463 0 /autogroup-11418
[541339.888171] java 6949 72.655893 2 120 72.655893 0.176623 0.012986 0 /autogroup-11418
[541339.888184] java 6963 48.145996 3 120 48.145996 0.097168 0.011487 0 /autogroup-11424
[541339.888188] java 6964 830.681214 4 120 830.681214 0.457294 14072.801070 0 /autogroup-11424
[541339.888193] java 6971 141.669059 2 120 141.669059 0.182268 0.018958 0 /autogroup-11424
[541339.888199] java 7056 654.313946 5 120 654.313946 1.177341 178.975495 0 /autogroup-11424
[541339.888206] java 7164 735.836027 16 120 735.836027 5.123227 2.743611 0 /autogroup-11424
[541339.888211] java 7165 742.988032 2 120 742.988032 0.298299 0.071713 0 /autogroup-11424
[541339.888216] java 7214 803.777489 4 120 803.777489 0.244409 0.112051 0 /autogroup-11424
[541339.888226] magfsd 7309 11.162323 1 120 11.162323 0.210906 0.000000 0 /autogroup-11432
[541339.888234] magfsd 19930 148522.482477 170 120 148522.482477 80.644845 1329.569926 0 /autogroup-11432
[541339.888266] kworker/13:0 8092 158050771.434108 36597 120 158050771.434108 679.349942 1590282.715431 0 /
[541339.888298] kworker/13:1 13631 158050815.922281 21425 120 158050815.922281 438.570012 890779.801350 0 /
[541339.888309] kworker/13:4 13768 158049760.359254 9447 120 158049760.359254 179.486057 853558.973169 0 /
[541339.888318] fio 19195 3230.636353 18468 120 3230.636353 571.648180 184888.088482 0 /autogroup-11329
[541339.888324] kworker/13:2 19264 157894687.153300 3978 120 157894687.153300 82.924903 144702.630913 0 /
[541339.888331] kworker/13:3 19843 157894699.207309 25 120 157894699.207309 0.075479 39.177328 0 /
[541339.888345]
[541339.888348] cpu#14, 2199.987 MHz
[541339.888350] .nr_running : 1
[541339.888352] .load : 78
[541339.888353] .nr_switches : 273821782
[541339.888355] .nr_load_updates : 22461203
[541339.888356] .nr_uninterruptible : 313543
[541339.888358] .next_balance : 4430.360681
[541339.888359] .curr->pid : 6484
[541339.888361] .clock : 541339887.926299
[541339.888362] .cpu_load[0] : 64
[541339.888364] .cpu_load[1] : 109
[541339.888365] .cpu_load[2] : 173
[541339.888367] .cpu_load[3] : 191
[541339.888368] .cpu_load[4] : 178
[541339.888369] .yld_count : 2433816
[541339.888371] .sched_count : 276407813
[541339.888372] .sched_goidle : 102934431
[541339.888374] .avg_idle : 82744
[541339.888375] .max_idle_balance_cost : 500000
[541339.888376] .ttwu_count : 133107113
[541339.888378] .ttwu_local : 22378892
[541339.888380]
[541339.888380] cfs_rq[14]:/autogroup-11436
[541339.888382] .exec_clock : 224.496504
[541339.888384] .MIN_vruntime : 0.000001
[541339.888385] .min_vruntime : 1885.932882
[541339.888387] .max_vruntime : 0.000001
[541339.888388] .spread : 0.000000
[541339.888390] .spread0 : -163200695.877516
[541339.888396] .nr_spread_over : 174
[541339.888401] .nr_running : 0
[541339.888408] .load : 0
[541339.888415] .runnable_load_avg : 0
[541339.888422] .blocked_load_avg : 1023
[541339.888428] .tg_load_contrib : 1023
[541339.888434] .tg_runnable_contrib : 71
[541339.888441] .tg_load_avg : 5017
[541339.888447] .tg->runnable_avg : 383
[541339.888453] .tg->cfs_bandwidth.timer_active: 0
[541339.888459] .throttled : 0
[541339.888466] .throttle_count : 0
[541339.888473] .se->exec_start : 541339880.679701
[541339.888480] .se->vruntime : 158283539.014817
[541339.888486] .se->sum_exec_runtime : 224.496504
[541339.888490] .se->statistics.wait_start : 0.000000
[541339.888494] .se->statistics.sleep_start : 0.000000
[541339.888495] .se->statistics.block_start : 0.000000
[541339.888497] .se->statistics.sleep_max : 0.000000
[541339.888498] .se->statistics.block_max : 0.000000
[541339.888499] .se->statistics.exec_max : 2.486331
[541339.888500] .se->statistics.slice_max : 0.886776
[541339.888501] .se->statistics.wait_max : 4.646179
[541339.888502] .se->statistics.wait_sum : 65.401654
[541339.888504] .se->statistics.wait_count : 663
[541339.888505] .se->load.weight : 2
[541339.888507] .se->avg.runnable_avg_sum : 3305
[541339.888508] .se->avg.runnable_avg_period : 47096
[541339.888510] .se->avg.load_avg_contrib : 70
[541339.888511] .se->avg.decay_count : 516261941
[541339.888513]
[541339.888513] cfs_rq[14]:/autogroup-11424
[541339.888516] .exec_clock : 25256.910290
[541339.888517] .MIN_vruntime : 0.000001
[541339.888519] .min_vruntime : 11596.673820
[541339.888521] .max_vruntime : 0.000001
[541339.888522] .spread : 0.000000
[541339.888524] .spread0 : -163190985.136578
[541339.888526] .nr_spread_over : 11
[541339.888527] .nr_running : 0
[541339.888528] .load : 0
[541339.888529] .runnable_load_avg : 0
[541339.888530] .blocked_load_avg : 0
[541339.888531] .tg_load_contrib : 0
[541339.888532] .tg_runnable_contrib : 0
[541339.888533] .tg_load_avg : 98
[541339.888534] .tg->runnable_avg : 135
[541339.888535] .tg->cfs_bandwidth.timer_active: 0
[541339.888536] .throttled : 0
[541339.888537] .throttle_count : 0
[541339.888538] .se->exec_start : 541339592.529145
[541339.888539] .se->vruntime : 158282591.431472
[541339.888540] .se->sum_exec_runtime : 25259.593111
[541339.888541] .se->statistics.wait_start : 0.000000
[541339.888542] .se->statistics.sleep_start : 0.000000
[541339.888543] .se->statistics.block_start : 0.000000
[541339.888544] .se->statistics.sleep_max : 0.000000
[541339.888545] .se->statistics.block_max : 0.000000
[541339.888546] .se->statistics.exec_max : 3.999217
[541339.888547] .se->statistics.slice_max : 10.652295
[541339.888548] .se->statistics.wait_max : 31.073303
[541339.888549] .se->statistics.wait_sum : 6992.719173
[541339.888550] .se->statistics.wait_count : 64460
[541339.888551] .se->load.weight : 2
[541339.888552] .se->avg.runnable_avg_sum : 16
[541339.888553] .se->avg.runnable_avg_period : 46724
[541339.888554] .se->avg.load_avg_contrib : 0
[541339.888555] .se->avg.decay_count : 516261666
[541339.888556]
[541339.888556] cfs_rq[14]:/autogroup-11408
[541339.888557] .exec_clock : 231406.301289
[541339.888558] .MIN_vruntime : 126902.328779
[541339.888559] .min_vruntime : 126909.336080
[541339.888560] .max_vruntime : 126902.328779
[541339.888561] .spread : 0.000000
[541339.888562] .spread0 : -163075672.474318
[541339.888564] .nr_spread_over : 0
[541339.888565] .nr_running : 1
[541339.888567] .load : 1024
[541339.888569] .runnable_load_avg : 70
[541339.888570] .blocked_load_avg : 389
[541339.888571] .tg_load_contrib : 463
[541339.888572] .tg_runnable_contrib : 94
[541339.888573] .tg_load_avg : 6156
[541339.888574] .tg->runnable_avg : 4558
[541339.888575] .tg->cfs_bandwidth.timer_active: 0
[541339.888575] .throttled : 0
[541339.888576] .throttle_count : 0
[541339.888577] .se->exec_start : 541339886.936861
[541339.888578] .se->vruntime : 158283604.327912
[541339.888579] .se->sum_exec_runtime : 231423.298354
[541339.888580] .se->statistics.wait_start : 541339888.421476
[541339.888581] .se->statistics.sleep_start : 0.000000
[541339.888582] .se->statistics.block_start : 0.000000
[541339.888583] .se->statistics.sleep_max : 0.000000
[541339.888584] .se->statistics.block_max : 0.000000
[541339.888585] .se->statistics.exec_max : 3.998010
[541339.888586] .se->statistics.slice_max : 13.628987
[541339.888587] .se->statistics.wait_max : 60.056107
[541339.888588] .se->statistics.wait_sum : 151255.113986
[541339.888589] .se->statistics.wait_count : 198867
[541339.888590] .se->load.weight : 156
[541339.888591] .se->avg.runnable_avg_sum : 4249
[541339.888593] .se->avg.runnable_avg_period : 46239
[541339.888594] .se->avg.load_avg_contrib : 77
[541339.888599] .se->avg.decay_count : 0
[541339.888605]
[541339.888605] cfs_rq[14]:/autogroup-11415
[541339.888611] .exec_clock : 42578.735751
[541339.888619] .MIN_vruntime : 0.000001
[541339.888625] .min_vruntime : 28366.318149
[541339.888631] .max_vruntime : 0.000001
[541339.888637] .spread : 0.000000
[541339.888642] .spread0 : -163174215.492249
[541339.888647] .nr_spread_over : 0
[541339.888652] .nr_running : 0
[541339.888658] .load : 0
[541339.888664] .runnable_load_avg : 0
[541339.888670] .blocked_load_avg : 0
[541339.888676] .tg_load_contrib : 0
[541339.888680] .tg_runnable_contrib : 9
[541339.888685] .tg_load_avg : 1418
[541339.888691] .tg->runnable_avg : 1107
[541339.888697] .tg->cfs_bandwidth.timer_active: 0
[541339.888701] .throttled : 0
[541339.888707] .throttle_count : 0
[541339.888713] .se->exec_start : 541339709.931395
[541339.888716] .se->vruntime : 158283143.772217
[541339.888722] .se->sum_exec_runtime : 42585.136815
[541339.888726] .se->statistics.wait_start : 0.000000
[541339.888733] .se->statistics.sleep_start : 0.000000
[541339.888735] .se->statistics.block_start : 0.000000
[541339.888736] .se->statistics.sleep_max : 0.000000
[541339.888737] .se->statistics.block_max : 0.000000
[541339.888738] .se->statistics.exec_max : 3.997200
[541339.888739] .se->statistics.slice_max : 6.761555
[541339.888740] .se->statistics.wait_max : 37.403031
[541339.888741] .se->statistics.wait_sum : 49863.522900
[541339.888742] .se->statistics.wait_count : 160435
[541339.888743] .se->load.weight : 2
[541339.888744] .se->avg.runnable_avg_sum : 423
[541339.888745] .se->avg.runnable_avg_period : 47602
[541339.888745] .se->avg.load_avg_contrib : 0
[541339.888746] .se->avg.decay_count : 516261778
[541339.888748]
[541339.888748] cfs_rq[14]:/autogroup-11432
[541339.888749] .exec_clock : 182243.006166
[541339.888750] .MIN_vruntime : 0.000001
[541339.888751] .min_vruntime : 155149.072522
[541339.888752] .max_vruntime : 0.000001
[541339.888753] .spread : 0.000000
[541339.888754] .spread0 : -163047432.737876
[541339.888755] .nr_spread_over : 605
[541339.888756] .nr_running : 0
[541339.888757] .load : 0
[541339.888758] .runnable_load_avg : 0
[541339.888759] .blocked_load_avg : 293
[541339.888760] .tg_load_contrib : 293
[541339.888760] .tg_runnable_contrib : 640
[541339.888761] .tg_load_avg : 1831
[541339.888762] .tg->runnable_avg : 1827
[541339.888763] .tg->cfs_bandwidth.timer_active: 0
[541339.888764] .throttled : 0
[541339.888765] .throttle_count : 0
[541339.888766] .se->exec_start : 541339876.158220
[541339.888767] .se->vruntime : 158283547.373249
[541339.888768] .se->sum_exec_runtime : 182256.748252
[541339.888769] .se->statistics.wait_start : 0.000000
[541339.888770] .se->statistics.sleep_start : 0.000000
[541339.888771] .se->statistics.block_start : 0.000000
[541339.888772] .se->statistics.sleep_max : 0.000000
[541339.888773] .se->statistics.block_max : 0.000000
[541339.888774] .se->statistics.exec_max : 4.003662
[541339.888775] .se->statistics.slice_max : 46.215544
[541339.888776] .se->statistics.wait_max : 33.074558
[541339.888777] .se->statistics.wait_sum : 82332.601228
[541339.888778] .se->statistics.wait_count : 203170
[541339.888779] .se->load.weight : 2
[541339.888780] .se->avg.runnable_avg_sum : 29818
[541339.888781] .se->avg.runnable_avg_period : 47634
[541339.888782] .se->avg.load_avg_contrib : 117
[541339.888783] .se->avg.decay_count : 516261937
[541339.888784]
[541339.888784] cfs_rq[14]:/autogroup-11406
[541339.888785] .exec_clock : 273265.924514
[541339.888786] .MIN_vruntime : 0.000001
[541339.888787] .min_vruntime : 163693.761041
[541339.888788] .max_vruntime : 0.000001
[541339.888789] .spread : 0.000000
[541339.888790] .spread0 : -163038888.049357
[541339.888791] .nr_spread_over : 2
[541339.888792] .nr_running : 1
[541339.888793] .load : 1024
[541339.888793] .runnable_load_avg : 640
[541339.888794] .blocked_load_avg : 170
[541339.888795] .tg_load_contrib : 810
[541339.888796] .tg_runnable_contrib : 284
[541339.888797] .tg_load_avg : 13168
[541339.888798] .tg->runnable_avg : 6852
[541339.888799] .tg->cfs_bandwidth.timer_active: 0
[541339.888800] .throttled : 0
[541339.888801] .throttle_count : 0
[541339.888802] .se->exec_start : 541339888.421476
[541339.888803] .se->vruntime : 158283616.327912
[541339.888804] .se->sum_exec_runtime : 273297.777203
[541339.888805] .se->statistics.wait_start : 0.000000
[541339.888806] .se->statistics.sleep_start : 0.000000
[541339.888807] .se->statistics.block_start : 0.000000
[541339.888807] .se->statistics.sleep_max : 0.000000
[541339.888808] .se->statistics.block_max : 0.000000
[541339.888809] .se->statistics.exec_max : 3.997962
[541339.888810] .se->statistics.slice_max : 17.997091
[541339.888811] .se->statistics.wait_max : 49.218412
[541339.888812] .se->statistics.wait_sum : 275795.881174
[541339.888813] .se->statistics.wait_count : 955124
[541339.888814] .se->load.weight : 78
[541339.888815] .se->avg.runnable_avg_sum : 13331
[541339.888816] .se->avg.runnable_avg_period : 46615
[541339.888817] .se->avg.load_avg_contrib : 64
[541339.888818] .se->avg.decay_count : 516261946
[541339.888819]
[541339.888819] cfs_rq[14]:/
[541339.888820] .exec_clock : 34637426.276161
[541339.888821] .MIN_vruntime : 158283604.327912
[541339.888822] .min_vruntime : 158283616.327912
[541339.888823] .max_vruntime : 158283604.327912
[541339.888824] .spread : 0.000000
[541339.888825] .spread0 : -4918965.482486
[541339.888826] .nr_spread_over : 20155
[541339.888827] .nr_running : 2
[541339.888828] .load : 234
[541339.888829] .runnable_load_avg : 141
[541339.888830] .blocked_load_avg : 174
[541339.888831] .tg_load_contrib : 319
[541339.888832] .tg_runnable_contrib : 825
[541339.888833] .tg_load_avg : 8459
[541339.888833] .tg->runnable_avg : 10698
[541339.888834] .tg->cfs_bandwidth.timer_active: 0
[541339.888835] .throttled : 0
[541339.888836] .throttle_count : 0
[541339.888837] .avg->runnable_avg_sum : 38911
[541339.888838] .avg->runnable_avg_period : 47934
[541339.888839]
[541339.888839] rt_rq[14]:
[541339.888840] .rt_nr_running : 0
[541339.888841] .rt_throttled : 0
[541339.888842] .rt_time : 0.000000
[541339.888843] .rt_runtime : 950.000000
[541339.888844]
[541339.888844] runnable tasks:
[541339.888844] task PID tree-key switches prio exec-runtime sum-exec sum-sleep
[541339.888844] ----------------------------------------------------------------------------------------------------------
[541339.888850] watchdog/14 108 -11.971412 135471 0 -11.971412 2897.760839 0.002114 0 /
[541339.888854] migration/14 109 0.000000 181181 0 0.000000 4708.405690 0.001295 0 /
[541339.888857] ksoftirqd/14 110 158283507.165911 420074 120 158283507.165911 5677.644803 541254517.335977 0 /
[541339.888861] kworker/14:0H 112 14411.138847 8 100 14411.138847 0.073916 115000.783594 0 /
[541339.888864] khungtaskd 121 157952150.069248 4528 120 157952150.069248 961.271790 541265449.540889 0 /
[541339.888869] scsi_tmf_0 264 107.920119 2 100 107.920119 0.011304 0.005479 0 /
[541339.888872] scsi_tmf_1 266 131.946870 2 100 131.946870 0.018342 0.008389 0 /
[541339.888875] scsi_tmf_2 268 155.965860 2 100 155.965860 0.010160 0.006137 0 /
[541339.888878] scsi_tmf_3 270 179.986407 2 100 179.986407 0.010730 0.005105 0 /
[541339.888881] scsi_tmf_4 272 204.005983 2 100 204.005983 0.010402 0.005130 0 /
[541339.888885] scsi_tmf_5 276 240.030889 2 100 240.030889 0.009722 0.005108 0 /
[541339.888889] usb-storage 336 158279618.232556 5298613 120 158279618.232556 111697.097428 541214114.454133 0 /
[541339.888913] bioset 377 593.473879 2 100 593.473879 0.006731 0.003027 0 /
[541339.888940] edac-poller 689 1048.951079 2 100 1048.951079 0.008371 0.012035 0 /
[541339.888966] ext4-rsv-conver 1153 2210.263254 2 100 2210.263254 0.006833 0.098755 0 /
[541339.888993] kdmflush 1392 2297.404798 2 100 2297.404798 0.043521 0.135440 0 /
[541339.889022] kdmflush 1393 2309.380243 2 100 2309.380243 0.014686 0.016651 0 /
[541339.889032] bioset 1394 2317.509442 2 100 2317.509442 0.032421 0.002963 0 /
[541339.889035] kdmflush 1467 2395.199651 2 100 2395.199651 0.009002 0.004442 0 /
[541339.889040] kdmflush 1731 2713.273842 2 100 2713.273842 0.010074 0.004958 0 /
[541339.889048] java 3110 39346.065739 784 120 39346.065739 2196.560680 539899916.025507 0 /autogroup-264
[541339.889054] java 3173 17162.524561 5542 120 17162.524561 3611.811291 541176361.636886 0 /autogroup-257
[541339.889058] java 3184 17162.377437 5870 120 17162.377437 3565.827935 541176362.093782 0 /autogroup-257
[541339.889061] java 3185 17162.472134 5186 120 17162.472134 3521.806169 541176367.513451 0 /autogroup-257
[541339.889065] java 3187 17162.470709 4973 120 17162.470709 3639.387088 541176349.571580 0 /autogroup-257
[541339.889068] java 3188 17162.471149 5193 120 17162.471149 1800.894257 541178155.087348 0 /autogroup-257
[541339.889072] java 3192 17162.473683 5626 120 17162.473683 3600.462476 541176353.256993 0 /autogroup-257
[541339.889077] java 3262 39270.612323 543 120 39270.612323 191.310063 538548097.472382 0 /autogroup-264
[541339.889082] java 3504 39598.664011 10829516 120 39598.664011 444545.244798 540792299.434143 0 /autogroup-264
[541339.889088] java 19857 39594.818694 5 120 39594.818694 0.143857 0.036037 0 /autogroup-264
[541339.889094] nsrexecd 4713 189.573304 191097 120 189.573304 8513.693026 541268531.417977 0 /autogroup-349
[541339.889099] java 5222 40855.982192 14331 120 40855.982192 8129.905579 541186053.640631 0 /autogroup-358
[541339.889107] kworker/14:1H 5700 158266299.802629 5373 100 158266299.802629 72.437481 541216424.997559 0 /
[541339.889111] java 387 179548.521188 13888 120 179548.521188 3067.302749 123557608.469486 0 /autogroup-8620
[541339.889115] java 396 169821.297047 820 120 169821.297047 18.070004 121925642.503988 0 /autogroup-8620
[541339.889119] java 425 178603.811196 2601 120 178603.811196 271.466450 123478971.860258 0 /autogroup-8620
[541339.889124] mysqld 19143 7801.695894 1 120 7801.695894 0.144789 0.000000 0 /autogroup-8936
[541339.889128] mysqld 19165 7899.564410 1 120 7899.564410 0.068236 0.000000 0 /autogroup-8936
[541339.889134] mysqld 16352 305181.221135 83 120 305181.221135 16.337048 449517.560969 0 /autogroup-8936
[541339.889138] mysqld 19786 305083.236520 12 120 305083.236520 1.565300 1.278103 0 /autogroup-8936
[541339.889141] mysqld 19787 305096.815995 12 120 305096.815995 1.579482 1.237216 0 /autogroup-8936
[541339.889144] mysqld 19788 305110.326668 12 120 305110.326668 1.510680 1.217454 0 /autogroup-8936
[541339.889148] mysqld 19789 305124.176601 12 120 305124.176601 1.849940 1.230170 0 /autogroup-8936
[541339.889152] jfsCommit 23562 114354005.479028 226682 120 114354005.479028 5710.542866 7730055.512426 0 /
[541339.889156] dragent 5088 179695.662542 25703 120 179695.662542 897.206457 25657668.577709 0 /autogroup-8620
[541339.889162] PassengerWatchd 5388 44.457130 2 120 44.457130 0.135526 0.014302 0 /autogroup-11395
[541339.889165] PassengerHelper 5359 17.662832 11 120 17.662832 6.057417 17.423727 0 /autogroup-11395
[541339.889170] apache2 5391 1653.507844 2 120 1653.507844 18.516390 9.286920 0 /autogroup-356
[541339.889178] swift-object-se 7261 163667.625074 50030 120 163667.625074 1730.573833 1639638.786897 0 /autogroup-11406
[541339.889183] swift-object-se 8077 163664.937264 47889 120 163664.937264 1697.920270 1577426.658333 0 /autogroup-11406
[541339.889191] swift-object-se 6609 163661.645091 47031 120 163661.645091 1669.854277 1702693.168791 0 /autogroup-11406
[541339.889202] swift-object-se 6621 163633.777900 49328 120 163633.777900 1707.360497 1703363.781316 0 /autogroup-11406
[541339.889229] swift-object-se 8068 163660.587860 47539 120 163660.587860 1681.784679 1578516.143100 0 /autogroup-11406
[541339.889252] swift-object-se 7728 163674.868194 49238 120 163674.868194 1687.899648 1578854.730812 0 /autogroup-11406
[541339.889257] swift-object-se 7172 163660.605543 52683 120 163660.605543 1783.832234 1642524.069538 0 /autogroup-11406
[541339.889260] swift-object-se 7173 163660.637840 51555 120 163660.637840 1771.252030 1642357.096507 0 /autogroup-11406
[541339.889266] swift-object-se 7804 163674.893563 48306 120 163674.893563 1694.898135 1578755.383807 0 /autogroup-11406
[541339.889270] swift-object-se 6671 163672.752857 50736 120 163672.752857 1725.068030 1672632.549824 0 /autogroup-11406
[541339.889276] swift-object-se 7836 163667.574837 49461 120 163667.574837 1716.292417 1578785.716856 0 /autogroup-11406
[541339.889280] swift-object-se 6547 163670.607237 52406 120 163670.607237 1809.199669 1731580.088538 0 /autogroup-11406
[541339.889285] swift-object-se 8072 163678.147075 49210 120 163678.147075 1695.278514 1578373.415608 0 /autogroup-11406
[541339.889288] swift-object-se 8073 163678.155843 47038 120 163678.155843 1713.119764 1578139.885625 0 /autogroup-11406
[541339.889295] swift-object-se 7856 163593.772947 48718 120 163593.772947 1691.644798 1577853.307827 0 /autogroup-11406
[541339.889298] swift-object-se 7998 163673.146797 48614 120 163673.146797 1703.150967 1578865.767954 0 /autogroup-11406
[541339.889302] swift-object-se 6484 163694.818202 905260 120 163694.818202 214831.838856 1101549.084022 0 /autogroup-11406
[541339.889307] swift-object-se 7755 163678.037473 50775 120 163678.037473 1752.832454 1578847.383772 0 /autogroup-11406
[541339.889327] swift-object-se 7931 163680.897190 46547 120 163680.897190 1669.295700 1578738.906591 0 /autogroup-11406
[541339.889330] swift-object-se 7938 163681.487286 46586 120 163681.487286 1683.303799 1578758.758690 0 /autogroup-11406
[541339.889335] Rswift-proxy-ser 6509 126902.328779 144933 120 126902.328779 132174.862289 1425178.124122 0 /autogroup-11408
[541339.889339] swift-proxy-ser 6524 126909.336080 134526 120 126909.336080 126259.999587 1436445.635402 0 /autogroup-11408
[541339.889345] java 6804 1090.719079 6125 120 1090.719079 488.557106 1667318.416643 0 /autogroup-11418
[541339.889359] java 6884 660.688459 11 120 660.688459 4.856043 15290.111873 0 /autogroup-11424
[541339.889364] java 6900 11577.433637 3118 120 11577.433637 2578.549086 1660553.713523 0 /autogroup-11424
[541339.889369] java 6955 39.062091 4 120 39.062091 0.092975 0.014236 0 /autogroup-11424
[541339.889374] java 6957 51.211977 4 120 51.211977 0.149893 0.011098 0 /autogroup-11424
[541339.889379] java 6958 63.250778 2 120 63.250778 0.038808 0.003600 0 /autogroup-11424
[541339.889384] java 7054 486.935007 3 120 486.935007 1.050766 26.891533 0 /autogroup-11424
[541339.889390] java 7057 524.165899 7 120 524.165899 0.742408 179.298293 0 /autogroup-11424
[541339.889396] java 7216 586.132182 22 120 586.132182 1.651094 2.161967 0 /autogroup-11424
[541339.889400] java 7322 11584.695088 7508 120 11584.695088 6085.515862 1644415.414679 0 /autogroup-11424
[541339.889404] java 7340 743.951408 15 120 743.951408 59.339739 200.108213 0 /autogroup-11424
[541339.889407] java 7346 11553.835909 13127 120 11553.835909 455.760652 1648937.893624 0 /autogroup-11424
[541339.889410] java 7377 11053.997499 19 120 11053.997499 1.865752 1563782.061504 0 /autogroup-11424
[541339.889415] java 7613 11570.403563 7482 120 11570.403563 6062.291181 1586155.119521 0 /autogroup-11424
[541339.889420] magfsd 7116 154757.522061 336 120 154757.522061 10.913577 1658280.296582 0 /autogroup-11432
[541339.889425] magfsd 19782 155030.995095 3626 120 155030.995095 772.149478 44245.167042 0 /autogroup-11432
[541339.889429] magfsd 19893 155127.189146 1064 120 155127.189146 878.769675 14430.144658 0 /autogroup-11432
[541339.889433] kworker/14:1 8191 158275965.180451 38179 120 158275965.180451 728.573240 1562440.032028 0 /
[541339.889436] sshd 8200 -11.517123 84 120 -11.517123 37.001888 13344.525422 0 /autogroup-11440
[541339.889441] kworker/14:3 8665 158283535.404740 33324 120 158283535.404740 645.949898 1439134.374024 0 /
[541339.889444] kworker/14:4 10940 157891453.290485 3149 120 157891453.290485 60.373621 1170725.154664 0 /
[541339.889449] kworker/14:2 16548 158280004.990713 11325 120 158280004.990713 228.293623 439866.499170 0 /
[541339.889453] kworker/14:0 19666 158245955.605583 69 120 158245955.605583 0.610186 70044.601254 0 /
[541339.889458]
[541339.889459] cpu#15, 2199.987 MHz
[541339.889460] .nr_running : 1
[541339.889461] .load : 77
[541339.889462] .nr_switches : 271479167
[541339.889463] .nr_load_updates : 21825988
[541339.889464] .nr_uninterruptible : 313685
[541339.889465] .next_balance : 4430.360681
[541339.889466] .curr->pid : 6482
[541339.889467] .clock : 541339889.124286
[541339.889468] .cpu_load[0] : 192
[541339.889469] .cpu_load[1] : 175
[541339.889470] .cpu_load[2] : 171
[541339.889471] .cpu_load[3] : 145
[541339.889472] .cpu_load[4] : 106
[541339.889473] .yld_count : 13183031
[541339.889474] .sched_count : 284699537
[541339.889475] .sched_goidle : 101846710
[541339.889476] .avg_idle : 243292
[541339.889477] .max_idle_balance_cost : 500000
[541339.889478] .ttwu_count : 132221320
[541339.889478] .ttwu_local : 21994910
[541339.889480]
[541339.889480] cfs_rq[15]:/autogroup-8936
[541339.889481] .exec_clock : 288730.588985
[541339.889482] .MIN_vruntime : 0.000001
[541339.889483] .min_vruntime : 302737.820377
[541339.889484] .max_vruntime : 0.000001
[541339.889485] .spread : 0.000000
[541339.889486] .spread0 : -162899843.990021
[541339.889487] .nr_spread_over : 1443
[541339.889488] .nr_running : 0
[541339.889489] .load : 0
[541339.889490] .runnable_load_avg : 0
[541339.889491] .blocked_load_avg : 0
[541339.889492] .tg_load_contrib : 0
[541339.889493] .tg_runnable_contrib : 0
[541339.889494] .tg_load_avg : 86
[541339.889495] .tg->runnable_avg : 127
[541339.889496] .tg->cfs_bandwidth.timer_active: 0
[541339.889497] .throttled : 0
[541339.889498] .throttle_count : 0
[541339.889499] .se->exec_start : 541339851.634840
[541339.889500] .se->vruntime : 158270418.121479
[541339.889501] .se->sum_exec_runtime : 288741.391073
[541339.889502] .se->statistics.wait_start : 0.000000
[541339.889503] .se->statistics.sleep_start : 0.000000
[541339.889504] .se->statistics.block_start : 0.000000
[541339.889505] .se->statistics.sleep_max : 0.000000
[541339.889506] .se->statistics.block_max : 0.000000
[541339.889507] .se->statistics.exec_max : 4.046228
[541339.889508] .se->statistics.slice_max : 116.871191
[541339.889509] .se->statistics.wait_max : 18.708441
[541339.889510] .se->statistics.wait_sum : 20760.823634
[541339.889511] .se->statistics.wait_count : 421949
[541339.889512] .se->load.weight : 2
[541339.889513] .se->avg.runnable_avg_sum : 6
[541339.889514] .se->avg.runnable_avg_period : 47548
[541339.889515] .se->avg.load_avg_contrib : 0
[541339.889516] .se->avg.decay_count : 516261913
[541339.889517]
[541339.889517] cfs_rq[15]:/autogroup-356
[541339.889518] .exec_clock : 1320.254526
[541339.889519] .MIN_vruntime : 0.000001
[541339.889520] .min_vruntime : 1759.347988
[541339.889521] .max_vruntime : 0.000001
[541339.889522] .spread : 0.000000
[541339.889523] .spread0 : -163200822.462410
[541339.889524] .nr_spread_over : 13
[541339.889525] .nr_running : 0
[541339.889526] .load : 0
[541339.889527] .runnable_load_avg : 0
[541339.889528] .blocked_load_avg : 0
[541339.889529] .tg_load_contrib : 0
[541339.889530] .tg_runnable_contrib : 0
[541339.889531] .tg_load_avg : 30
[541339.889532] .tg->runnable_avg : 31
[541339.889533] .tg->cfs_bandwidth.timer_active: 0
[541339.889534] .throttled : 0
[541339.889535] .throttle_count : 0
[541339.889536] .se->exec_start : 541339846.046624
[541339.889537] .se->vruntime : 158270414.525521
[541339.889538] .se->sum_exec_runtime : 1321.568655
[541339.889539] .se->statistics.wait_start : 0.000000
[541339.889540] .se->statistics.sleep_start : 0.000000
[541339.889541] .se->statistics.block_start : 0.000000
[541339.889542] .se->statistics.sleep_max : 0.000000
[541339.889543] .se->statistics.block_max : 0.000000
[541339.889544] .se->statistics.exec_max : 3.998043
[541339.889545] .se->statistics.slice_max : 0.784108
[541339.889546] .se->statistics.wait_max : 182.774432
[541339.889547] .se->statistics.wait_sum : 631.214521
[541339.889548] .se->statistics.wait_count : 34538
[541339.889549] .se->load.weight : 2
[541339.889550] .se->avg.runnable_avg_sum : 9
[541339.889551] .se->avg.runnable_avg_period : 46944
[541339.889552] .se->avg.load_avg_contrib : 0
[541339.889553] .se->avg.decay_count : 516261908
[541339.889554]
[541339.889554] cfs_rq[15]:/autogroup-11424
[541339.889555] .exec_clock : 25943.993181
[541339.889556] .MIN_vruntime : 0.000001
[541339.889557] .min_vruntime : 12368.975094
[541339.889558] .max_vruntime : 0.000001
[541339.889559] .spread : 0.000000
[541339.889560] .spread0 : -163190212.835304
[541339.889561] .nr_spread_over : 6
[541339.889562] .nr_running : 0
[541339.889563] .load : 0
[541339.889564] .runnable_load_avg : 0
[541339.889565] .blocked_load_avg : 0
[541339.889566] .tg_load_contrib : 0
[541339.889567] .tg_runnable_contrib : 0
[541339.889568] .tg_load_avg : 102
[541339.889569] .tg->runnable_avg : 133
[541339.889570] .tg->cfs_bandwidth.timer_active: 0
[541339.889571] .throttled : 0
[541339.889572] .throttle_count : 0
[541339.889573] .se->exec_start : 541339793.211952
[541339.889574] .se->vruntime : 158270372.058686
[541339.889575] .se->sum_exec_runtime : 25946.533242
[541339.889576] .se->statistics.wait_start : 0.000000
[541339.889577] .se->statistics.sleep_start : 0.000000
[541339.889578] .se->statistics.block_start : 0.000000
[541339.889579] .se->statistics.sleep_max : 0.000000
[541339.889580] .se->statistics.block_max : 0.000000
[541339.889581] .se->statistics.exec_max : 4.007810
[541339.889582] .se->statistics.slice_max : 46.473750
[541339.889583] .se->statistics.wait_max : 16.251432
[541339.889584] .se->statistics.wait_sum : 7082.530204
[541339.889585] .se->statistics.wait_count : 67305
[541339.889586] .se->load.weight : 2
[541339.889587] .se->avg.runnable_avg_sum : 21
[541339.889588] .se->avg.runnable_avg_period : 46221
[541339.889589] .se->avg.load_avg_contrib : 0
[541339.889589] .se->avg.decay_count : 516261858
[541339.889591]
[541339.889591] cfs_rq[15]:/autogroup-11329
[541339.889592] .exec_clock : 4431.326159
[541339.889593] .MIN_vruntime : 0.000001
[541339.889594] .min_vruntime : 3593.863713
[541339.889595] .max_vruntime : 0.000001
[541339.889596] .spread : 0.000000
[541339.889597] .spread0 : -163198987.946685
[541339.889598] .nr_spread_over : 11
[541339.889599] .nr_running : 0
[541339.889600] .load : 0
[541339.889601] .runnable_load_avg : 0
[541339.889601] .blocked_load_avg : 0
[541339.889602] .tg_load_contrib : 0
[541339.889603] .tg_runnable_contrib : 0
[541339.889604] .tg_load_avg : 395
[541339.889605] .tg->runnable_avg : 407
[541339.889606] .tg->cfs_bandwidth.timer_active: 0
[541339.889607] .throttled : 0
[541339.889608] .throttle_count : 0
[541339.889609] .se->exec_start : 541339733.950377
[541339.889610] .se->vruntime : 158270308.672361
[541339.889611] .se->sum_exec_runtime : 4432.200429
[541339.889612] .se->statistics.wait_start : 0.000000
[541339.889613] .se->statistics.sleep_start : 0.000000
[541339.889614] .se->statistics.block_start : 0.000000
[541339.889615] .se->statistics.sleep_max : 0.000000
[541339.889616] .se->statistics.block_max : 0.000000
[541339.889617] .se->statistics.exec_max : 20.920180
[541339.889618] .se->statistics.slice_max : 0.262107
[541339.889619] .se->statistics.wait_max : 57.086919
[541339.889620] .se->statistics.wait_sum : 1919.714564
[541339.889621] .se->statistics.wait_count : 56949
[541339.889622] .se->load.weight : 2
[541339.889623] .se->avg.runnable_avg_sum : 43
[541339.889624] .se->avg.runnable_avg_period : 46784
[541339.889625] .se->avg.load_avg_contrib : 0
[541339.889626] .se->avg.decay_count : 516261801
[541339.889627]
[541339.889627] cfs_rq[15]:/autogroup-11415
[541339.889629] .exec_clock : 45528.360628
[541339.889630] .MIN_vruntime : 0.000001
[541339.889631] .min_vruntime : 29790.155772
[541339.889631] .max_vruntime : 0.000001
[541339.889632] .spread : 0.000000
[541339.889633] .spread0 : -163172791.654626
[541339.889634] .nr_spread_over : 0
[541339.889635] .nr_running : 0
[541339.889636] .load : 0
[541339.889637] .runnable_load_avg : 0
[541339.889638] .blocked_load_avg : 64
[541339.889639] .tg_load_contrib : 64
[541339.889640] .tg_runnable_contrib : 62
[541339.889641] .tg_load_avg : 1418
[541339.889642] .tg->runnable_avg : 1100
[541339.889643] .tg->cfs_bandwidth.timer_active: 0
[541339.889644] .throttled : 0
[541339.889645] .throttle_count : 0
[541339.889646] .se->exec_start : 541339860.163710
[541339.889647] .se->vruntime : 158270432.873572
[541339.889648] .se->sum_exec_runtime : 45535.154076
[541339.889649] .se->statistics.wait_start : 0.000000
[541339.889650] .se->statistics.sleep_start : 0.000000
[541339.889651] .se->statistics.block_start : 0.000000
[541339.889652] .se->statistics.sleep_max : 0.000000
[541339.889653] .se->statistics.block_max : 0.000000
[541339.889654] .se->statistics.exec_max : 3.997004
[541339.889655] .se->statistics.slice_max : 8.080614
[541339.889656] .se->statistics.wait_max : 15.704202
[541339.889658] .se->statistics.wait_sum : 51295.702272
[541339.889660] .se->statistics.wait_count : 165727
[541339.889661] .se->load.weight : 2
[541339.889662] .se->avg.runnable_avg_sum : 2899
[541339.889664] .se->avg.runnable_avg_period : 47321
[541339.889665] .se->avg.load_avg_contrib : 44
[541339.889666] .se->avg.decay_count : 516261922
[541339.889668]
[541339.889668] cfs_rq[15]:/autogroup-11432
[541339.889670] .exec_clock : 172117.350066
[541339.889671] .MIN_vruntime : 0.000001
[541339.889672] .min_vruntime : 151049.825485
[541339.889673] .max_vruntime : 0.000001
[541339.889675] .spread : 0.000000
[541339.889676] .spread0 : -163051531.984913
[541339.889677] .nr_spread_over : 900
[541339.889679] .nr_running : 0
[541339.889680] .load : 0
[541339.889681] .runnable_load_avg : 0
[541339.889682] .blocked_load_avg : 28
[541339.889684] .tg_load_contrib : 28
[541339.889685] .tg_runnable_contrib : 37
[541339.889687] .tg_load_avg : 1857
[541339.889688] .tg->runnable_avg : 1851
[541339.889689] .tg->cfs_bandwidth.timer_active: 0
[541339.889691] .throttled : 0
[541339.889692] .throttle_count : 0
[541339.889694] .se->exec_start : 541339889.051446
[541339.889696] .se->vruntime : 158270504.894252
[541339.889697] .se->sum_exec_runtime : 172132.383799
[541339.889698] .se->statistics.wait_start : 0.000000
[541339.889700] .se->statistics.sleep_start : 0.000000
[541339.889701] .se->statistics.block_start : 0.000000
[541339.889702] .se->statistics.sleep_max : 0.000000
[541339.889704] .se->statistics.block_max : 0.000000
[541339.889705] .se->statistics.exec_max : 4.001430
[541339.889706] .se->statistics.slice_max : 60.462052
[541339.889708] .se->statistics.wait_max : 59.644740
[541339.889709] .se->statistics.wait_sum : 81957.389297
[541339.889711] .se->statistics.wait_count : 200025
[541339.889712] .se->load.weight : 2
[541339.889714] .se->avg.runnable_avg_sum : 1727
[541339.889715] .se->avg.runnable_avg_period : 47634
[541339.889716] .se->avg.load_avg_contrib : 15
[541339.889718] .se->avg.decay_count : 516261949
[541339.889720]
[541339.889720] cfs_rq[15]:/autogroup-11408
[541339.889722] .exec_clock : 236002.363313
[541339.889724] .MIN_vruntime : 0.000001
[541339.889726] .min_vruntime : 129525.692970
[541339.889727] .max_vruntime : 0.000001
[541339.889729] .spread : 0.000000
[541339.889730] .spread0 : -163073056.117428
[541339.889732] .nr_spread_over : 0
[541339.889734] .nr_running : 0
[541339.889735] .load : 0
[541339.889737] .runnable_load_avg : 0
[541339.889739] .blocked_load_avg : 841
[541339.889740] .tg_load_contrib : 819
[541339.889742] .tg_runnable_contrib : 382
[541339.889744] .tg_load_avg : 6035
[541339.889745] .tg->runnable_avg : 4629
[541339.889747] .tg->cfs_bandwidth.timer_active: 0
[541339.889749] .throttled : 0
[541339.889750] .throttle_count : 0
[541339.889752] .se->exec_start : 541339889.124286
[541339.889754] .se->vruntime : 158270518.406969
[541339.889756] .se->sum_exec_runtime : 236019.568993
[541339.889757] .se->statistics.wait_start : 0.000000
[541339.889759] .se->statistics.sleep_start : 0.000000
[541339.889760] .se->statistics.block_start : 0.000000
[541339.889762] .se->statistics.sleep_max : 0.000000
[541339.889764] .se->statistics.block_max : 0.000000
[541339.889765] .se->statistics.exec_max : 3.998594
[541339.889767] .se->statistics.slice_max : 19.989240
[541339.889769] .se->statistics.wait_max : 50.905721
[541339.889771] .se->statistics.wait_sum : 153348.005026
[541339.889772] .se->statistics.wait_count : 199419
[541339.889774] .se->load.weight : 2
[541339.889775] .se->avg.runnable_avg_sum : 17744
[541339.889777] .se->avg.runnable_avg_period : 47146
[541339.889778] .se->avg.load_avg_contrib : 138
[541339.889780] .se->avg.decay_count : 516261949
[541339.889782]
[541339.889782] cfs_rq[15]:/autogroup-11406
[541339.889784] .exec_clock : 271684.916016
[541339.889786] .MIN_vruntime : 0.000001
[541339.889787] .min_vruntime : 162409.517349
[541339.889789] .max_vruntime : 0.000001
[541339.889791] .spread : 0.000000
[541339.889792] .spread0 : -163040172.293049
[541339.889794] .nr_spread_over : 2
[541339.889796] .nr_running : 1
[541339.889797] .load : 1024
[541339.889799] .runnable_load_avg : 426
[541339.889801] .blocked_load_avg : 175
[541339.889802] .tg_load_contrib : 567
[541339.889804] .tg_runnable_contrib : 480
[541339.889805] .tg_load_avg : 13739
[541339.889807] .tg->runnable_avg : 6910
[541339.889809] .tg->cfs_bandwidth.timer_active: 0
[541339.889810] .throttled : 0
[541339.889812] .throttle_count : 0
[541339.889814] .se->exec_start : 541339889.124286
[541339.889815] .se->vruntime : 158270518.633534
[541339.889817] .se->sum_exec_runtime : 271718.050090
[541339.889819] .se->statistics.wait_start : 0.000000
[541339.889820] .se->statistics.sleep_start : 0.000000
[541339.889822] .se->statistics.block_start : 0.000000
[541339.889823] .se->statistics.sleep_max : 0.000000
[541339.889825] .se->statistics.block_max : 0.000000
[541339.889826] .se->statistics.exec_max : 4.016427
[541339.889828] .se->statistics.slice_max : 13.063800
[541339.889829] .se->statistics.wait_max : 52.310406
[541339.889831] .se->statistics.wait_sum : 276644.819175
[541339.889832] .se->statistics.wait_count : 946425
[541339.889834] .se->load.weight : 73
[541339.889835] .se->avg.runnable_avg_sum : 23462
[541339.889837] .se->avg.runnable_avg_period : 47951
[541339.889838] .se->avg.load_avg_contrib : 42
[541339.889839] .se->avg.decay_count : 0
[541339.889841]
[541339.889841] cfs_rq[15]:/
[541339.889843] .exec_clock : 34667334.775843
[541339.889844] .MIN_vruntime : 0.000001
[541339.889846] .min_vruntime : 158270527.992942
[541339.889847] .max_vruntime : 0.000001
[541339.889848] .spread : 0.000000
[541339.889850] .spread0 : -4932053.817456
[541339.889851] .nr_spread_over : 19936
[541339.889852] .nr_running : 1
[541339.889854] .load : 137
[541339.889856] .runnable_load_avg : 42
[541339.889857] .blocked_load_avg : 170
[541339.889859] .tg_load_contrib : 214
[541339.889860] .tg_runnable_contrib : 628
[541339.889862] .tg_load_avg : 10102
[541339.889863] .tg->runnable_avg : 10796
[541339.889864] .tg->cfs_bandwidth.timer_active: 0
[541339.889866] .throttled : 0
[541339.889868] .throttle_count : 0
[541339.889869] .avg->runnable_avg_sum : 29261
[541339.889871] .avg->runnable_avg_period : 47074
[541339.889872]
[541339.889872] rt_rq[15]:
[541339.889874] .rt_nr_running : 0
[541339.889876] .rt_throttled : 0
[541339.889877] .rt_time : 0.000000
[541339.889879] .rt_runtime : 950.000000
[541339.889881]
[541339.889881] runnable tasks:
[541339.889881] task PID tree-key switches prio exec-runtime sum-exec sum-sleep
[541339.889881] ----------------------------------------------------------------------------------------------------------
[541339.889885] rcuos/1 9 158270372.141879 6784159 120 158270372.141879 191283.190350 540986680.191762 0 /
[541339.889894] watchdog/15 113 -11.972164 135471 0 -11.972164 2826.755372 0.001887 0 /
[541339.889899] migration/15 114 0.000000 184404 0 0.000000 4462.713284 0.001295 0 /
[541339.889904] ksoftirqd/15 115 158270438.945639 416405 120 158270438.945639 5495.442273 541255624.600567 0 /
[541339.889909] kworker/15:0H 117 22305.819539 8 100 22305.819539 0.073754 612094.533122 0 /
[541339.889918] bioset 382 492.220322 2 100 492.220322 0.010432 0.003625 0 /
[541339.889923] kdmflush 398 600.699094 2 100 600.699094 0.039444 0.033085 0 /
[541339.889931] kdmflush 1354 2162.788464 2 100 2162.788464 0.010265 0.003454 0 /
[541339.889936] bioset 1355 2173.523822 2 100 2173.523822 0.013581 0.004078 0 /
[541339.889942] bioset 1396 2191.077025 2 100 2191.077025 0.009617 0.004866 0 /
[541339.889947] bioset 1459 2270.438717 2 100 2270.438717 0.006566 0.003057 0 /
[541339.889952] systemd-logind 1522 1.443702 203 120 1.443702 223.906000 539778240.756382 0 /autogroup-154
[541339.889961] ruby-timer-thr 2715 10.983158 1 120 10.983158 0.031743 0.000000 0 /autogroup-216
[541339.889966] SignalSender 2905 658.255964 113 120 658.255964 0.893670 37.424746 0 /autogroup-216
[541339.889971] SignalSender 2906 11.781953 144 120 11.781953 0.978334 55.208308 0 /autogroup-219
[541339.889976] SignalSender 2908 24.235523 114 120 24.235523 0.741811 35.103124 0 /autogroup-222
[541339.889983] SignalSender 2993 11.574316 145 120 11.574316 1.250080 46.040910 0 /autogroup-233
[541339.889988] SignalSender 3013 11.632049 125 120 11.632049 0.834369 48.552028 0 /autogroup-236
[541339.889994] SignalSender 2972 11.811755 111 120 11.811755 0.860338 50.047600 0 /autogroup-239
[541339.890001] java 3105 37069.058000 1310 120 37069.058000 2162.828918 539899905.614344 0 /autogroup-264
[541339.890010] getty 3057 0.186773 1 120 0.186773 1.419653 0.000000 0 /autogroup-282
[541339.890016] java 3170 11.286514 382 120 11.286514 607.685241 725.151079 0 /autogroup-257
[541339.890023] acpid 3084 0.514521 5 120 0.514521 0.610788 16.831270 0 /autogroup-299
[541339.890036] java 5242 47.436413 2 120 47.436413 0.140328 0.008018 0 /autogroup-358
[541339.890039] java 5368 145.615411 2 120 145.615411 0.196505 0.049171 0 /autogroup-358
[541339.890046] kworker/15:1H 8018 157775100.493951 2831 100 157775100.493951 35.725973 540623202.559835 0 /
[541339.890051] java 415 183690.581984 2642 120 183690.581984 274.385002 123479251.095576 0 /autogroup-8620
[541339.890055] mysqld 19147 302725.832785 259083 120 302725.832785 8312.235558 111087266.878221 0 /autogroup-8936
[541339.890064] jfsCommit 23564 114411213.731039 226664 120 114411213.731039 4600.902877 7731218.500947 0 /
[541339.890070] PassengerHelper 5361 11.071112 1 120 11.071112 0.119696 0.000000 0 /autogroup-11395
[541339.890073] PassengerHelper 5367 47.424822 1 120 47.424822 0.106171 0.000000 0 /autogroup-11395
[541339.890078] apache2 5419 1759.347988 2157 120 1759.347988 77.381521 2153924.792308 0 /autogroup-356
[541339.890082] apache2 5433 1754.323169 1 120 1754.323169 0.019447 0.000000 0 /autogroup-356
[541339.890085] apache2 5392 1742.303729 2 120 1742.303729 18.509728 8.906412 0 /autogroup-356
[541339.890089] apache2 5412 1746.764411 18 120 1746.764411 0.693726 2039462.564073 0 /autogroup-356
[541339.890096] swift-object-se 7256 162300.748453 50604 120 162300.748453 1744.790632 1638634.116236 0 /autogroup-11406
[541339.890100] swift-object-se 7260 162394.554196 49112 120 162394.554196 1724.578169 1640199.437925 0 /autogroup-11406
[541339.890103] swift-object-se 7264 162352.761650 49551 120 162352.761650 1711.587847 1639335.992305 0 /autogroup-11406
[541339.890107] swift-object-se 7266 162388.241844 49966 120 162388.241844 1704.150621 1640062.226163 0 /autogroup-11406
[541339.890111] swift-object-se 7281 162378.227776 51023 120 162378.227776 1747.267345 1638558.987406 0 /autogroup-11406
[541339.890115] swift-object-se 7283 162378.265851 51140 120 162378.265851 1724.680263 1639135.639464 0 /autogroup-11406
[541339.890119] swift-object-se 7664 162386.353600 48689 120 162386.353600 1711.181370 1579280.524841 0 /autogroup-11406
[541339.890123] swift-object-se 7184 162378.402122 51603 120 162378.402122 1761.943698 1642360.668643 0 /autogroup-11406
[541339.890127] swift-object-se 7191 162389.661143 49797 120 162389.661143 1735.067394 1642761.711093 0 /autogroup-11406
[541339.890131] swift-object-se 7677 162394.327953 48390 120 162394.327953 1715.850222 1578791.296829 0 /autogroup-11406
[541339.890134] swift-object-se 7988 162394.338676 49257 120 162394.338676 1747.476152 1578462.233340 0 /autogroup-11406
[541339.890138] swift-object-se 7993 162344.132077 48624 120 162344.132077 1720.147887 1578118.126032 0 /autogroup-11406
[541339.890142] swift-object-se 7293 162388.750314 49461 120 162388.750314 1708.743510 1639301.101026 0 /autogroup-11406
[541339.890146] swift-object-se 7307 162394.392119 50234 120 162394.392119 1731.502344 1639733.091700 0 /autogroup-11406
[541339.890153] swift-object-se 7731 162383.430781 48932 120 162383.430781 1733.285281 1578605.128944 0 /autogroup-11406
[541339.890163] swift-object-se 7716 162392.669803 49471 120 162392.669803 1703.991663 1578473.313683 0 /autogroup-11406
[541339.890167] swift-object-se 7741 162392.778044 51057 120 162392.778044 1753.119680 1577910.695893 0 /autogroup-11406
[541339.890175] Rswift-object-se 6482 162410.234362 909931 120 162410.234362 216211.369434 1101491.310439 0 /autogroup-11406
[541339.890178] swift-object-se 7273 162386.872608 50364 120 162386.872608 1695.989721 1639872.867866 0 /autogroup-11406
[541339.890181] swift-object-se 7276 162347.990695 51042 120 162347.990695 1730.287541 1639444.368079 0 /autogroup-11406
[541339.890185] swift-object-se 7278 162352.792258 49232 120 162352.792258 1698.599034 1639103.771002 0 /autogroup-11406
[541339.890189] swift-object-se 7880 162390.853329 51304 120 162390.853329 1743.833192 1578229.545878 0 /autogroup-11406
[541339.890192] swift-object-se 7883 162373.447494 50815 120 162373.447494 1734.677686 1578642.782786 0 /autogroup-11406
[541339.890196] swift-object-se 7912 162394.366965 49386 120 162394.366965 1717.020817 1579027.507666 0 /autogroup-11406
[541339.890199] swift-object-se 8031 162367.211596 49386 120 162367.211596 1739.305883 1578048.798599 0 /autogroup-11406
[541339.890203] swift-object-se 8036 162352.749349 48919 120 162352.749349 1734.394409 1578786.071921 0 /autogroup-11406
[541339.890206] swift-object-se 8051 162375.890884 48516 120 162375.890884 1732.784888 1577450.848238 0 /autogroup-11406
[541339.890210] swift-object-se 8053 162398.441023 49942 120 162398.441023 1764.070461 1577701.424649 0 /autogroup-11406
[541339.890213] swift-object-se 7539 162397.011258 51455 120 162397.011258 1733.016994 1587635.551059 0 /autogroup-11406
[541339.890218] swift-object-se 7707 162390.431231 49934 120 162390.431231 1714.961552 1579194.370649 0 /autogroup-11406
[541339.890222] swift-object-se 7758 162390.833486 49044 120 162390.833486 1708.101058 1579339.881975 0 /autogroup-11406
[541339.890228] swift-proxy-ser 6521 129526.488902 143950 120 129526.488902 134934.584676 1414356.213520 0 /autogroup-11408
[541339.890231] swift-proxy-ser 6526 129520.995540 122807 120 129520.995540 106196.862172 1492287.962339 0 /autogroup-11408
[541339.890235] swift-proxy-ser 6527 129525.281160 122258 120 129525.281160 110503.974487 1478533.812502 0 /autogroup-11408
[541339.890239] nginx 6719 29781.138703 131492 120 29781.138703 23801.623120 1614437.879571 0 /autogroup-11415
[541339.890244] java 6805 397.257478 120 120 397.257478 16.106398 812900.378962 0 /autogroup-11418
[541339.890247] java 6808 399.216676 186 120 399.216676 58.256973 812863.703164 0 /autogroup-11418
[541339.890253] java 7029 35.619589 4 120 35.619589 0.308120 0.821861 0 /autogroup-11418
[541339.890256] java 7033 35.634946 5 120 35.634946 0.344458 0.637248 0 /autogroup-11418
[541339.890259] java 7041 96.171153 6 120 96.171153 0.343948 0.916052 0 /autogroup-11418
[541339.890263] java 7045 96.138490 5 120 96.138490 0.238644 0.640826 0 /autogroup-11418
[541339.890268] java 6956 49.075344 4 120 49.075344 0.104410 0.014588 0 /autogroup-11424
[541339.890272] java 6967 61.161685 2 120 61.161685 0.086348 0.005797 0 /autogroup-11424
[541339.890276] java 7067 109.880024 10 120 109.880024 0.351893 7.082354 0 /autogroup-11424
[541339.890279] java 7071 186.258884 42 120 186.258884 2.370093 30928.635299 0 /autogroup-11424
[541339.890283] java 7098 12269.859485 449 120 12269.859485 34.382354 1648284.376005 0 /autogroup-11424
[541339.890287] java 7210 12145.192789 99 120 12145.192789 10.929053 1638646.020096 0 /autogroup-11424
[541339.890292] java 7341 176.845674 4 120 176.845674 1.869952 22.954894 0 /autogroup-11424
[541339.890296] java 7562 12364.981886 7254 120 12364.981886 5888.151930 1591156.143430 0 /autogroup-11424
[541339.890300] java 7603 12357.601839 7922 120 12357.601839 6425.625868 1585996.862578 0 /autogroup-11424
[541339.890305] magfsd 7240 151038.131668 53301 120 151038.131668 4550.584112 1633408.390519 0 /autogroup-11432
[541339.890309] magfsd 19762 150962.692607 4683 120 150962.692607 3263.759630 46067.389228 0 /autogroup-11432
[541339.890313] magfsd 19803 150962.611854 4091 120 150962.611854 2770.193958 41412.347618 0 /autogroup-11432
[541339.890316] magfsd 19888 150962.564771 1228 120 150962.564771 837.189040 14872.647220 0 /autogroup-11432
[541339.890321] kworker/15:0 8231 157974833.422509 27711 120 157974833.422509 507.525327 1493648.236976 0 /
[541339.890325] kworker/15:1 8871 158198561.548627 28227 120 158198561.548627 550.547088 1348430.512298 0 /
[541339.890330] kworker/15:3 13963 158270496.912136 13798 120 158270496.912136 259.843226 790302.318084 0 /
[541339.890333] kworker/15:2 16381 158258212.127750 12266 120 158258212.127750 245.626727 478446.117803 0 /
[541339.890337] sleep 16912 5611.420057 2 120 5611.420057 0.482167 0.000000 0 /autogroup-270
[541339.890342] kworker/15:4 19743 158267068.919424 2609 120 158267068.919424 56.814796 57934.177937 0 /
[541339.890346]
* Re: XFS Syncd
2015-06-04 7:26 ` Shrinand Javadekar
@ 2015-06-04 22:08 ` Dave Chinner
0 siblings, 0 replies; 21+ messages in thread
From: Dave Chinner @ 2015-06-04 22:08 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Thu, Jun 04, 2015 at 12:26:19AM -0700, Shrinand Javadekar wrote:
> I made two changes based on the suggestions above:
>
> 1. Reverted the agcount back to the default: 4.
> 2. Bumped the directory block size to 8k (-n size=8k)
>
> This definitely has made things better. My throughput for one run of
> my 40GB (5GB on each disk) test has gone up from ~70MB/s to 88MB/s.
> The pauses started off being very small : ~1 sec. Right now, with 20GB
> data in each disk, I see the pauses are ~4 seconds.
>
> I ran echo w > /proc/sysrq-trigger as soon as the system went into one
> of these pauses. Attached here is the output of dmesg after that. I'm
Ok, it didn't catch anything blocked; it just dumped scheduler info
for each CPU. But the fact that the changes had a positive impact
means we are probably on the right track.
> going to run a test overnight to see how it performs. Especially, how
> big do the pauses get as more and more data is written into the
> system.
>
> Also, unfortunately, I don't have a kernel dev setup ready to try out
> the patch immediately. I will try and setup the environment to try it
> out.
Ok, I'll be doing more testing here on it, but it would be great if
you could see what difference it makes and report back. No hurry,
such a change is probably too late for the next merge window, so
there's plenty of time to get it right...
Thanks for all the time you've spent triaging this problem so far!
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS Syncd
2015-06-04 6:23 ` Dave Chinner
2015-06-04 7:26 ` Shrinand Javadekar
@ 2015-06-05 0:59 ` Shrinand Javadekar
2015-06-05 17:31 ` Shrinand Javadekar
1 sibling, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-06-05 0:59 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
Dave,
I believe this code is slightly different from the one I have (kernel
v3.16.0). Can you give me a patch for kernel v3.16.0? I have a working
setup to try this out.
http://lxr.free-electrons.com/source/fs/xfs/xfs_buf.c?v=3.16
Thanks in advance.
-Shri
On Wed, Jun 3, 2015 at 11:23 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Thu, Jun 04, 2015 at 12:03:39PM +1000, Dave Chinner wrote:
>> Fixing this requires a tweak to the algorithm in
>> __xfs_buf_delwri_submit() so that we don't lock an entire list of
>> thousands of IOs before starting submission. In the mean time,
>> reducing the number of AGs will reduce the impact of this because
>> the delayed write submission code will skip buffers that are already
>> locked or pinned in memory, and hence an AG under modification at
>> the time submission occurs will be skipped by the delwri code.
>
> You might like to try the patch below on a test machine to see if
> it helps with the problem.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> xfs: reduce lock hold times in buffer writeback
>
> From: Dave Chinner <dchinner@redhat.com>
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
> fs/xfs/xfs_buf.c | 80 ++++++++++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 61 insertions(+), 19 deletions(-)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index bbe4e9e..8d2cc36 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -1768,15 +1768,63 @@ xfs_buf_cmp(
> return 0;
> }
>
> +static void
> +xfs_buf_delwri_submit_buffers(
> + struct list_head *buffer_list,
> + struct list_head *io_list,
> + bool wait)
> +{
> + struct xfs_buf *bp, *n;
> + struct blk_plug plug;
> +
> + blk_start_plug(&plug);
> + list_for_each_entry_safe(bp, n, buffer_list, b_list) {
> + bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC |
> + XBF_WRITE_FAIL);
> + bp->b_flags |= XBF_WRITE | XBF_ASYNC;
> +
> + /*
> + * We do all IO submission async. This means if we need
> + * to wait for IO completion we need to take an extra
> + * reference so the buffer is still valid on the other
> + * side. We need to move the buffer onto the io_list
> + * at this point so the caller can still access it.
> + */
> + if (wait) {
> + xfs_buf_hold(bp);
> + list_move_tail(&bp->b_list, io_list);
> + } else
> + list_del_init(&bp->b_list);
> +
> + xfs_buf_submit(bp);
> + }
> + blk_finish_plug(&plug);
> +}
> +
> +/*
> + * submit buffers for write.
> + *
> + * When we have a large buffer list, we do not want to hold all the buffers
> + * locked while we block on the request queue waiting for IO dispatch. To avoid
> + * this problem, we lock and submit buffers in groups of 50, thereby minimising
> + * the lock hold times for lists which may contain thousands of objects.
> + *
> + * To do this, we sort the buffer list before we walk the list to lock and
> + * submit buffers, and we plug and unplug around each group of buffers we
> + * submit.
> + */
> static int
> __xfs_buf_delwri_submit(
> struct list_head *buffer_list,
> struct list_head *io_list,
> bool wait)
> {
> - struct blk_plug plug;
> struct xfs_buf *bp, *n;
> + LIST_HEAD (submit_list);
> int pinned = 0;
> + int count = 0;
> +
> + list_sort(NULL, buffer_list, xfs_buf_cmp);
>
> list_for_each_entry_safe(bp, n, buffer_list, b_list) {
> if (!wait) {
> @@ -1802,30 +1850,24 @@ __xfs_buf_delwri_submit(
> continue;
> }
>
> - list_move_tail(&bp->b_list, io_list);
> + list_move_tail(&bp->b_list, &submit_list);
> trace_xfs_buf_delwri_split(bp, _RET_IP_);
> - }
> -
> - list_sort(NULL, io_list, xfs_buf_cmp);
> -
> - blk_start_plug(&plug);
> - list_for_each_entry_safe(bp, n, io_list, b_list) {
> - bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL);
> - bp->b_flags |= XBF_WRITE | XBF_ASYNC;
>
> /*
> - * we do all Io submission async. This means if we need to wait
> - * for IO completion we need to take an extra reference so the
> - * buffer is still valid on the other side.
> + * We do small batches of IO submission to minimise lock hold
> + * time and unnecessary writeback of buffers that are hot and
> + * would otherwise be relogged and hence not require immediate
> + * writeback.
> */
> - if (wait)
> - xfs_buf_hold(bp);
> - else
> - list_del_init(&bp->b_list);
> + if (count++ < 50)
> + continue;
>
> - xfs_buf_submit(bp);
> + xfs_buf_delwri_submit_buffers(&submit_list, io_list, wait);
> + count = 0;
> }
> - blk_finish_plug(&plug);
> +
> + if (!list_empty(&submit_list))
> + xfs_buf_delwri_submit_buffers(&submit_list, io_list, wait);
>
> return pinned;
> }
* Re: XFS Syncd
2015-06-05 0:59 ` Shrinand Javadekar
@ 2015-06-05 17:31 ` Shrinand Javadekar
2015-06-08 21:56 ` Shrinand Javadekar
0 siblings, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-06-05 17:31 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
The file xfs_buf.c seems to have gone through a few revisions. I tried
to understand the code and port the changes to the 3.16.0 kernel, but
it didn't quite work out: XFS crashed while unmounting the disks.
On Thu, Jun 4, 2015 at 5:59 PM, Shrinand Javadekar
<shrinand@maginatics.com> wrote:
> Dave,
>
> I believe this code is slightly different from the one I have (kernel
> v3.16.0). Can you give me a patch for kernel v3.16.0? I have a working
> setup to try this out.
>
> http://lxr.free-electrons.com/source/fs/xfs/xfs_buf.c?v=3.16
>
> Thanks in advance.
> -Shri
>
>
> On Wed, Jun 3, 2015 at 11:23 PM, Dave Chinner <david@fromorbit.com> wrote:
>> On Thu, Jun 04, 2015 at 12:03:39PM +1000, Dave Chinner wrote:
>>> Fixing this requires a tweak to the algorithm in
>>> __xfs_buf_delwri_submit() so that we don't lock an entire list of
>>> thousands of IOs before starting submission. In the mean time,
>>> reducing the number of AGs will reduce the impact of this because
>>> the delayed write submission code will skip buffers that are already
>>> locked or pinned in memory, and hence an AG under modification at
>>> the time submission occurs will be skipped by the delwri code.
>>
>> You might like to try the patch below on a test machine to see if
>> it helps with the problem.
>>
>> Cheers,
>>
>> Dave.
>> --
>> Dave Chinner
>> david@fromorbit.com
>>
>> [snip]
* Re: XFS Syncd
2015-06-05 17:31 ` Shrinand Javadekar
@ 2015-06-08 21:56 ` Shrinand Javadekar
2015-06-09 23:12 ` Dave Chinner
0 siblings, 1 reply; 21+ messages in thread
From: Shrinand Javadekar @ 2015-06-08 21:56 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1: Type: text/plain, Size: 6922 bytes --]
I gave this another shot after understanding it better. I now have a
version that doesn't crash. I'm attaching the diff and the new
xfs_buf.c file.
However, in the new version the io_list doesn't get populated at
all in __xfs_buf_delwri_submit(). I haven't completely familiarized
myself with what callers do with this list: callers initialize it,
pass a pointer to __xfs_buf_delwri_submit(), and expect a populated
list back.
Nonetheless, I ran my experiments after building and inserting the XFS
module with this change. Strangely enough, I see the performance going
down by ~25% compared to the original XFS module.
On Fri, Jun 5, 2015 at 10:31 AM, Shrinand Javadekar
<shrinand@maginatics.com> wrote:
> The file xfs_buf.c seems to have gone through a few revisions. I tried
> to understand the code and make the changes in the 3.16.0 kernel but
> it didn't quite work out. XFS crashed while unmounting the disks.
>
> On Thu, Jun 4, 2015 at 5:59 PM, Shrinand Javadekar
> <shrinand@maginatics.com> wrote:
>> Dave,
>>
>> I believe this code is slightly different from the one I have (kernel
>> v3.16.0). Can you give me a patch for kernel v3.16.0? I have a working
>> setup to try this out.
>>
>> http://lxr.free-electrons.com/source/fs/xfs/xfs_buf.c?v=3.16
>>
>> Thanks in advance.
>> -Shri
>>
>>
>> On Wed, Jun 3, 2015 at 11:23 PM, Dave Chinner <david@fromorbit.com> wrote:
>>> On Thu, Jun 04, 2015 at 12:03:39PM +1000, Dave Chinner wrote:
>>>> Fixing this requires a tweak to the algorithm in
>>>> __xfs_buf_delwri_submit() so that we don't lock an entire list of
>>>> thousands of IOs before starting submission. In the mean time,
>>>> reducing the number of AGs will reduce the impact of this because
>>>> the delayed write submission code will skip buffers that are already
>>>> locked or pinned in memory, and hence an AG under modification at
>>>> the time submission occurs will be skipped by the delwri code.
>>>
>>> You might like to try the patch below on a test machine to see if
>>> it helps with the problem.
>>>
>>> Cheers,
>>>
>>> Dave.
>>> --
>>> Dave Chinner
>>> david@fromorbit.com
>>>
>>> [snip]
[-- Attachment #2: xfs_diff --]
[-- Type: application/octet-stream, Size: 1795 bytes --]
1758a1759,1782
>
> static void
> xfs_buf_delwri_submit_buffers(
> struct list_head *buffer_list,
> bool wait)
> {
> struct xfs_buf *bp, *n;
> struct blk_plug plug;
>
> blk_start_plug(&plug);
> list_for_each_entry_safe(bp, n, buffer_list, b_list) {
> bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL);
> bp->b_flags |= XBF_WRITE;
>
> if (!wait) {
> bp->b_flags |= XBF_ASYNC;
> list_del_init(&bp->b_list);
> }
> xfs_bdstrat_cb(bp);
> }
> blk_finish_plug(&plug);
> }
>
>
1765d1788
< struct blk_plug plug;
1767a1791,1794
> LIST_HEAD (submit_list);
> int count = 0;
>
> list_sort(NULL, buffer_list, xfs_buf_cmp);
1793c1820,1821
< list_move_tail(&bp->b_list, io_list);
---
> //list_move_tail(&bp->b_list, io_list);
> list_move_tail(&bp->b_list, &submit_list);
1795c1823,1825
< }
---
> if (count++ < 50) {
> continue;
> }
1797c1827,1829
< list_sort(NULL, io_list, xfs_buf_cmp);
---
> xfs_buf_delwri_submit_buffers(&submit_list, wait);
> count = 0;
> }
1799,1802c1831
< blk_start_plug(&plug);
< list_for_each_entry_safe(bp, n, io_list, b_list) {
< bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL);
< bp->b_flags |= XBF_WRITE;
---
> // list_sort(NULL, io_list, xfs_buf_cmp);
1804,1810c1833,1835
< if (!wait) {
< bp->b_flags |= XBF_ASYNC;
< list_del_init(&bp->b_list);
< }
< xfs_bdstrat_cb(bp);
< }
< blk_finish_plug(&plug);
---
> if (!list_empty(&submit_list)) {
> xfs_buf_delwri_submit_buffers(&submit_list, wait);
> }
[-- Attachment #3: xfs_buf.c --]
[-- Type: text/x-csrc, Size: 44298 bytes --]
/*
* Copyright (c) 2000-2006 Silicon Graphics, Inc.
* All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it would be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "xfs.h"
#include <linux/stddef.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/init.h>
#include <linux/vmalloc.h>
#include <linux/bio.h>
#include <linux/sysctl.h>
#include <linux/proc_fs.h>
#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/blkdev.h>
#include <linux/hash.h>
#include <linux/kthread.h>
#include <linux/migrate.h>
#include <linux/backing-dev.h>
#include <linux/freezer.h>
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_sb.h"
#include "xfs_ag.h"
#include "xfs_mount.h"
#include "xfs_trace.h"
#include "xfs_log.h"
static kmem_zone_t *xfs_buf_zone;
static struct workqueue_struct *xfslogd_workqueue;
#ifdef XFS_BUF_LOCK_TRACKING
# define XB_SET_OWNER(bp) ((bp)->b_last_holder = current->pid)
# define XB_CLEAR_OWNER(bp) ((bp)->b_last_holder = -1)
# define XB_GET_OWNER(bp) ((bp)->b_last_holder)
#else
# define XB_SET_OWNER(bp) do { } while (0)
# define XB_CLEAR_OWNER(bp) do { } while (0)
# define XB_GET_OWNER(bp) do { } while (0)
#endif
#define xb_to_gfp(flags) \
((((flags) & XBF_READ_AHEAD) ? __GFP_NORETRY : GFP_NOFS) | __GFP_NOWARN)
static inline int
xfs_buf_is_vmapped(
struct xfs_buf *bp)
{
/*
* Return true if the buffer is vmapped.
*
* b_addr is null if the buffer is not mapped, but the code is clever
* enough to know it doesn't have to map a single page, so the check has
* to be both for b_addr and bp->b_page_count > 1.
*/
return bp->b_addr && bp->b_page_count > 1;
}
static inline int
xfs_buf_vmap_len(
struct xfs_buf *bp)
{
return (bp->b_page_count * PAGE_SIZE) - bp->b_offset;
}
/*
* When we mark a buffer stale, we remove the buffer from the LRU and clear the
* b_lru_ref count so that the buffer is freed immediately when the buffer
* reference count falls to zero. If the buffer is already on the LRU, we need
* to remove the reference that LRU holds on the buffer.
*
* This prevents build-up of stale buffers on the LRU.
*/
void
xfs_buf_stale(
struct xfs_buf *bp)
{
ASSERT(xfs_buf_islocked(bp));
bp->b_flags |= XBF_STALE;
/*
* Clear the delwri status so that a delwri queue walker will not
* flush this buffer to disk now that it is stale. The delwri queue has
* a reference to the buffer, so this is safe to do.
*/
bp->b_flags &= ~_XBF_DELWRI_Q;
spin_lock(&bp->b_lock);
atomic_set(&bp->b_lru_ref, 0);
if (!(bp->b_state & XFS_BSTATE_DISPOSE) &&
(list_lru_del(&bp->b_target->bt_lru, &bp->b_lru)))
atomic_dec(&bp->b_hold);
ASSERT(atomic_read(&bp->b_hold) >= 1);
spin_unlock(&bp->b_lock);
}
static int
xfs_buf_get_maps(
struct xfs_buf *bp,
int map_count)
{
ASSERT(bp->b_maps == NULL);
bp->b_map_count = map_count;
if (map_count == 1) {
bp->b_maps = &bp->__b_map;
return 0;
}
bp->b_maps = kmem_zalloc(map_count * sizeof(struct xfs_buf_map),
KM_NOFS);
if (!bp->b_maps)
return ENOMEM;
return 0;
}
/*
* Frees b_pages if it was allocated.
*/
static void
xfs_buf_free_maps(
struct xfs_buf *bp)
{
if (bp->b_maps != &bp->__b_map) {
kmem_free(bp->b_maps);
bp->b_maps = NULL;
}
}
struct xfs_buf *
_xfs_buf_alloc(
struct xfs_buftarg *target,
struct xfs_buf_map *map,
int nmaps,
xfs_buf_flags_t flags)
{
struct xfs_buf *bp;
int error;
int i;
bp = kmem_zone_zalloc(xfs_buf_zone, KM_NOFS);
if (unlikely(!bp))
return NULL;
/*
* We don't want certain flags to appear in b_flags unless they are
* specifically set by later operations on the buffer.
*/
flags &= ~(XBF_UNMAPPED | XBF_TRYLOCK | XBF_ASYNC | XBF_READ_AHEAD);
atomic_set(&bp->b_hold, 1);
atomic_set(&bp->b_lru_ref, 1);
init_completion(&bp->b_iowait);
INIT_LIST_HEAD(&bp->b_lru);
INIT_LIST_HEAD(&bp->b_list);
RB_CLEAR_NODE(&bp->b_rbnode);
sema_init(&bp->b_sema, 0); /* held, no waiters */
spin_lock_init(&bp->b_lock);
XB_SET_OWNER(bp);
bp->b_target = target;
bp->b_flags = flags;
/*
* Set length and io_length to the same value initially.
* I/O routines should use io_length, which will be the same in
* most cases but may be reset (e.g. XFS recovery).
*/
error = xfs_buf_get_maps(bp, nmaps);
if (error) {
kmem_zone_free(xfs_buf_zone, bp);
return NULL;
}
bp->b_bn = map[0].bm_bn;
bp->b_length = 0;
for (i = 0; i < nmaps; i++) {
bp->b_maps[i].bm_bn = map[i].bm_bn;
bp->b_maps[i].bm_len = map[i].bm_len;
bp->b_length += map[i].bm_len;
}
bp->b_io_length = bp->b_length;
atomic_set(&bp->b_pin_count, 0);
init_waitqueue_head(&bp->b_waiters);
XFS_STATS_INC(xb_create);
trace_xfs_buf_init(bp, _RET_IP_);
return bp;
}
/*
* Allocate a page array capable of holding a specified number
* of pages, and point the page buf at it.
*/
STATIC int
_xfs_buf_get_pages(
xfs_buf_t *bp,
int page_count)
{
/* Make sure that we have a page list */
if (bp->b_pages == NULL) {
bp->b_page_count = page_count;
if (page_count <= XB_PAGES) {
bp->b_pages = bp->b_page_array;
} else {
bp->b_pages = kmem_alloc(sizeof(struct page *) *
page_count, KM_NOFS);
if (bp->b_pages == NULL)
return -ENOMEM;
}
memset(bp->b_pages, 0, sizeof(struct page *) * page_count);
}
return 0;
}
/*
* Frees b_pages if it was allocated.
*/
STATIC void
_xfs_buf_free_pages(
xfs_buf_t *bp)
{
if (bp->b_pages != bp->b_page_array) {
kmem_free(bp->b_pages);
bp->b_pages = NULL;
}
}
/*
* Releases the specified buffer.
*
* The modification state of any associated pages is left unchanged.
* The buffer must not be on any hash - use xfs_buf_rele instead for
* hashed and refcounted buffers
*/
void
xfs_buf_free(
xfs_buf_t *bp)
{
trace_xfs_buf_free(bp, _RET_IP_);
ASSERT(list_empty(&bp->b_lru));
if (bp->b_flags & _XBF_PAGES) {
uint i;
if (xfs_buf_is_vmapped(bp))
vm_unmap_ram(bp->b_addr - bp->b_offset,
bp->b_page_count);
for (i = 0; i < bp->b_page_count; i++) {
struct page *page = bp->b_pages[i];
__free_page(page);
}
} else if (bp->b_flags & _XBF_KMEM)
kmem_free(bp->b_addr);
_xfs_buf_free_pages(bp);
xfs_buf_free_maps(bp);
kmem_zone_free(xfs_buf_zone, bp);
}
/*
* Allocates all the pages for the buffer in question and builds its page list.
*/
STATIC int
xfs_buf_allocate_memory(
xfs_buf_t *bp,
uint flags)
{
size_t size;
size_t nbytes, offset;
gfp_t gfp_mask = xb_to_gfp(flags);
unsigned short page_count, i;
xfs_off_t start, end;
int error;
/*
* for buffers that are contained within a single page, just allocate
* the memory from the heap - there's no need for the complexity of
* page arrays to keep allocation down to order 0.
*/
size = BBTOB(bp->b_length);
if (size < PAGE_SIZE) {
bp->b_addr = kmem_alloc(size, KM_NOFS);
if (!bp->b_addr) {
/* low memory - use alloc_page loop instead */
goto use_alloc_page;
}
if (((unsigned long)(bp->b_addr + size - 1) & PAGE_MASK) !=
((unsigned long)bp->b_addr & PAGE_MASK)) {
/* b_addr spans two pages - use alloc_page instead */
kmem_free(bp->b_addr);
bp->b_addr = NULL;
goto use_alloc_page;
}
bp->b_offset = offset_in_page(bp->b_addr);
bp->b_pages = bp->b_page_array;
bp->b_pages[0] = virt_to_page(bp->b_addr);
bp->b_page_count = 1;
bp->b_flags |= _XBF_KMEM;
return 0;
}
use_alloc_page:
start = BBTOB(bp->b_maps[0].bm_bn) >> PAGE_SHIFT;
end = (BBTOB(bp->b_maps[0].bm_bn + bp->b_length) + PAGE_SIZE - 1)
>> PAGE_SHIFT;
page_count = end - start;
error = _xfs_buf_get_pages(bp, page_count);
if (unlikely(error))
return error;
offset = bp->b_offset;
bp->b_flags |= _XBF_PAGES;
for (i = 0; i < bp->b_page_count; i++) {
struct page *page;
uint retries = 0;
retry:
page = alloc_page(gfp_mask);
if (unlikely(page == NULL)) {
if (flags & XBF_READ_AHEAD) {
bp->b_page_count = i;
error = ENOMEM;
goto out_free_pages;
}
/*
* This could deadlock.
*
* But until all the XFS lowlevel code is revamped to
* handle buffer allocation failures we can't do much.
*/
if (!(++retries % 100))
xfs_err(NULL,
"possible memory allocation deadlock in %s (mode:0x%x)",
__func__, gfp_mask);
XFS_STATS_INC(xb_page_retries);
congestion_wait(BLK_RW_ASYNC, HZ/50);
goto retry;
}
XFS_STATS_INC(xb_page_found);
nbytes = min_t(size_t, size, PAGE_SIZE - offset);
size -= nbytes;
bp->b_pages[i] = page;
offset = 0;
}
return 0;
out_free_pages:
for (i = 0; i < bp->b_page_count; i++)
__free_page(bp->b_pages[i]);
return error;
}
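As an aside for anyone tracing the arithmetic above: the start/end page computation in xfs_buf_allocate_memory() can be checked in isolation. This is a standalone sketch assuming 512-byte basic blocks (BBSHIFT == 9) and 4k pages; `PG_SHIFT`, `PG_SIZE` and `buf_page_count()` are invented names for illustration, not kernel symbols:

```c
#include <assert.h>

#define BBSHIFT   9			/* XFS basic blocks are 512 bytes */
#define PG_SHIFT  12			/* assume 4k pages for this sketch */
#define PG_SIZE   (1UL << PG_SHIFT)
#define BBTOB(bb) ((unsigned long)(bb) << BBSHIFT)

/*
 * Number of pages spanned by a buffer starting at basic block 'bm_bn'
 * and running for 'b_length' basic blocks -- the same start/end
 * computation xfs_buf_allocate_memory() does before _xfs_buf_get_pages().
 * Rounding start down and end up is what makes an unaligned buffer
 * straddle one extra page.
 */
static unsigned long buf_page_count(unsigned long bm_bn,
				    unsigned long b_length)
{
	unsigned long start = BBTOB(bm_bn) >> PG_SHIFT;
	unsigned long end = (BBTOB(bm_bn + b_length) + PG_SIZE - 1)
				>> PG_SHIFT;
	return end - start;
}
```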
/*
* Map buffer into kernel address-space if necessary.
*/
STATIC int
_xfs_buf_map_pages(
xfs_buf_t *bp,
uint flags)
{
ASSERT(bp->b_flags & _XBF_PAGES);
if (bp->b_page_count == 1) {
/* A single page buffer is always mappable */
bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset;
} else if (flags & XBF_UNMAPPED) {
bp->b_addr = NULL;
} else {
int retried = 0;
unsigned noio_flag;
/*
* vm_map_ram() will allocate auxiliary structures (e.g.
* pagetables) with GFP_KERNEL, yet we are likely to be under
* GFP_NOFS context here. Hence we need to tell memory reclaim
* that we are in such a context via PF_MEMALLOC_NOIO to prevent
* memory reclaim re-entering the filesystem here and
* potentially deadlocking.
*/
noio_flag = memalloc_noio_save();
do {
bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
-1, PAGE_KERNEL);
if (bp->b_addr)
break;
vm_unmap_aliases();
} while (retried++ <= 1);
memalloc_noio_restore(noio_flag);
if (!bp->b_addr)
return -ENOMEM;
bp->b_addr += bp->b_offset;
}
return 0;
}
/*
* Finding and Reading Buffers
*/
/*
* Looks up, and creates if absent, a lockable buffer for
* a given range of an inode. The buffer is returned
* locked. No I/O is implied by this call.
*/
xfs_buf_t *
_xfs_buf_find(
struct xfs_buftarg *btp,
struct xfs_buf_map *map,
int nmaps,
xfs_buf_flags_t flags,
xfs_buf_t *new_bp)
{
size_t numbytes;
struct xfs_perag *pag;
struct rb_node **rbp;
struct rb_node *parent;
xfs_buf_t *bp;
xfs_daddr_t blkno = map[0].bm_bn;
xfs_daddr_t eofs;
int numblks = 0;
int i;
for (i = 0; i < nmaps; i++)
numblks += map[i].bm_len;
numbytes = BBTOB(numblks);
/* Check for IOs smaller than the sector size / not sector aligned */
ASSERT(!(numbytes < btp->bt_meta_sectorsize));
ASSERT(!(BBTOB(blkno) & (xfs_off_t)btp->bt_meta_sectormask));
/*
* Corrupted block numbers can get through to here, unfortunately, so we
* have to check that the buffer falls within the filesystem bounds.
*/
eofs = XFS_FSB_TO_BB(btp->bt_mount, btp->bt_mount->m_sb.sb_dblocks);
if (blkno >= eofs) {
/*
* XXX (dgc): we should really be returning EFSCORRUPTED here,
* but none of the higher level infrastructure supports
* returning a specific error on buffer lookup failures.
*/
xfs_alert(btp->bt_mount,
"%s: Block out of range: block 0x%llx, EOFS 0x%llx ",
__func__, blkno, eofs);
WARN_ON(1);
return NULL;
}
/* get tree root */
pag = xfs_perag_get(btp->bt_mount,
xfs_daddr_to_agno(btp->bt_mount, blkno));
/* walk tree */
spin_lock(&pag->pag_buf_lock);
rbp = &pag->pag_buf_tree.rb_node;
parent = NULL;
bp = NULL;
while (*rbp) {
parent = *rbp;
bp = rb_entry(parent, struct xfs_buf, b_rbnode);
if (blkno < bp->b_bn)
rbp = &(*rbp)->rb_left;
else if (blkno > bp->b_bn)
rbp = &(*rbp)->rb_right;
else {
/*
* found a block number match. If the range doesn't
* match, the only way this is allowed is if the buffer
* in the cache is stale and the transaction that made
* it stale has not yet committed. i.e. we are
* reallocating a busy extent. Skip this buffer and
* continue searching to the right for an exact match.
*/
if (bp->b_length != numblks) {
ASSERT(bp->b_flags & XBF_STALE);
rbp = &(*rbp)->rb_right;
continue;
}
atomic_inc(&bp->b_hold);
goto found;
}
}
/* No match found */
if (new_bp) {
rb_link_node(&new_bp->b_rbnode, parent, rbp);
rb_insert_color(&new_bp->b_rbnode, &pag->pag_buf_tree);
/* the buffer keeps the perag reference until it is freed */
new_bp->b_pag = pag;
spin_unlock(&pag->pag_buf_lock);
} else {
XFS_STATS_INC(xb_miss_locked);
spin_unlock(&pag->pag_buf_lock);
xfs_perag_put(pag);
}
return new_bp;
found:
spin_unlock(&pag->pag_buf_lock);
xfs_perag_put(pag);
if (!xfs_buf_trylock(bp)) {
if (flags & XBF_TRYLOCK) {
xfs_buf_rele(bp);
XFS_STATS_INC(xb_busy_locked);
return NULL;
}
xfs_buf_lock(bp);
XFS_STATS_INC(xb_get_locked_waited);
}
/*
* if the buffer is stale, clear all the external state associated with
* it. We need to keep flags such as how we allocated the buffer memory
* intact here.
*/
if (bp->b_flags & XBF_STALE) {
ASSERT((bp->b_flags & _XBF_DELWRI_Q) == 0);
ASSERT(bp->b_iodone == NULL);
bp->b_flags &= _XBF_KMEM | _XBF_PAGES;
bp->b_ops = NULL;
}
trace_xfs_buf_find(bp, flags, _RET_IP_);
XFS_STATS_INC(xb_get_locked);
return bp;
}
/*
* Assembles a buffer covering the specified range. The code is optimised for
* cache hits, as metadata intensive workloads will see 3 orders of magnitude
* more hits than misses.
*/
struct xfs_buf *
xfs_buf_get_map(
struct xfs_buftarg *target,
struct xfs_buf_map *map,
int nmaps,
xfs_buf_flags_t flags)
{
struct xfs_buf *bp;
struct xfs_buf *new_bp;
int error = 0;
bp = _xfs_buf_find(target, map, nmaps, flags, NULL);
if (likely(bp))
goto found;
new_bp = _xfs_buf_alloc(target, map, nmaps, flags);
if (unlikely(!new_bp))
return NULL;
error = xfs_buf_allocate_memory(new_bp, flags);
if (error) {
xfs_buf_free(new_bp);
return NULL;
}
bp = _xfs_buf_find(target, map, nmaps, flags, new_bp);
if (!bp) {
xfs_buf_free(new_bp);
return NULL;
}
if (bp != new_bp)
xfs_buf_free(new_bp);
found:
if (!bp->b_addr) {
error = _xfs_buf_map_pages(bp, flags);
if (unlikely(error)) {
xfs_warn(target->bt_mount,
"%s: failed to map pages\n", __func__);
xfs_buf_relse(bp);
return NULL;
}
}
XFS_STATS_INC(xb_get);
trace_xfs_buf_get(bp, flags, _RET_IP_);
return bp;
}
STATIC int
_xfs_buf_read(
xfs_buf_t *bp,
xfs_buf_flags_t flags)
{
ASSERT(!(flags & XBF_WRITE));
ASSERT(bp->b_maps[0].bm_bn != XFS_BUF_DADDR_NULL);
bp->b_flags &= ~(XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD);
bp->b_flags |= flags & (XBF_READ | XBF_ASYNC | XBF_READ_AHEAD);
xfs_buf_iorequest(bp);
if (flags & XBF_ASYNC)
return 0;
return xfs_buf_iowait(bp);
}
xfs_buf_t *
xfs_buf_read_map(
struct xfs_buftarg *target,
struct xfs_buf_map *map,
int nmaps,
xfs_buf_flags_t flags,
const struct xfs_buf_ops *ops)
{
struct xfs_buf *bp;
flags |= XBF_READ;
bp = xfs_buf_get_map(target, map, nmaps, flags);
if (bp) {
trace_xfs_buf_read(bp, flags, _RET_IP_);
if (!XFS_BUF_ISDONE(bp)) {
XFS_STATS_INC(xb_get_read);
bp->b_ops = ops;
_xfs_buf_read(bp, flags);
} else if (flags & XBF_ASYNC) {
/*
* Read ahead call which is already satisfied,
* drop the buffer
*/
xfs_buf_relse(bp);
return NULL;
} else {
/* We do not want read in the flags */
bp->b_flags &= ~XBF_READ;
}
}
return bp;
}
/*
* If we are not low on memory then do the readahead in a deadlock
* safe manner.
*/
void
xfs_buf_readahead_map(
struct xfs_buftarg *target,
struct xfs_buf_map *map,
int nmaps,
const struct xfs_buf_ops *ops)
{
if (bdi_read_congested(target->bt_bdi))
return;
xfs_buf_read_map(target, map, nmaps,
XBF_TRYLOCK|XBF_ASYNC|XBF_READ_AHEAD, ops);
}
/*
* Read an uncached buffer from disk. Allocates and returns a locked
* buffer containing the disk contents or nothing.
*/
struct xfs_buf *
xfs_buf_read_uncached(
struct xfs_buftarg *target,
xfs_daddr_t daddr,
size_t numblks,
int flags,
const struct xfs_buf_ops *ops)
{
struct xfs_buf *bp;
bp = xfs_buf_get_uncached(target, numblks, flags);
if (!bp)
return NULL;
/* set up the buffer for a read IO */
ASSERT(bp->b_map_count == 1);
bp->b_bn = daddr;
bp->b_maps[0].bm_bn = daddr;
bp->b_flags |= XBF_READ;
bp->b_ops = ops;
if (XFS_FORCED_SHUTDOWN(target->bt_mount)) {
xfs_buf_relse(bp);
return NULL;
}
xfs_buf_iorequest(bp);
xfs_buf_iowait(bp);
return bp;
}
/*
* Return a buffer allocated as an empty buffer and associated to external
* memory via xfs_buf_associate_memory() back to its empty state.
*/
void
xfs_buf_set_empty(
struct xfs_buf *bp,
size_t numblks)
{
if (bp->b_pages)
_xfs_buf_free_pages(bp);
bp->b_pages = NULL;
bp->b_page_count = 0;
bp->b_addr = NULL;
bp->b_length = numblks;
bp->b_io_length = numblks;
ASSERT(bp->b_map_count == 1);
bp->b_bn = XFS_BUF_DADDR_NULL;
bp->b_maps[0].bm_bn = XFS_BUF_DADDR_NULL;
bp->b_maps[0].bm_len = bp->b_length;
}
static inline struct page *
mem_to_page(
void *addr)
{
if ((!is_vmalloc_addr(addr))) {
return virt_to_page(addr);
} else {
return vmalloc_to_page(addr);
}
}
int
xfs_buf_associate_memory(
xfs_buf_t *bp,
void *mem,
size_t len)
{
int rval;
int i = 0;
unsigned long pageaddr;
unsigned long offset;
size_t buflen;
int page_count;
pageaddr = (unsigned long)mem & PAGE_MASK;
offset = (unsigned long)mem - pageaddr;
buflen = PAGE_ALIGN(len + offset);
page_count = buflen >> PAGE_SHIFT;
/* Free any previous set of page pointers */
if (bp->b_pages)
_xfs_buf_free_pages(bp);
bp->b_pages = NULL;
bp->b_addr = mem;
rval = _xfs_buf_get_pages(bp, page_count);
if (rval)
return rval;
bp->b_offset = offset;
for (i = 0; i < bp->b_page_count; i++) {
bp->b_pages[i] = mem_to_page((void *)pageaddr);
pageaddr += PAGE_SIZE;
}
bp->b_io_length = BTOBB(len);
bp->b_length = BTOBB(buflen);
return 0;
}
xfs_buf_t *
xfs_buf_get_uncached(
struct xfs_buftarg *target,
size_t numblks,
int flags)
{
unsigned long page_count;
int error, i;
struct xfs_buf *bp;
DEFINE_SINGLE_BUF_MAP(map, XFS_BUF_DADDR_NULL, numblks);
bp = _xfs_buf_alloc(target, &map, 1, 0);
if (unlikely(bp == NULL))
goto fail;
page_count = PAGE_ALIGN(numblks << BBSHIFT) >> PAGE_SHIFT;
error = _xfs_buf_get_pages(bp, page_count);
if (error)
goto fail_free_buf;
for (i = 0; i < page_count; i++) {
bp->b_pages[i] = alloc_page(xb_to_gfp(flags));
if (!bp->b_pages[i])
goto fail_free_mem;
}
bp->b_flags |= _XBF_PAGES;
error = _xfs_buf_map_pages(bp, 0);
if (unlikely(error)) {
xfs_warn(target->bt_mount,
"%s: failed to map pages", __func__);
goto fail_free_mem;
}
trace_xfs_buf_get_uncached(bp, _RET_IP_);
return bp;
fail_free_mem:
while (--i >= 0)
__free_page(bp->b_pages[i]);
_xfs_buf_free_pages(bp);
fail_free_buf:
xfs_buf_free_maps(bp);
kmem_zone_free(xfs_buf_zone, bp);
fail:
return NULL;
}
/*
* Increment reference count on buffer, to hold the buffer concurrently
* with another thread which may release (free) the buffer asynchronously.
* Must hold the buffer already to call this function.
*/
void
xfs_buf_hold(
xfs_buf_t *bp)
{
trace_xfs_buf_hold(bp, _RET_IP_);
atomic_inc(&bp->b_hold);
}
/*
* Releases a hold on the specified buffer. If the
* hold count is 1, calls xfs_buf_free.
*/
void
xfs_buf_rele(
xfs_buf_t *bp)
{
struct xfs_perag *pag = bp->b_pag;
trace_xfs_buf_rele(bp, _RET_IP_);
if (!pag) {
ASSERT(list_empty(&bp->b_lru));
ASSERT(RB_EMPTY_NODE(&bp->b_rbnode));
if (atomic_dec_and_test(&bp->b_hold))
xfs_buf_free(bp);
return;
}
ASSERT(!RB_EMPTY_NODE(&bp->b_rbnode));
ASSERT(atomic_read(&bp->b_hold) > 0);
if (atomic_dec_and_lock(&bp->b_hold, &pag->pag_buf_lock)) {
spin_lock(&bp->b_lock);
if (!(bp->b_flags & XBF_STALE) && atomic_read(&bp->b_lru_ref)) {
/*
* If the buffer is added to the LRU take a new
* reference to the buffer for the LRU and clear the
* (now stale) dispose list state flag
*/
if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) {
bp->b_state &= ~XFS_BSTATE_DISPOSE;
atomic_inc(&bp->b_hold);
}
spin_unlock(&bp->b_lock);
spin_unlock(&pag->pag_buf_lock);
} else {
/*
* most of the time buffers will already be removed from
* the LRU, so optimise that case by checking for the
* XFS_BSTATE_DISPOSE flag indicating the last list the
* buffer was on was the disposal list
*/
if (!(bp->b_state & XFS_BSTATE_DISPOSE)) {
list_lru_del(&bp->b_target->bt_lru, &bp->b_lru);
} else {
ASSERT(list_empty(&bp->b_lru));
}
spin_unlock(&bp->b_lock);
ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
rb_erase(&bp->b_rbnode, &pag->pag_buf_tree);
spin_unlock(&pag->pag_buf_lock);
xfs_perag_put(pag);
xfs_buf_free(bp);
}
}
}
/*
* Lock a buffer object, if it is not already locked.
*
* If we come across a stale, pinned, locked buffer, we know that we are
* being asked to lock a buffer that has been reallocated. Because it is
* pinned, we know that the log has not been pushed to disk and hence it
* will still be locked. Rather than continuing to have trylock attempts
* fail until someone else pushes the log, push it ourselves before
* returning. This means that the xfsaild will not get stuck trying
* to push on stale inode buffers.
*/
int
xfs_buf_trylock(
struct xfs_buf *bp)
{
int locked;
locked = down_trylock(&bp->b_sema) == 0;
if (locked)
XB_SET_OWNER(bp);
trace_xfs_buf_trylock(bp, _RET_IP_);
return locked;
}
/*
* Lock a buffer object.
*
* If we come across a stale, pinned, locked buffer, we know that we
* are being asked to lock a buffer that has been reallocated. Because
* it is pinned, we know that the log has not been pushed to disk and
* hence it will still be locked. Rather than sleeping until someone
* else pushes the log, push it ourselves before trying to get the lock.
*/
void
xfs_buf_lock(
struct xfs_buf *bp)
{
trace_xfs_buf_lock(bp, _RET_IP_);
if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
xfs_log_force(bp->b_target->bt_mount, 0);
down(&bp->b_sema);
XB_SET_OWNER(bp);
trace_xfs_buf_lock_done(bp, _RET_IP_);
}
void
xfs_buf_unlock(
struct xfs_buf *bp)
{
XB_CLEAR_OWNER(bp);
up(&bp->b_sema);
trace_xfs_buf_unlock(bp, _RET_IP_);
}
STATIC void
xfs_buf_wait_unpin(
xfs_buf_t *bp)
{
DECLARE_WAITQUEUE (wait, current);
if (atomic_read(&bp->b_pin_count) == 0)
return;
add_wait_queue(&bp->b_waiters, &wait);
for (;;) {
set_current_state(TASK_UNINTERRUPTIBLE);
if (atomic_read(&bp->b_pin_count) == 0)
break;
io_schedule();
}
remove_wait_queue(&bp->b_waiters, &wait);
set_current_state(TASK_RUNNING);
}
/*
* Buffer Utility Routines
*/
STATIC void
xfs_buf_iodone_work(
struct work_struct *work)
{
struct xfs_buf *bp =
container_of(work, xfs_buf_t, b_iodone_work);
bool read = !!(bp->b_flags & XBF_READ);
bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
/* only validate buffers that were read without errors */
if (read && bp->b_ops && !bp->b_error && (bp->b_flags & XBF_DONE))
bp->b_ops->verify_read(bp);
if (bp->b_iodone)
(*(bp->b_iodone))(bp);
else if (bp->b_flags & XBF_ASYNC)
xfs_buf_relse(bp);
else {
ASSERT(read && bp->b_ops);
complete(&bp->b_iowait);
}
}
void
xfs_buf_ioend(
struct xfs_buf *bp,
int schedule)
{
bool read = !!(bp->b_flags & XBF_READ);
trace_xfs_buf_iodone(bp, _RET_IP_);
if (bp->b_error == 0)
bp->b_flags |= XBF_DONE;
if (bp->b_iodone || (read && bp->b_ops) || (bp->b_flags & XBF_ASYNC)) {
if (schedule) {
INIT_WORK(&bp->b_iodone_work, xfs_buf_iodone_work);
queue_work(xfslogd_workqueue, &bp->b_iodone_work);
} else {
xfs_buf_iodone_work(&bp->b_iodone_work);
}
} else {
bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
complete(&bp->b_iowait);
}
}
void
xfs_buf_ioerror(
xfs_buf_t *bp,
int error)
{
ASSERT(error >= 0 && error <= 0xffff);
bp->b_error = (unsigned short)error;
trace_xfs_buf_ioerror(bp, error, _RET_IP_);
}
void
xfs_buf_ioerror_alert(
struct xfs_buf *bp,
const char *func)
{
xfs_alert(bp->b_target->bt_mount,
"metadata I/O error: block 0x%llx (\"%s\") error %d numblks %d",
(__uint64_t)XFS_BUF_ADDR(bp), func, bp->b_error, bp->b_length);
}
/*
* Called when we want to stop a buffer from getting written or read.
* We attach the EIO error, muck with its flags, and call xfs_buf_ioend
* so that the proper iodone callbacks get called.
*/
STATIC int
xfs_bioerror(
xfs_buf_t *bp)
{
#ifdef XFSERRORDEBUG
ASSERT(XFS_BUF_ISREAD(bp) || bp->b_iodone);
#endif
/*
* No need to wait until the buffer is unpinned, we aren't flushing it.
*/
xfs_buf_ioerror(bp, EIO);
/*
* We're calling xfs_buf_ioend, so delete XBF_DONE flag.
*/
XFS_BUF_UNREAD(bp);
XFS_BUF_UNDONE(bp);
xfs_buf_stale(bp);
xfs_buf_ioend(bp, 0);
return EIO;
}
/*
* Same as xfs_bioerror, except that we are releasing the buffer
* here ourselves, and avoiding the xfs_buf_ioend call.
* This is meant for userdata errors; metadata bufs come with
* iodone functions attached, so that we can track down errors.
*/
int
xfs_bioerror_relse(
struct xfs_buf *bp)
{
int64_t fl = bp->b_flags;
/*
* No need to wait until the buffer is unpinned.
* We aren't flushing it.
*
* chunkhold expects B_DONE to be set, whether
* we actually finish the I/O or not. We don't want to
* change that interface.
*/
XFS_BUF_UNREAD(bp);
XFS_BUF_DONE(bp);
xfs_buf_stale(bp);
bp->b_iodone = NULL;
if (!(fl & XBF_ASYNC)) {
/*
* Mark b_error and B_ERROR _both_.
* Lots of chunkcache code assumes that.
* There's no reason to mark error for
* ASYNC buffers.
*/
xfs_buf_ioerror(bp, EIO);
complete(&bp->b_iowait);
} else {
xfs_buf_relse(bp);
}
return EIO;
}
STATIC int
xfs_bdstrat_cb(
struct xfs_buf *bp)
{
if (XFS_FORCED_SHUTDOWN(bp->b_target->bt_mount)) {
trace_xfs_bdstrat_shut(bp, _RET_IP_);
/*
* Metadata write that didn't get logged but
* written delayed anyway. These aren't associated
* with a transaction, and can be ignored.
*/
if (!bp->b_iodone && !XFS_BUF_ISREAD(bp))
return xfs_bioerror_relse(bp);
else
return xfs_bioerror(bp);
}
xfs_buf_iorequest(bp);
return 0;
}
int
xfs_bwrite(
struct xfs_buf *bp)
{
int error;
ASSERT(xfs_buf_islocked(bp));
bp->b_flags |= XBF_WRITE;
bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q | XBF_WRITE_FAIL);
xfs_bdstrat_cb(bp);
error = xfs_buf_iowait(bp);
if (error) {
xfs_force_shutdown(bp->b_target->bt_mount,
SHUTDOWN_META_IO_ERROR);
}
return error;
}
STATIC void
_xfs_buf_ioend(
xfs_buf_t *bp,
int schedule)
{
if (atomic_dec_and_test(&bp->b_io_remaining) == 1)
xfs_buf_ioend(bp, schedule);
}
STATIC void
xfs_buf_bio_end_io(
struct bio *bio,
int error)
{
xfs_buf_t *bp = (xfs_buf_t *)bio->bi_private;
/*
* don't overwrite existing errors - otherwise we can lose errors on
* buffers that require multiple bios to complete.
*/
if (!bp->b_error)
xfs_buf_ioerror(bp, -error);
if (!bp->b_error && xfs_buf_is_vmapped(bp) && (bp->b_flags & XBF_READ))
invalidate_kernel_vmap_range(bp->b_addr, xfs_buf_vmap_len(bp));
_xfs_buf_ioend(bp, 1);
bio_put(bio);
}
static void
xfs_buf_ioapply_map(
struct xfs_buf *bp,
int map,
int *buf_offset,
int *count,
int rw)
{
int page_index;
int total_nr_pages = bp->b_page_count;
int nr_pages;
struct bio *bio;
sector_t sector = bp->b_maps[map].bm_bn;
int size;
int offset;
total_nr_pages = bp->b_page_count;
/* skip the pages in the buffer before the start offset */
page_index = 0;
offset = *buf_offset;
while (offset >= PAGE_SIZE) {
page_index++;
offset -= PAGE_SIZE;
}
/*
* Limit the IO size to the length of the current vector, and update the
* remaining IO count for the next time around.
*/
size = min_t(int, BBTOB(bp->b_maps[map].bm_len), *count);
*count -= size;
*buf_offset += size;
next_chunk:
atomic_inc(&bp->b_io_remaining);
nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
if (nr_pages > total_nr_pages)
nr_pages = total_nr_pages;
bio = bio_alloc(GFP_NOIO, nr_pages);
bio->bi_bdev = bp->b_target->bt_bdev;
bio->bi_iter.bi_sector = sector;
bio->bi_end_io = xfs_buf_bio_end_io;
bio->bi_private = bp;
for (; size && nr_pages; nr_pages--, page_index++) {
int rbytes, nbytes = PAGE_SIZE - offset;
if (nbytes > size)
nbytes = size;
rbytes = bio_add_page(bio, bp->b_pages[page_index], nbytes,
offset);
if (rbytes < nbytes)
break;
offset = 0;
sector += BTOBB(nbytes);
size -= nbytes;
total_nr_pages--;
}
if (likely(bio->bi_iter.bi_size)) {
if (xfs_buf_is_vmapped(bp)) {
flush_kernel_vmap_range(bp->b_addr,
xfs_buf_vmap_len(bp));
}
submit_bio(rw, bio);
if (size)
goto next_chunk;
} else {
/*
* This is guaranteed not to be the last io reference count
* because the caller (xfs_buf_iorequest) holds a count itself.
*/
atomic_dec(&bp->b_io_remaining);
xfs_buf_ioerror(bp, EIO);
bio_put(bio);
}
}
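The next_chunk loop above splits one buffer's IO across as many bios as needed, each clamped to a maximum page count. A minimal model of that splitting (`SKETCH_BIO_MAX_PAGES` and `count_bios()` are invented for illustration; the kernel derives its clamp from BIO_MAX_SECTORS and the page/basic-block shift difference):

```c
#include <assert.h>

#define SKETCH_BIO_MAX_PAGES 256	/* stand-in for the bio size clamp */

/*
 * Split an IO over 'total_pages' pages into bio-sized chunks, the way
 * the next_chunk loop in xfs_buf_ioapply_map() does, and return how
 * many bios would be submitted.
 */
static int count_bios(int total_pages)
{
	int bios = 0;

	while (total_pages > 0) {
		int nr = total_pages < SKETCH_BIO_MAX_PAGES ?
			 total_pages : SKETCH_BIO_MAX_PAGES;
		total_pages -= nr;
		bios++;
	}
	return bios;
}
```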
STATIC void
_xfs_buf_ioapply(
struct xfs_buf *bp)
{
struct blk_plug plug;
int rw;
int offset;
int size;
int i;
/*
* Make sure we capture only current IO errors rather than stale errors
* left over from previous use of the buffer (e.g. failed readahead).
*/
bp->b_error = 0;
if (bp->b_flags & XBF_WRITE) {
if (bp->b_flags & XBF_SYNCIO)
rw = WRITE_SYNC;
else
rw = WRITE;
if (bp->b_flags & XBF_FUA)
rw |= REQ_FUA;
if (bp->b_flags & XBF_FLUSH)
rw |= REQ_FLUSH;
/*
* Run the write verifier callback function if it exists. If
* this function fails it will mark the buffer with an error and
* the IO should not be dispatched.
*/
if (bp->b_ops) {
bp->b_ops->verify_write(bp);
if (bp->b_error) {
xfs_force_shutdown(bp->b_target->bt_mount,
SHUTDOWN_CORRUPT_INCORE);
return;
}
}
} else if (bp->b_flags & XBF_READ_AHEAD) {
rw = READA;
} else {
rw = READ;
}
/* we only use the buffer cache for meta-data */
rw |= REQ_META;
/*
* Walk all the vectors issuing IO on them. Set up the initial offset
* into the buffer and the desired IO size before we start -
* _xfs_buf_ioapply_vec() will modify them appropriately for each
* subsequent call.
*/
offset = bp->b_offset;
size = BBTOB(bp->b_io_length);
blk_start_plug(&plug);
for (i = 0; i < bp->b_map_count; i++) {
xfs_buf_ioapply_map(bp, i, &offset, &size, rw);
if (bp->b_error)
break;
if (size <= 0)
break; /* all done */
}
blk_finish_plug(&plug);
}
void
xfs_buf_iorequest(
xfs_buf_t *bp)
{
trace_xfs_buf_iorequest(bp, _RET_IP_);
ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
if (bp->b_flags & XBF_WRITE)
xfs_buf_wait_unpin(bp);
xfs_buf_hold(bp);
/*
* Set the count to 1 initially, this will stop an I/O
* completion callout which happens before we have started
* all the I/O from calling xfs_buf_ioend too early.
*/
atomic_set(&bp->b_io_remaining, 1);
_xfs_buf_ioapply(bp);
/*
* If _xfs_buf_ioapply failed, we'll get back here with
* only the reference we took above. _xfs_buf_ioend will
* drop it to zero, so we'd better not queue it for later,
* or we'll free it before it's done.
*/
_xfs_buf_ioend(bp, bp->b_error ? 0 : 1);
xfs_buf_rele(bp);
}
/*
* Waits for I/O to complete on the buffer supplied. It returns immediately if
* no I/O is pending or there is already a pending error on the buffer, in which
* case nothing will ever complete. It returns the I/O error code, if any, or
* 0 if there was no error.
*/
int
xfs_buf_iowait(
xfs_buf_t *bp)
{
trace_xfs_buf_iowait(bp, _RET_IP_);
if (!bp->b_error)
wait_for_completion(&bp->b_iowait);
trace_xfs_buf_iowait_done(bp, _RET_IP_);
return bp->b_error;
}
xfs_caddr_t
xfs_buf_offset(
xfs_buf_t *bp,
size_t offset)
{
struct page *page;
if (bp->b_addr)
return bp->b_addr + offset;
offset += bp->b_offset;
page = bp->b_pages[offset >> PAGE_SHIFT];
return (xfs_caddr_t)page_address(page) + (offset & (PAGE_SIZE-1));
}
/*
* Move data into or out of a buffer.
*/
void
xfs_buf_iomove(
xfs_buf_t *bp, /* buffer to process */
size_t boff, /* starting buffer offset */
size_t bsize, /* length to copy */
void *data, /* data address */
xfs_buf_rw_t mode) /* read/write/zero flag */
{
size_t bend;
bend = boff + bsize;
while (boff < bend) {
struct page *page;
int page_index, page_offset, csize;
page_index = (boff + bp->b_offset) >> PAGE_SHIFT;
page_offset = (boff + bp->b_offset) & ~PAGE_MASK;
page = bp->b_pages[page_index];
csize = min_t(size_t, PAGE_SIZE - page_offset,
BBTOB(bp->b_io_length) - boff);
ASSERT((csize + page_offset) <= PAGE_SIZE);
switch (mode) {
case XBRW_ZERO:
memset(page_address(page) + page_offset, 0, csize);
break;
case XBRW_READ:
memcpy(data, page_address(page) + page_offset, csize);
break;
case XBRW_WRITE:
memcpy(page_address(page) + page_offset, data, csize);
}
boff += csize;
data += csize;
}
}
/*
* Handling of buffer targets (buftargs).
*/
/*
* Wait for any bufs with callbacks that have been submitted but have not yet
* returned. These buffers will have an elevated hold count, so wait on those
* while freeing all the buffers only held by the LRU.
*/
static enum lru_status
xfs_buftarg_wait_rele(
struct list_head *item,
spinlock_t *lru_lock,
void *arg)
{
struct xfs_buf *bp = container_of(item, struct xfs_buf, b_lru);
struct list_head *dispose = arg;
if (atomic_read(&bp->b_hold) > 1) {
/* need to wait, so skip it this pass */
trace_xfs_buf_wait_buftarg(bp, _RET_IP_);
return LRU_SKIP;
}
if (!spin_trylock(&bp->b_lock))
return LRU_SKIP;
/*
* clear the LRU reference count so the buffer doesn't get
* ignored in xfs_buf_rele().
*/
atomic_set(&bp->b_lru_ref, 0);
bp->b_state |= XFS_BSTATE_DISPOSE;
list_move(item, dispose);
spin_unlock(&bp->b_lock);
return LRU_REMOVED;
}
void
xfs_wait_buftarg(
struct xfs_buftarg *btp)
{
LIST_HEAD(dispose);
int loop = 0;
/* loop until there is nothing left on the lru list. */
while (list_lru_count(&btp->bt_lru)) {
list_lru_walk(&btp->bt_lru, xfs_buftarg_wait_rele,
&dispose, LONG_MAX);
while (!list_empty(&dispose)) {
struct xfs_buf *bp;
bp = list_first_entry(&dispose, struct xfs_buf, b_lru);
list_del_init(&bp->b_lru);
if (bp->b_flags & XBF_WRITE_FAIL) {
xfs_alert(btp->bt_mount,
"Corruption Alert: Buffer at block 0x%llx had permanent write failures!\n"
"Please run xfs_repair to determine the extent of the problem.",
(long long)bp->b_bn);
}
xfs_buf_rele(bp);
}
if (loop++ != 0)
delay(100);
}
}
static enum lru_status
xfs_buftarg_isolate(
struct list_head *item,
spinlock_t *lru_lock,
void *arg)
{
struct xfs_buf *bp = container_of(item, struct xfs_buf, b_lru);
struct list_head *dispose = arg;
/*
* we are inverting the lru lock/bp->b_lock here, so use a trylock.
* If we fail to get the lock, just skip it.
*/
if (!spin_trylock(&bp->b_lock))
return LRU_SKIP;
/*
* Decrement the b_lru_ref count unless the value is already
* zero. If the value is already zero, we need to reclaim the
* buffer, otherwise it gets another trip through the LRU.
*/
if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
spin_unlock(&bp->b_lock);
return LRU_ROTATE;
}
bp->b_state |= XFS_BSTATE_DISPOSE;
list_move(item, dispose);
spin_unlock(&bp->b_lock);
return LRU_REMOVED;
}
static unsigned long
xfs_buftarg_shrink_scan(
struct shrinker *shrink,
struct shrink_control *sc)
{
struct xfs_buftarg *btp = container_of(shrink,
struct xfs_buftarg, bt_shrinker);
LIST_HEAD(dispose);
unsigned long freed;
unsigned long nr_to_scan = sc->nr_to_scan;
freed = list_lru_walk_node(&btp->bt_lru, sc->nid, xfs_buftarg_isolate,
&dispose, &nr_to_scan);
while (!list_empty(&dispose)) {
struct xfs_buf *bp;
bp = list_first_entry(&dispose, struct xfs_buf, b_lru);
list_del_init(&bp->b_lru);
xfs_buf_rele(bp);
}
return freed;
}
static unsigned long
xfs_buftarg_shrink_count(
struct shrinker *shrink,
struct shrink_control *sc)
{
struct xfs_buftarg *btp = container_of(shrink,
struct xfs_buftarg, bt_shrinker);
return list_lru_count_node(&btp->bt_lru, sc->nid);
}
void
xfs_free_buftarg(
struct xfs_mount *mp,
struct xfs_buftarg *btp)
{
unregister_shrinker(&btp->bt_shrinker);
list_lru_destroy(&btp->bt_lru);
if (mp->m_flags & XFS_MOUNT_BARRIER)
xfs_blkdev_issue_flush(btp);
kmem_free(btp);
}
int
xfs_setsize_buftarg(
xfs_buftarg_t *btp,
unsigned int sectorsize)
{
/* Set up metadata sector size info */
btp->bt_meta_sectorsize = sectorsize;
btp->bt_meta_sectormask = sectorsize - 1;
if (set_blocksize(btp->bt_bdev, sectorsize)) {
char name[BDEVNAME_SIZE];
bdevname(btp->bt_bdev, name);
xfs_warn(btp->bt_mount,
"Cannot set_blocksize to %u on device %s",
sectorsize, name);
return EINVAL;
}
/* Set up device logical sector size mask */
btp->bt_logical_sectorsize = bdev_logical_block_size(btp->bt_bdev);
btp->bt_logical_sectormask = bdev_logical_block_size(btp->bt_bdev) - 1;
return 0;
}
/*
* When allocating the initial buffer target we have not yet
* read in the superblock, so don't know what sized sectors
* are being used at this early stage. Play safe.
*/
STATIC int
xfs_setsize_buftarg_early(
xfs_buftarg_t *btp,
struct block_device *bdev)
{
return xfs_setsize_buftarg(btp, bdev_logical_block_size(bdev));
}
xfs_buftarg_t *
xfs_alloc_buftarg(
struct xfs_mount *mp,
struct block_device *bdev)
{
xfs_buftarg_t *btp;
btp = kmem_zalloc(sizeof(*btp), KM_SLEEP | KM_NOFS);
btp->bt_mount = mp;
btp->bt_dev = bdev->bd_dev;
btp->bt_bdev = bdev;
btp->bt_bdi = blk_get_backing_dev_info(bdev);
if (!btp->bt_bdi)
goto error;
if (xfs_setsize_buftarg_early(btp, bdev))
goto error;
if (list_lru_init(&btp->bt_lru))
goto error;
btp->bt_shrinker.count_objects = xfs_buftarg_shrink_count;
btp->bt_shrinker.scan_objects = xfs_buftarg_shrink_scan;
btp->bt_shrinker.seeks = DEFAULT_SEEKS;
btp->bt_shrinker.flags = SHRINKER_NUMA_AWARE;
register_shrinker(&btp->bt_shrinker);
return btp;
error:
kmem_free(btp);
return NULL;
}
/*
* Add a buffer to the delayed write list.
*
* This queues a buffer for writeout if it hasn't already been. Note that
* neither this routine nor the buffer list submission functions perform
* any internal synchronization. It is expected that the lists are thread-local
* to the callers.
*
* Returns true if we queued up the buffer, or false if it already had
* been on the buffer list.
*/
bool
xfs_buf_delwri_queue(
struct xfs_buf *bp,
struct list_head *list)
{
ASSERT(xfs_buf_islocked(bp));
ASSERT(!(bp->b_flags & XBF_READ));
/*
* If the buffer is already marked delwri it already is queued up
* by someone else for immediate writeout. Just ignore it in that
* case.
*/
if (bp->b_flags & _XBF_DELWRI_Q) {
trace_xfs_buf_delwri_queued(bp, _RET_IP_);
return false;
}
trace_xfs_buf_delwri_queue(bp, _RET_IP_);
/*
* If a buffer gets written out synchronously or marked stale while it
* is on a delwri list we lazily remove it. To do this, the other party
* clears the _XBF_DELWRI_Q flag but otherwise leaves the buffer alone.
* It remains referenced and on the list. In a rare corner case it
* might get readded to a delwri list after the synchronous writeout, in
* which case we just need to re-add the flag here.
*/
bp->b_flags |= _XBF_DELWRI_Q;
if (list_empty(&bp->b_list)) {
atomic_inc(&bp->b_hold);
list_add_tail(&bp->b_list, list);
}
return true;
}
/*
* Compare function is more complex than it needs to be because
* the return value is only 32 bits and we are doing comparisons
* on 64 bit values
*/
static int
xfs_buf_cmp(
void *priv,
struct list_head *a,
struct list_head *b)
{
struct xfs_buf *ap = container_of(a, struct xfs_buf, b_list);
struct xfs_buf *bp = container_of(b, struct xfs_buf, b_list);
xfs_daddr_t diff;
diff = ap->b_maps[0].bm_bn - bp->b_maps[0].bm_bn;
if (diff < 0)
return -1;
if (diff > 0)
return 1;
return 0;
}
static void
xfs_buf_delwri_submit_buffers(
struct list_head *buffer_list,
bool wait)
{
struct xfs_buf *bp, *n;
struct blk_plug plug;
blk_start_plug(&plug);
list_for_each_entry_safe(bp, n, buffer_list, b_list) {
bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL);
bp->b_flags |= XBF_WRITE;
if (!wait) {
bp->b_flags |= XBF_ASYNC;
list_del_init(&bp->b_list);
}
xfs_bdstrat_cb(bp);
}
blk_finish_plug(&plug);
}
static int
__xfs_buf_delwri_submit(
struct list_head *buffer_list,
struct list_head *io_list,
bool wait)
{
struct xfs_buf *bp, *n;
int pinned = 0;
LIST_HEAD (submit_list);
int count = 0;
list_sort(NULL, buffer_list, xfs_buf_cmp);
list_for_each_entry_safe(bp, n, buffer_list, b_list) {
if (!wait) {
if (xfs_buf_ispinned(bp)) {
pinned++;
continue;
}
if (!xfs_buf_trylock(bp))
continue;
} else {
xfs_buf_lock(bp);
}
/*
* Someone else might have written the buffer synchronously or
* marked it stale in the meantime. In that case only the
* _XBF_DELWRI_Q flag got cleared, and we have to drop the
* reference and remove it from the list here.
*/
if (!(bp->b_flags & _XBF_DELWRI_Q)) {
list_del_init(&bp->b_list);
xfs_buf_relse(bp);
continue;
}
//list_move_tail(&bp->b_list, io_list);
list_move_tail(&bp->b_list, &submit_list);
trace_xfs_buf_delwri_split(bp, _RET_IP_);
if (count++ < 50) {
continue;
}
xfs_buf_delwri_submit_buffers(&submit_list, wait);
count = 0;
}
// list_sort(NULL, io_list, xfs_buf_cmp);
if (!list_empty(&submit_list)) {
xfs_buf_delwri_submit_buffers(&submit_list, wait);
}
return pinned;
}
/*
* Write out a buffer list asynchronously.
*
* This will take the @buffer_list, write all non-locked and non-pinned buffers
* out and not wait for I/O completion on any of the buffers. This interface
* is only safely useable for callers that can track I/O completion by higher
* level means, e.g. AIL pushing as the @buffer_list is consumed in this
* function.
*/
int
xfs_buf_delwri_submit_nowait(
struct list_head *buffer_list)
{
LIST_HEAD (io_list);
return __xfs_buf_delwri_submit(buffer_list, &io_list, false);
}
/*
* Write out a buffer list synchronously.
*
* This will take the @buffer_list, write all buffers out and wait for I/O
* completion on all of the buffers. @buffer_list is consumed by the function,
* so callers must have some other way of tracking buffers if they require such
* functionality.
*/
int
xfs_buf_delwri_submit(
struct list_head *buffer_list)
{
LIST_HEAD (io_list);
int error = 0, error2;
struct xfs_buf *bp;
__xfs_buf_delwri_submit(buffer_list, &io_list, true);
/* Wait for IO to complete. */
while (!list_empty(&io_list)) {
bp = list_first_entry(&io_list, struct xfs_buf, b_list);
list_del_init(&bp->b_list);
error2 = xfs_buf_iowait(bp);
xfs_buf_relse(bp);
if (!error)
error = error2;
}
return error;
}
int __init
xfs_buf_init(void)
{
xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
KM_ZONE_HWALIGN, NULL);
if (!xfs_buf_zone)
goto out;
xfslogd_workqueue = alloc_workqueue("xfslogd",
WQ_MEM_RECLAIM | WQ_HIGHPRI, 1);
if (!xfslogd_workqueue)
goto out_free_buf_zone;
return 0;
out_free_buf_zone:
kmem_zone_destroy(xfs_buf_zone);
out:
return -ENOMEM;
}
void
xfs_buf_terminate(void)
{
destroy_workqueue(xfslogd_workqueue);
kmem_zone_destroy(xfs_buf_zone);
}
* Re: XFS Syncd
2015-06-08 21:56 ` Shrinand Javadekar
@ 2015-06-09 23:12 ` Dave Chinner
0 siblings, 0 replies; 21+ messages in thread
From: Dave Chinner @ 2015-06-09 23:12 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: xfs
On Mon, Jun 08, 2015 at 02:56:10PM -0700, Shrinand Javadekar wrote:
> I gave this another shot after understanding it better. I now have a
> version that doesn't crash. I'm attaching the diff and the new
> xfs_buf.c file.
> However, in the new version the io_list list doesn't get populated at
> all in __xfs_buf_delwri_submit(). I haven't completely familiarized
> myself with what callers do with this list. Callers initialize this
> list and send a pointer to __xfs_buf_delwri_submit() and expect a
> populated list back.
Better to send diffs than the entire file...
I'd also prefer that you don't top post, because it makes it hard to
follow the thread and comment on the relevant issues...
> Nonetheless, I ran my experiments after building and inserting the XFS
> module with this change. Strangely enough, I see the performance going
> down by ~25% compared to the original XFS module.
So perhaps you are seeing a different problem.
....
> /*
> * Add a buffer to the delayed write list.
> *
> * This queues a buffer for writeout if it hasn't already been. Note that
> * neither this routine nor the buffer list submission functions perform
> * any internal synchronization. It is expected that the lists are thread-local
> * to the callers.
> *
> * Returns true if we queued up the buffer, or false if it already had
> * been on the buffer list.
> */
> bool
> xfs_buf_delwri_queue(
> struct xfs_buf *bp,
> struct list_head *list)
> {
> ASSERT(xfs_buf_islocked(bp));
> ASSERT(!(bp->b_flags & XBF_READ));
>
> /*
> * If the buffer is already marked delwri it already is queued up
> * by someone else for immediate writeout. Just ignore it in that
> * case.
> */
> if (bp->b_flags & _XBF_DELWRI_Q) {
> trace_xfs_buf_delwri_queued(bp, _RET_IP_);
> return false;
> }
>
> trace_xfs_buf_delwri_queue(bp, _RET_IP_);
>
> /*
> * If a buffer gets written out synchronously or marked stale while it
> * is on a delwri list we lazily remove it. To do this, the other party
> * clears the _XBF_DELWRI_Q flag but otherwise leaves the buffer alone.
> * It remains referenced and on the list. In a rare corner case it
> * might get readded to a delwri list after the synchronous writeout, in
> * which case we just need to re-add the flag here.
> */
> bp->b_flags |= _XBF_DELWRI_Q;
> if (list_empty(&bp->b_list)) {
> atomic_inc(&bp->b_hold);
> list_add_tail(&bp->b_list, list);
> }
>
> return true;
> }
>
> /*
> * Compare function is more complex than it needs to be because
> * the return value is only 32 bits and we are doing comparisons
> * on 64 bit values
> */
> static int
> xfs_buf_cmp(
> void *priv,
> struct list_head *a,
> struct list_head *b)
> {
> struct xfs_buf *ap = container_of(a, struct xfs_buf, b_list);
> struct xfs_buf *bp = container_of(b, struct xfs_buf, b_list);
> xfs_daddr_t diff;
>
> diff = ap->b_maps[0].bm_bn - bp->b_maps[0].bm_bn;
> if (diff < 0)
> return -1;
> if (diff > 0)
> return 1;
> return 0;
> }
>
>
> static void
> xfs_buf_delwri_submit_buffers(
> struct list_head *buffer_list,
> bool wait)
> {
> struct xfs_buf *bp, *n;
> struct blk_plug plug;
>
> blk_start_plug(&plug);
> list_for_each_entry_safe(bp, n, buffer_list, b_list) {
> bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL);
> bp->b_flags |= XBF_WRITE;
>
> if (!wait) {
> bp->b_flags |= XBF_ASYNC;
> list_del_init(&bp->b_list);
> }
> xfs_bdstrat_cb(bp);
> }
> blk_finish_plug(&plug);
> }
You need to build the iolist here for the wait == true case, as per
the original patch I sent. Otherwise, you aren't clearing buffers
from the submit list correctly so iterated calls will attempt to
submit the same buffers repeatedly.
(FWIW, you've got some wacky whitespace issues there...)
Other than the iolist building, there's nothing obviously wrong
here. But given you weren't able to capture any blocked stack
traces when the delays were happening, this was really just a shot in
the dark.
To move on, I need to know what is actually blocking on metadata
writeback, so I need blocked process stack traces from 'echo w >
/proc/sysrq-trigger' when the system is in that slow state.
> /*
> * Write out a buffer list asynchronously.
> *
> * This will take the @buffer_list, write all non-locked and non-pinned buffers
> * out and not wait for I/O completion on any of the buffers. This interface
> * is only safely useable for callers that can track I/O completion by higher
> * level means, e.g. AIL pushing as the @buffer_list is consumed in this
> * function.
> */
> int
> xfs_buf_delwri_submit_nowait(
> struct list_head *buffer_list)
> {
> LIST_HEAD (io_list);
> return __xfs_buf_delwri_submit(buffer_list, &io_list, false);
> }
This is where the AIL pushing enters, so not building the iolist
here isn't an issue. However...
> /*
> * Write out a buffer list synchronously.
> *
> * This will take the @buffer_list, write all buffers out and wait for I/O
> * completion on all of the buffers. @buffer_list is consumed by the function,
> * so callers must have some other way of tracking buffers if they require such
> * functionality.
> */
> int
> xfs_buf_delwri_submit(
> struct list_head *buffer_list)
> {
> LIST_HEAD (io_list);
> int error = 0, error2;
> struct xfs_buf *bp;
>
> __xfs_buf_delwri_submit(buffer_list, &io_list, true);
>
> /* Wait for IO to complete. */
> while (!list_empty(&io_list)) {
> bp = list_first_entry(&io_list, struct xfs_buf, b_list);
>
> list_del_init(&bp->b_list);
> error2 = xfs_buf_iowait(bp);
> xfs_buf_relse(bp);
> if (!error)
> error = error2;
> }
>
> return error;
unmount enters here, and so not waiting because the iolist is not
built will result in use-after-free bugs on unmount as IO is not
correctly waited for...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 21+ messages
2015-04-10 4:23 XFS Syncd Shrinand Javadekar
2015-04-10 6:32 ` Dave Chinner
2015-04-10 6:51 ` Shrinand Javadekar
2015-04-10 7:21 ` Dave Chinner
2015-04-10 7:29 ` Shrinand Javadekar
2015-04-10 13:12 ` Dave Chinner
2015-06-02 18:43 ` Shrinand Javadekar
2015-06-03 3:57 ` Dave Chinner
2015-06-03 23:18 ` Shrinand Javadekar
2015-06-04 0:35 ` Dave Chinner
2015-06-04 0:58 ` Shrinand Javadekar
2015-06-04 1:55 ` Dave Chinner
2015-06-04 1:25 ` Dave Chinner
2015-06-04 2:03 ` Dave Chinner
2015-06-04 6:23 ` Dave Chinner
2015-06-04 7:26 ` Shrinand Javadekar
2015-06-04 22:08 ` Dave Chinner
2015-06-05 0:59 ` Shrinand Javadekar
2015-06-05 17:31 ` Shrinand Javadekar
2015-06-08 21:56 ` Shrinand Javadekar
2015-06-09 23:12 ` Dave Chinner