* Still seeing hangs in xlog_grant_log_space
@ 2012-04-23 12:09 Juerg Haefliger
  2012-04-23 14:38 ` Dave Chinner
  2012-05-24 20:18 ` Peter Watkins
  0 siblings, 2 replies; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-23 12:09 UTC (permalink / raw)
  To: xfs

Hi,

I have a test system that I'm using to try to force an XFS filesystem
hang since we're encountering that problem sporadically in production
running a 2.6.38-8 Natty kernel. The original idea was to use this
system to find the patches that fix the issue but I've tried a whole
bunch of kernels and they all hang eventually (anywhere from 5 to 45
mins) with the stack trace shown below. Only an emergency flush will
bring the filesystem back. I tried kernels 3.0.29, 3.1.10, 3.2.15,
3.3.2. From reading through the mail archives, I get the impression
that this should be fixed in 3.1.

What makes the test system special is:
1) The test partition uses 1024 block size and 576b log size.
2) The RAID controller cache is disabled.

I can't seem to hit the problem without the above modifications.

For the IO workload I pre-create 8000 files with random content and
sizes between 1k and 128k on the test partition. Then I run a tool
that spawns a bunch of threads which just copy these files to a
different directory on the same partition. At the same time there are
other threads that rename, remove and overwrite random files in the
destination directory keeping the file count at around 500.
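In shell pseudo-code, the workload looks roughly like this (heavily scaled down: 80 files instead of 8000, short fixed loops, deterministic sizes; the paths, counts and worker logic of the real tool differ):

```shell
#!/bin/sh
# Scaled-down sketch of the reproducer workload (illustrative only).
BASE=$(mktemp -d)
SRC="$BASE/src"; DST="$BASE/dst"
mkdir -p "$SRC" "$DST"

# Pre-create source files with sizes spread between 1k and 127k.
i=0
while [ "$i" -lt 80 ]; do
    dd if=/dev/urandom of="$SRC/f$i" bs=1024 count=$(( i % 127 + 1 )) 2>/dev/null
    i=$(( i + 1 ))
done

# Copier worker: copy source files into the destination directory.
copy_loop() {
    n=0
    while [ "$n" -lt 50 ]; do
        cp "$SRC/f$(( n % 80 ))" "$DST/f$n.w$1"
        n=$(( n + 1 ))
    done
}

# Churn worker: rename, remove and overwrite files in the destination,
# roughly bounding the file count.
churn_loop() {
    n=0
    while [ "$n" -lt 50 ]; do
        f=$(ls "$DST" 2>/dev/null | head -n 1)
        if [ -n "$f" ]; then
            case $(( n % 3 )) in
                0) mv "$DST/$f" "$DST/$f.r" 2>/dev/null ;;
                1) rm -f "$DST/$f" ;;
                2) dd if=/dev/urandom of="$DST/$f" bs=4096 count=1 2>/dev/null ;;
            esac
        fi
        n=$(( n + 1 ))
    done
}

for w in 1 2 3 4; do copy_loop "$w" & done
churn_loop &
wait
ls "$DST" | wc -l    # surviving destination files
```

The real tool keeps this running indefinitely; the hang shows up after the copy and churn threads have been hammering the small log concurrently for a while.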

Let me know what other information I can provide to pin this down.

Thanks
...Juerg


haefligerj@use0453rtk:/xfs-hang$ xfs_info /xfs-hang/
meta-data=/dev/mapper/vg00-tmp   isize=256    agcount=4, agsize=2441216 blks
         =                       sectsz=512   attr=2
data     =                       bsize=1024   blocks=9764864, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=1024   blocks=576, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

haefligerj@use0453rtk:/xfs-hang$ mount | grep xfs-hang
/dev/mapper/vg00-tmp on /xfs-hang type xfs (rw)
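FWIW, the test filesystem was created with something like the following (reconstructed from the xfs_info output above, not the exact command line I used; -l size=576b requests the log size in filesystem blocks):

```shell
# Recreate the reported geometry: 1024-byte blocks, 576-block internal log.
# WARNING: destroys whatever is on the device.
mkfs.xfs -f -b size=1024 -l size=576b /dev/mapper/vg00-tmp
mount /dev/mapper/vg00-tmp /xfs-hang
```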

Apr 23 11:25:45 use0453rtk kernel: [  719.663591] INFO: task kworker/6:2:367 blocked for more than 120 seconds.
Apr 23 11:25:45 use0453rtk kernel: [  719.663601] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 23 11:25:45 use0453rtk kernel: [  719.663609] kworker/6:2     D ffffffff818080c0     0   367      2 0x00000000
Apr 23 11:25:45 use0453rtk kernel: [  719.663616]  ffff8817ffddfc30 0000000000000046 0000000000000000 0000000000000000
Apr 23 11:25:45 use0453rtk kernel: [  719.663622]  ffff8817ffdde000 ffff8817ffdde000 ffff880c03304100 ffff8818003fc040
Apr 23 11:25:45 use0453rtk kernel: [  719.663627]  ffff8817ffddfc20 ffff880bffc94c00 ffff8818002f7aa8 0000000000000ab4
Apr 23 11:25:45 use0453rtk kernel: [  719.663633] Call Trace:
Apr 23 11:25:45 use0453rtk kernel: [  719.663644]  [<ffffffff8164a4bf>] schedule+0x3f/0x60
Apr 23 11:25:45 use0453rtk kernel: [  719.663671]  [<ffffffffa00edf63>] xlog_reserveq_wait+0x103/0x270 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663679]  [<ffffffff81085300>] ? try_to_wake_up+0x2b0/0x2b0
Apr 23 11:25:45 use0453rtk kernel: [  719.663693]  [<ffffffffa00ee367>] xlog_grant_log_space+0x157/0x200 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663708]  [<ffffffffa00f0cbb>] xfs_log_reserve+0x14b/0x1c0 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663722]  [<ffffffffa00eba4c>] xfs_trans_reserve+0x9c/0x200 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663735]  [<ffffffffa00a7e90>] ? xfs_reclaim_inode_grab+0x90/0x90 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663748]  [<ffffffffa00a7e90>] ? xfs_reclaim_inode_grab+0x90/0x90 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663760]  [<ffffffffa009d363>] xfs_fs_log_dummy+0x43/0x90 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663773]  [<ffffffffa00a7f0c>] xfs_sync_worker+0x7c/0x80 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663778]  [<ffffffff810704ab>] process_one_work+0x11b/0x4a0
Apr 23 11:25:45 use0453rtk kernel: [  719.663783]  [<ffffffff81070be9>] worker_thread+0x169/0x350
Apr 23 11:25:45 use0453rtk kernel: [  719.663787]  [<ffffffff81070a80>] ? rescuer_thread+0x210/0x210
Apr 23 11:25:45 use0453rtk kernel: [  719.663792]  [<ffffffff810755ae>] kthread+0x9e/0xb0
Apr 23 11:25:45 use0453rtk kernel: [  719.663799]  [<ffffffff816540e4>] kernel_thread_helper+0x4/0x10
Apr 23 11:25:45 use0453rtk kernel: [  719.663804]  [<ffffffff81075510>] ? flush_kthread_worker+0xc0/0xc0
Apr 23 11:25:45 use0453rtk kernel: [  719.663808]  [<ffffffff816540e0>] ? gs_change+0x13/0x13
Apr 23 11:25:45 use0453rtk kernel: [  719.663822] INFO: task flush-252:2:5916 blocked for more than 120 seconds.
Apr 23 11:25:45 use0453rtk kernel: [  719.663828] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 23 11:25:45 use0453rtk kernel: [  719.663836] flush-252:2     D ffffffff818080c0     0  5916      2 0x00000000
Apr 23 11:25:45 use0453rtk kernel: [  719.663841]  ffff880c0135f650 0000000000000046 ffff880c0135f600 ffffffff8164b571
Apr 23 11:25:45 use0453rtk kernel: [  719.663846]  ffff880c0135e000 ffff880c0135e000 ffff880c03308870 ffff880c0087e870
Apr 23 11:25:45 use0453rtk kernel: [  719.663851]  ffff880c0135f640 ffff880bffc94c00 ffff881800a5c4a8 00000000000163f0
Apr 23 11:25:45 use0453rtk kernel: [  719.663857] Call Trace:
Apr 23 11:25:45 use0453rtk kernel: [  719.663861]  [<ffffffff8164b571>] ? _raw_spin_unlock_irq+0x21/0x50
Apr 23 11:25:45 use0453rtk kernel: [  719.663866]  [<ffffffff8164a4bf>] schedule+0x3f/0x60
Apr 23 11:25:45 use0453rtk kernel: [  719.663880]  [<ffffffffa00edf63>] xlog_reserveq_wait+0x103/0x270 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663884]  [<ffffffff81085300>] ? try_to_wake_up+0x2b0/0x2b0
Apr 23 11:25:45 use0453rtk kernel: [  719.663898]  [<ffffffffa00ee367>] xlog_grant_log_space+0x157/0x200 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663912]  [<ffffffffa00f0cbb>] xfs_log_reserve+0x14b/0x1c0 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663926]  [<ffffffffa00eba4c>] xfs_trans_reserve+0x9c/0x200 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663940]  [<ffffffffa00eb8c1>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663953]  [<ffffffffa00a1bd6>] xfs_iomap_write_allocate+0x1d6/0x370 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663960]  [<ffffffff812e7307>] ? generic_make_request+0xc7/0x100
Apr 23 11:25:45 use0453rtk kernel: [  719.663964]  [<ffffffff812e73c7>] ? submit_bio+0x87/0x110
Apr 23 11:25:45 use0453rtk kernel: [  719.663974]  [<ffffffffa0094559>] xfs_map_blocks+0x269/0x2b0 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663985]  [<ffffffffa009473f>] xfs_vm_writepage+0x19f/0x540 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.663994]  [<ffffffff81117d07>] __writepage+0x17/0x40
Apr 23 11:25:45 use0453rtk kernel: [  719.663998]  [<ffffffff81118468>] write_cache_pages+0x228/0x4f0
Apr 23 11:25:45 use0453rtk kernel: [  719.664003]  [<ffffffff81117cf0>] ? set_page_dirty+0x70/0x70
Apr 23 11:25:45 use0453rtk kernel: [  719.664008]  [<ffffffff8111877a>] generic_writepages+0x4a/0x70
Apr 23 11:25:45 use0453rtk kernel: [  719.664018]  [<ffffffffa00932bc>] xfs_vm_writepages+0x5c/0x80 [xfs]
Apr 23 11:25:45 use0453rtk kernel: [  719.664023]  [<ffffffff8111a201>] do_writepages+0x21/0x40
Apr 23 11:25:45 use0453rtk kernel: [  719.664029]  [<ffffffff811981a5>] writeback_single_inode+0x185/0x470
Apr 23 11:25:45 use0453rtk kernel: [  719.664034]  [<ffffffff8119888d>] writeback_sb_inodes+0x19d/0x270
Apr 23 11:25:45 use0453rtk kernel: [  719.664039]  [<ffffffff811989f6>] __writeback_inodes_wb+0x96/0xc0
Apr 23 11:25:45 use0453rtk kernel: [  719.664043]  [<ffffffff81198d33>] wb_writeback+0x313/0x340
Apr 23 11:25:45 use0453rtk kernel: [  719.664048]  [<ffffffff81188ae2>] ? get_nr_inodes+0x52/0x70
Apr 23 11:25:45 use0453rtk kernel: [  719.664053]  [<ffffffff811992f7>] wb_do_writeback+0x257/0x260
Apr 23 11:25:45 use0453rtk kernel: [  719.664058]  [<ffffffff81062e0a>] ? del_timer_sync+0x3a/0x60
Apr 23 11:25:45 use0453rtk kernel: [  719.664063]  [<ffffffff8119938c>] bdi_writeback_thread+0x8c/0x2c0
Apr 23 11:25:45 use0453rtk kernel: [  719.664067]  [<ffffffff81199300>] ? wb_do_writeback+0x260/0x260
Apr 23 11:25:45 use0453rtk kernel: [  719.664071]  [<ffffffff810755ae>] kthread+0x9e/0xb0
Apr 23 11:25:45 use0453rtk kernel: [  719.664076]  [<ffffffff816540e4>] kernel_thread_helper+0x4/0x10
Apr 23 11:25:45 use0453rtk kernel: [  719.664080]  [<ffffffff81075510>] ? flush_kthread_worker+0xc0/0xc0
Apr 23 11:25:45 use0453rtk kernel: [  719.664084]  [<ffffffff816540e0>] ? gs_change+0x13/0x13

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-23 12:09 Still seeing hangs in xlog_grant_log_space Juerg Haefliger
@ 2012-04-23 14:38 ` Dave Chinner
  2012-04-23 15:33   ` Juerg Haefliger
  2012-05-24 20:18 ` Peter Watkins
  1 sibling, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-04-23 14:38 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Mon, Apr 23, 2012 at 02:09:53PM +0200, Juerg Haefliger wrote:
> Hi,
> 
> I have a test system that I'm using to try to force an XFS filesystem
> hang since we're encountering that problem sporadically in production
> running a 2.6.38-8 Natty kernel. The original idea was to use this
> system to find the patches that fix the issue but I've tried a whole
> bunch of kernels and they all hang eventually (anywhere from 5 to 45
> mins) with the stack trace shown below.

If you kill the workload, does the file system recover normally?

> Only an emergency flush will
> bring the filesystem back. I tried kernels 3.0.29, 3.1.10, 3.2.15,
> 3.3.2. From reading through the mail archives, I get the impression
> that this should be fixed in 3.1.

What you see is not necessarily a hang. It may just be that you've
caused your IO subsystem to have so much IO queued up it's completely
overwhelmed. How much RAM do you have in the machine?

> What makes the test system special is:
> 1) The test partition uses 1024 block size and 576b log size.

So you've made the log as physically small as possible on a tiny
(9GB) filesystem. Why?

> 2) The RAID controller cache is disabled.

And you've made the storage subsystem as slow as possible. What type
of RAID are you using, how many disks in the RAID volume, which type
of disks, etc?

> I can't seem to hit the problem without the above modifications.

How on earth did you come up with this configuration?

> For the IO workload I pre-create 8000 files with random content and
> sizes between 1k and 128k on the test partition. Then I run a tool
> that spawns a bunch of threads which just copy these files to a
> different directory on the same partition.

So, your workload also has a significant amount of parallelism and
concurrency on a filesystem with only 4 AGs?

> At the same time there are
> other threads that rename, remove and overwrite random files in the
> destination directory keeping the file count at around 500.

And you've added as much concurrent metadata modification as
possible, too, which makes me wonder.....

> Let me know what other information I can provide to pin this down.

.... exactly what are you trying to achieve with this test?  From my
point of view, you're doing something completely and utterly insane.
Your filesystem config and workload are so far outside normal
configurations and workloads that I'm not surprised you're seeing
some kind of problem.....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-23 14:38 ` Dave Chinner
@ 2012-04-23 15:33   ` Juerg Haefliger
  2012-04-23 23:58     ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-23 15:33 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

Hi Dave,


On Mon, Apr 23, 2012 at 4:38 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Mon, Apr 23, 2012 at 02:09:53PM +0200, Juerg Haefliger wrote:
>> Hi,
>>
>> I have a test system that I'm using to try to force an XFS filesystem
>> hang since we're encountering that problem sporadically in production
>> running a 2.6.38-8 Natty kernel. The original idea was to use this
>> system to find the patches that fix the issue but I've tried a whole
>> bunch of kernels and they all hang eventually (anywhere from 5 to 45
>> mins) with the stack trace shown below.
>
> If you kill the workload, does the file system recover normally?

The workload can't be killed.


>> Only an emergency flush will
>> bring the filesystem back. I tried kernels 3.0.29, 3.1.10, 3.2.15,
>> 3.3.2. From reading through the mail archives, I get the impression
>> that this should be fixed in 3.1.
>
> What you see is not necessarily a hang. It may just be that you've
> caused your IO subsystem to have so much IO queued up it's completely
> overwhelmed. How much RAM do you have in the machine?

When it hangs, there are zero IOs going to the disk. The machine has
100GB of RAM.


>> What makes the test system special is:
>> 1) The test partition uses 1024 block size and 576b log size.
>
> So you've made the log as physically small as possible on a tiny
> (9GB) filesystem. Why?

:-) Because that breaks it. Somebody on the list mentioned that he
experienced hangs with that configuration, so I gave it a shot.


>> 2) The RAID controller cache is disabled.
>
> And you've made the storage subsystem as slow as possible. What type
> of RAID are you using, how many disks in the RAID volume, which type
> of disks, etc?

Four 2TB 7.2K RPM SAS 6Gb/s disks in a RAID10 configuration.


>> I can't seem to hit the problem without the above modifications.
>
> How on earth did you come up with this configuration?

Just plain ol' luck. I was looking for a configuration that would
allow me to reproduce the hangs and I accidentally picked a machine
with a faulty controller battery which disabled the cache.


>> For the IO workload I pre-create 8000 files with random content and
>> sizes between 1k and 128k on the test partition. Then I run a tool
>> that spawns a bunch of threads which just copy these files to a
>> different directory on the same partition.
>
> So, your workload also has a significant amount of parallelism and
> concurrency on a filesystem with only 4 AGs?

Yes. Excuse my ignorance but what are AGs?


>> At the same time there are
>> other threads that rename, remove and overwrite random files in the
>> destination directory keeping the file count at around 500.
>
> And you've added as much concurrent metadata modification as
> possible, too, which makes me wonder.....
>
>> Let me know what other information I can provide to pin this down.
>
> .... exactly what are you trying to achieve with this test?  From my
> point of view, you're doing something completely and utterly insane.
> Your filesystem config and workload are so far outside normal
> configurations and workloads that I'm not surprised you're seeing
> some kind of problem.....

No objection from my side. It's a silly configuration, but it's the
only one I've found that lets me reproduce a hang at will. Here's the
deal: we see sporadic hangs in xlog_grant_log_space on production
machines. I can't just roll out a new kernel to 1000+ production
machines, impacting who knows how many customers, and cross my
fingers hoping it fixes the problem. I need to verify that the new
kernel indeed behaves better. I was hoping to use the above setup to
test a patched kernel, but now all kernels up to the latest stable
one hang sooner or later. I agree that I should expect problems with
this setup, but the worst I would expect is horrible performance,
certainly not a filesystem hang. I'm more than open to any
suggestions for doing the verification differently.

Thanks, I sure appreciate the help.

...Juerg


> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-23 15:33   ` Juerg Haefliger
@ 2012-04-23 23:58     ` Dave Chinner
  2012-04-24  8:55       ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-04-23 23:58 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Mon, Apr 23, 2012 at 05:33:40PM +0200, Juerg Haefliger wrote:
> Hi Dave,
> 
> 
> On Mon, Apr 23, 2012 at 4:38 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Mon, Apr 23, 2012 at 02:09:53PM +0200, Juerg Haefliger wrote:
> >> Hi,
> >>
> >> I have a test system that I'm using to try to force an XFS filesystem
> >> hang since we're encountering that problem sporadically in production
> >> running a 2.6.38-8 Natty kernel. The original idea was to use this
> >> system to find the patches that fix the issue but I've tried a whole
> >> bunch of kernels and they all hang eventually (anywhere from 5 to 45
> >> mins) with the stack trace shown below.
> >
> > If you kill the workload, does the file system recover normally?
> 
> The workload can't be killed.

OK.

> >> Only an emergency flush will
> >> bring the filesystem back. I tried kernels 3.0.29, 3.1.10, 3.2.15,
> >> 3.3.2. From reading through the mail archives, I get the impression
> >> that this should be fixed in 3.1.
> >
> > What you see is not necessarily a hang. It may just be that you've
> > caused your IO subsystem to have so much IO queued up it's completely
> > overwhelmed. How much RAM do you have in the machine?
> 
> When it hangs, there are zero IOs going to the disk. The machine has
> 100GB of RAM.

Can you get an event trace across the period where the hang occurs?

....

> >> I can't seem to hit the problem without the above modifications.
> >
> > How on earth did you come up with this configuration?
> 
> Just plain ol' luck. I was looking for a configuration that would
> allow me to reproduce the hangs and I accidentally picked a machine
> with a faulty controller battery which disabled the cache.

Wonderful.

> >> For the IO workload I pre-create 8000 files with random content and
> >> sizes between 1k and 128k on the test partition. Then I run a tool
> >> that spawns a bunch of threads which just copy these files to a
> >> different directory on the same partition.
> >
> > So, your workload also has a significant amount of parallelism and
> > concurrency on a filesystem with only 4 AGs?
> 
> Yes. Excuse my ignorance but what are AGs?

Allocation groups.

> >> At the same time there are
> >> other threads that rename, remove and overwrite random files in the
> >> destination directory keeping the file count at around 500.
> >
> > And you've added as much concurrent metadata modification as
> > possible, too, which makes me wonder.....
> >
> >> Let me know what other information I can provide to pin this down.
> >
> > .... exactly what are you trying to achieve with this test?  From my
> > point of view, you're doing something completely and utterly insane.
> > Your filesystem config and workload are so far outside normal
> > configurations and workloads that I'm not surprised you're seeing
> > some kind of problem.....
> 
> No objection from my side. It's a silly configuration but it's the
> only one I've found that lets me reproduce a hang at will.

Ok, that's fair enough - it's handy to tell us that up front,
though.  ;)

Alright, then I need all the usual information. I suspect an event
trace is the only way I'm going to see what is happening. I just
updated the FAQ entry, so all the necessary info for gathering a
trace should be there now.

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
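Something along these lines is usually enough to capture the window (untested here; needs root, trace-cmd, and a kernel with tracing enabled; adjust the event list and duration per the FAQ):

```shell
# Record all events in the xfs tracepoint subsystem while the workload
# runs into the hang; the 600s window is arbitrary.
trace-cmd record -e xfs sleep 600
# Convert the resulting trace.dat into a text report to post:
trace-cmd report > xfs-trace-report.txt
```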

-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-23 23:58     ` Dave Chinner
@ 2012-04-24  8:55       ` Juerg Haefliger
  2012-04-24 12:07         ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-24  8:55 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Tue, Apr 24, 2012 at 1:58 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Mon, Apr 23, 2012 at 05:33:40PM +0200, Juerg Haefliger wrote:
>> Hi Dave,
>>
>>
>> On Mon, Apr 23, 2012 at 4:38 PM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Mon, Apr 23, 2012 at 02:09:53PM +0200, Juerg Haefliger wrote:
>> >> Hi,
>> >>
>> >> I have a test system that I'm using to try to force an XFS filesystem
>> >> hang since we're encountering that problem sporadically in production
>> >> running a 2.6.38-8 Natty kernel. The original idea was to use this
>> >> system to find the patches that fix the issue but I've tried a whole
>> >> bunch of kernels and they all hang eventually (anywhere from 5 to 45
>> >> mins) with the stack trace shown below.
>> >
>> > If you kill the workload, does the file system recover normally?
>>
>> The workload can't be killed.
>
> OK.
>
>> >> Only an emergency flush will
>> >> bring the filesystem back. I tried kernels 3.0.29, 3.1.10, 3.2.15,
>> >> 3.3.2. From reading through the mail archives, I get the impression
>> >> that this should be fixed in 3.1.
>> >
>> > What you see is not necessarily a hang. It may just be that you've
>> > caused your IO subsystem to have so much IO queued up it's completely
>> > overwhelmed. How much RAM do you have in the machine?
>>
>> When it hangs, there are zero IOs going to the disk. The machine has
>> 100GB of RAM.
>
> Can you get an event trace across the period where the hang occurs?
>
> ....
>
>> >> I can't seem to hit the problem without the above modifications.
>> >
>> > How on earth did you come up with this configuration?
>>
>> Just plain ol' luck. I was looking for a configuration that would
>> allow me to reproduce the hangs and I accidentally picked a machine
>> with a faulty controller battery which disabled the cache.
>
> Wonderful.
>
>> >> For the IO workload I pre-create 8000 files with random content and
>> >> sizes between 1k and 128k on the test partition. Then I run a tool
>> >> that spawns a bunch of threads which just copy these files to a
>> >> different directory on the same partition.
>> >
>> > So, your workload also has a significant amount of parallelism and
>> > concurrency on a filesystem with only 4 AGs?
>>
>> Yes. Excuse my ignorance but what are AGs?
>
> Allocation groups.
>
>> >> At the same time there are
>> >> other threads that rename, remove and overwrite random files in the
>> >> destination directory keeping the file count at around 500.
>> >
>> > And you've added as much concurrent metadata modification as
>> > possible, too, which makes me wonder.....
>> >
>> >> Let me know what other information I can provide to pin this down.
>> >
>> > .... exactly what are you trying to achieve with this test?  From my
>> > point of view, you're doing something completely and utterly insane.
>> > Your filesystem config and workload are so far outside normal
>> > configurations and workloads that I'm not surprised you're seeing
>> > some kind of problem.....
>>
>> No objection from my side. It's a silly configuration but it's the
>> only one I've found that lets me reproduce a hang at will.
>
> Ok, that's fair enough - it's handy to tell us that up front,
> though.  ;)

Ah, sorry for not being clear enough. I thought my intentions could be
deduced from the information I provided. :-)


> Alright, then I need all the usual information. I suspect an event
> trace is the only way I'm going to see what is happening. I just
> updated the FAQ entry, so all the necessary info for gathering a
> trace should be there now.
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

Very good. Will do. What kernel do you want me to run? I would prefer
our current production kernel (2.6.38-8-server) but I understand if
you want something newer.

...Juerg


> --
> Dave Chinner
> david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-24  8:55       ` Juerg Haefliger
@ 2012-04-24 12:07         ` Dave Chinner
  2012-04-24 18:26           ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-04-24 12:07 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
> On Tue, Apr 24, 2012 at 1:58 AM, Dave Chinner <david@fromorbit.com> wrote:
> > On Mon, Apr 23, 2012 at 05:33:40PM +0200, Juerg Haefliger wrote:
> >> Hi Dave,
> >>
> >>
> >> On Mon, Apr 23, 2012 at 4:38 PM, Dave Chinner <david@fromorbit.com> wrote:
> >> > On Mon, Apr 23, 2012 at 02:09:53PM +0200, Juerg Haefliger wrote:
> >> >> Hi,
> >> >>
> >> >> I have a test system that I'm using to try to force an XFS filesystem
> >> >> hang since we're encountering that problem sporadically in production
> >> >> running a 2.6.38-8 Natty kernel. The original idea was to use this
> >> >> system to find the patches that fix the issue but I've tried a whole
> >> >> bunch of kernels and they all hang eventually (anywhere from 5 to 45
> >> >> mins) with the stack trace shown below.
> >> >
> >> > If you kill the workload, does the file system recover normally?
> >>
> >> The workload can't be killed.
> >
> > OK.
> >
> >> >> Only an emergency flush will
> >> >> bring the filesystem back. I tried kernels 3.0.29, 3.1.10, 3.2.15,
> >> >> 3.3.2. From reading through the mail archives, I get the impression
> >> >> that this should be fixed in 3.1.
> >> >
> >> > What you see is not necessarily a hang. It may just be that you've
> >> > caused your IO subsystem to have so much IO queued up it's completely
> >> > overwhelmed. How much RAM do you have in the machine?
> >>
> >> When it hangs, there are zero IOs going to the disk. The machine has
> >> 100GB of RAM.
> >
> > Can you get an event trace across the period where the hang occurs?
> >
> > ....
> >
> >> >> I can't seem to hit the problem without the above modifications.
> >> >
> >> > How on earth did you come up with this configuration?
> >>
> >> Just plain ol' luck. I was looking for a configuration that would
> >> allow me to reproduce the hangs and I accidentally picked a machine
> >> with a faulty controller battery which disabled the cache.
> >
> > Wonderful.
> >
> >> >> For the IO workload I pre-create 8000 files with random content and
> >> >> sizes between 1k and 128k on the test partition. Then I run a tool
> >> >> that spawns a bunch of threads which just copy these files to a
> >> >> different directory on the same partition.
> >> >
> >> > So, your workload also has a significant amount of parallelism and
> >> > concurrency on a filesystem with only 4 AGs?
> >>
> >> Yes. Excuse my ignorance but what are AGs?
> >
> > Allocation groups.
> >
> >> >> At the same time there are
> >> >> other threads that rename, remove and overwrite random files in the
> >> >> destination directory keeping the file count at around 500.
> >> >
> >> > And you've added as much concurrent metadata modification as
> >> > possible, too, which makes me wonder.....
> >> >
> >> >> Let me know what other information I can provide to pin this down.
> >> >
> >> > .... exactly what are you trying to achieve with this test?  From my
> >> > point of view, you're doing something completely and utterly insane.
> >> > Your filesystem config and workload are so far outside normal
> >> > configurations and workloads that I'm not surprised you're seeing
> >> > some kind of problem.....
> >>
> >> No objection from my side. It's a silly configuration but it's the
> >> only one I've found that lets me reproduce a hang at will.
> >
> > Ok, that's fair enough - it's handy to tell us that up front,
> > though.  ;)
> 
> Ah sorry for not being clear enough. I thought my intentions could be
> deduced from the information that I provided :-)
> 
> 
> > Alright, then I need all the usual information. I suspect an event
> > trace is the only way I'm going to see what is happening. I just
> > updated the FAQ entry, so all the necessary info for gathering a
> > trace should be there now.
> >
> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> 
> Very good. Will do. What kernel do you want me to run? I would prefer
> our current production kernel (2.6.38-8-server) but I understand if
> you want something newer.

If you can, reproduce it on a current kernel: 3.4-rc4 if possible;
failing that, a 3.3.x stable kernel would be best. 2.6.38 is simply
too old to be useful for debugging these sorts of problems...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-24 12:07         ` Dave Chinner
@ 2012-04-24 18:26           ` Juerg Haefliger
  2012-04-25 22:38             ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-24 18:26 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Tue, Apr 24, 2012 at 2:07 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
>> On Tue, Apr 24, 2012 at 1:58 AM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Mon, Apr 23, 2012 at 05:33:40PM +0200, Juerg Haefliger wrote:
>> >> Hi Dave,
>> >>
>> >>
>> >> On Mon, Apr 23, 2012 at 4:38 PM, Dave Chinner <david@fromorbit.com> wrote:
>> >> > On Mon, Apr 23, 2012 at 02:09:53PM +0200, Juerg Haefliger wrote:
>> >> >> Hi,
>> >> >>
>> >> >> I have a test system that I'm using to try to force an XFS filesystem
>> >> >> hang since we're encountering that problem sporadically in production
>> >> >> running a 2.6.38-8 Natty kernel. The original idea was to use this
>> >> >> system to find the patches that fix the issue but I've tried a whole
>> >> >> bunch of kernels and they all hang eventually (anywhere from 5 to 45
>> >> >> mins) with the stack trace shown below.
>> >> >
>> >> > If you kill the workload, does the file system recover normally?
>> >>
>> >> The workload can't be killed.
>> >
>> > OK.
>> >
>> >> >> Only an emergency flush will
>> >> >> bring the filesystem back. I tried kernels 3.0.29, 3.1.10, 3.2.15,
>> >> >> 3.3.2. From reading through the mail archives, I get the impression
>> >> >> that this should be fixed in 3.1.
>> >> >
>> >> > What you see is not necessarily a hang. It may just be that you've
>> >> > caused your IO subsystem to have so much IO queued up it's completely
>> >> > overwhelmed. How much RAM do you have in the machine?
>> >>
>> >> When it hangs, there are zero IOs going to the disk. The machine has
>> >> 100GB of RAM.
>> >
>> > Can you get an event trace across the period where the hang occurs?
>> >
>> > ....
>> >
>> >> >> I can't seem to hit the problem without the above modifications.
>> >> >
>> >> > How on earth did you come up with this configuration?
>> >>
>> >> Just plain ol' luck. I was looking for a configuration that would
>> >> allow me to reproduce the hangs and I accidentally picked a machine
>> >> with a faulty controller battery which disabled the cache.
>> >
>> > Wonderful.
>> >
>> >> >> For the IO workload I pre-create 8000 files with random content and
>> >> >> sizes between 1k and 128k on the test partition. Then I run a tool
>> >> >> that spawns a bunch of threads which just copy these files to a
>> >> >> different directory on the same partition.
>> >> >
>> >> > So, your workload also has a significant amount parallelism and
>> >> > concurrency on a filesytsem with only 4 AGs?
>> >>
>> >> Yes. Excuse my ignorance but what are AGs?
>> >
>> > Allocation groups.
>> >
>> >> >> At the same time there are
>> >> >> other threads that rename, remove and overwrite random files in the
>> >> >> destination directory keeping the file count at around 500.
>> >> >
>> >> > And you've added as much concurrent metadata modification as
>> >> > possible, too, which makes me wonder.....
>> >> >
>> >> >> Let me know what other information I can provide to pin this down.
>> >> >
>> >> > .... exactly what are you trying to acheive with this test?  From my
>> >> > point of view, you're doing something completely and utterly insane.
>> >> > You filesystem config and workload is so far outside normal
>> >> > configurations and workloads that I'm not surprised you're seeing
>> >> > some kind of problem.....
>> >>
>> >> No objection from my side. It's a silly configuration but it's the
>> >> only one I've found that lets me reproduce a hang at will.
>> >
>> > Ok, that's fair enough - it's handy to tell us that up front,
>> > though.  ;)
>>
>> Ah sorry for not being clear enough. I thought my intentions could be
>> deduced from the information that I provided :-)
>>
>>
>> > Alright, then I need all the usual information. I suspect an event
>> > trace is the only way I'm going to see what is happening. I just
>> > updated the FAQ entry, so all the necessary info for gathering a
>> > trace should be there now.
>> >
>> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>>
>> Very good. Will do. What kernel do you want me to run? I would prefer
>> our current production kernel (2.6.38-8-server) but I understand if
>> you want something newer.
>
> If you can reproduce it on a current kernel - 3.4-rc4 if possible, if
> not a 3.3.x stable kernel would be best. 2.6.38 is simply too old to
> be useful for debugging these sorts of problems...

OK, I reproduced a hang running 3.4-rc4. The data is here but it's a
whopping 2GB (yes it's compressed):
https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24.tar

...Juerg


> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-24 18:26           ` Juerg Haefliger
@ 2012-04-25 22:38             ` Dave Chinner
  2012-04-26 12:37               ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-04-25 22:38 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Tue, Apr 24, 2012 at 08:26:04PM +0200, Juerg Haefliger wrote:
> On Tue, Apr 24, 2012 at 2:07 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
> >> > Alright, then I need all the usual information. I suspect an event
> >> > trace is the only way I'm going to see what is happening. I just
> >> > updated the FAQ entry, so all the necessary info for gathering a
> >> > trace should be there now.
> >> >
> >> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> >>
> >> Very good. Will do. What kernel do you want me to run? I would prefer
> >> our current production kernel (2.6.38-8-server) but I understand if
> >> you want something newer.
> >
> > If you can reproduce it on a current kernel - 3.4-rc4 if possible, if
> > not a 3.3.x stable kernel would be best. 2.6.38 is simply too old to
> > be useful for debugging these sorts of problems...
> 
> OK, I reproduced a hang running 3.4-rc4. The data is here but it's a
> whopping 2GB (yes it's compressed):
> https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24.tar

That's a bit big to be useful, and far bigger than I'm willing to
download given that I'm on the end of a wet piece of string, not a
big fat intarwebby pipe. I'm assuming it is the event trace
that is causing it to blow out? If so, just the 30-60s either side of
the hang first showing up is probably necessary, and that should cut
the size down greatly....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-25 22:38             ` Dave Chinner
@ 2012-04-26 12:37               ` Juerg Haefliger
  2012-04-26 22:44                 ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-26 12:37 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Thu, Apr 26, 2012 at 12:38 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Tue, Apr 24, 2012 at 08:26:04PM +0200, Juerg Haefliger wrote:
>> On Tue, Apr 24, 2012 at 2:07 PM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
>> >> > Alright, then I need all the usual information. I suspect an event
>> >> > trace is the only way I'm going to see what is happening. I just
>> >> > updated the FAQ entry, so all the necessary info for gathering a
>> >> > trace should be there now.
>> >> >
>> >> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>> >>
>> >> Very good. Will do. What kernel do you want me to run? I would prefer
>> >> our current production kernel (2.6.38-8-server) but I understand if
>> >> you want something newer.
>> >
>> > If you can reproduce it on a current kernel - 3.4-rc4 if possible, if
>> > not a 3.3.x stable kernel would be best. 2.6.38 is simply too old to
>> > be useful for debugging these sorts of problems...
>>
>> OK, I reproduced a hang running 3.4-rc4. The data is here but it's a
>> whopping 2GB (yes it's compressed):
>> https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24.tar
>
> That's a bit big to be useful, and far bigger than I'm willing to
> download given that I'm on the end of a wet piece of string, not a
> big fat intarwebby pipe.

Fair enough.


> I'm assuming it is the event trace
> that is causing it to blow out? If so, just the 30-60s either side of
> the hang first showing up is probaby necessary, and that should cut
> the size down greatly....

Can I shorten the existing trace.dat? I stopped the trace
automatically 10 secs after the xlog_... trace showed up in syslog,
so effectively some 130+ secs after the hang occurred.

...Juerg


> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-26 12:37               ` Juerg Haefliger
@ 2012-04-26 22:44                 ` Dave Chinner
  2012-04-26 23:00                   ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-04-26 22:44 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Thu, Apr 26, 2012 at 02:37:50PM +0200, Juerg Haefliger wrote:
> On Thu, Apr 26, 2012 at 12:38 AM, Dave Chinner <david@fromorbit.com> wrote:
> > On Tue, Apr 24, 2012 at 08:26:04PM +0200, Juerg Haefliger wrote:
> >> On Tue, Apr 24, 2012 at 2:07 PM, Dave Chinner <david@fromorbit.com> wrote:
> >> > On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
> >> >> > Alright, then I need all the usual information. I suspect an event
> >> >> > trace is the only way I'm going to see what is happening. I just
> >> >> > updated the FAQ entry, so all the necessary info for gathering a
> >> >> > trace should be there now.
> >> >> >
> >> >> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> >> >>
> >> >> Very good. Will do. What kernel do you want me to run? I would prefer
> >> >> our current production kernel (2.6.38-8-server) but I understand if
> >> >> you want something newer.
> >> >
> >> > If you can reproduce it on a current kernel - 3.4-rc4 if possible, if
> >> > not a 3.3.x stable kernel would be best. 2.6.38 is simply too old to
> >> > be useful for debugging these sorts of problems...
> >>
> >> OK, I reproduced a hang running 3.4-rc4. The data is here but it's a
> >> whopping 2GB (yes it's compressed):
> >> https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24.tar
> >
> > That's a bit big to be useful, and far bigger than I'm willing to
> > download given that I'm on the end of a wet piece of string, not a
> > big fat intarwebby pipe.
> 
> Fair enough.
> 
> 
> > I'm assuming it is the event trace
> > that is causing it to blow out? If so, just the 30-60s either side of
> > the hang first showing up is probaby necessary, and that should cut
> > the size down greatly....
> 
> Can I shorten the existing trace.dat?

No idea, but that's likely the problem - I don't want the binary
trace.dat file. I want the text output of the report command
generated from the binary trace.dat file...

> I stopped the trace
> automatically 10 secs after the the xlog_... trace showed up in syslog
> so effectively some 130+ secs after the hang occured.

Extract the text report from it, and compress that. For example, a
trace i've just done:

$ ~/trace-cmd/trace-cmd report > trace.out
$ ls -ltr |tail -4
-rw-r--r-- 1 root root  21430272 Apr 27 08:36 trace.dat
-rw-r--r-- 1 root root  10039296 Apr 27 08:36 trace.dat.cpu1
-rw-r--r-- 1 root root  10035200 Apr 27 08:36 trace.dat.cpu0
-rw-r--r-- 1 dave dave  48255670 Apr 27 08:37 trace.out
$ gzip trace.out
$ ls -ltr |tail -4
-rw-r--r-- 1 root root  21430272 Apr 27 08:36 trace.dat
-rw-r--r-- 1 root root  10039296 Apr 27 08:36 trace.dat.cpu1
-rw-r--r-- 1 root root  10035200 Apr 27 08:36 trace.dat.cpu0
-rw-r--r-- 1 dave dave   2500733 Apr 27 08:37 trace.out.gz

That's about 40MB of binary trace data, which generates a 48MB text
output file, which compresses really well - down to 2.5MB in this case.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-26 22:44                 ` Dave Chinner
@ 2012-04-26 23:00                   ` Juerg Haefliger
  2012-04-26 23:07                     ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-26 23:00 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Fri, Apr 27, 2012 at 12:44 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Thu, Apr 26, 2012 at 02:37:50PM +0200, Juerg Haefliger wrote:
>> On Thu, Apr 26, 2012 at 12:38 AM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Tue, Apr 24, 2012 at 08:26:04PM +0200, Juerg Haefliger wrote:
>> >> On Tue, Apr 24, 2012 at 2:07 PM, Dave Chinner <david@fromorbit.com> wrote:
>> >> > On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
>> >> >> > Alright, then I need all the usual information. I suspect an event
>> >> >> > trace is the only way I'm going to see what is happening. I just
>> >> >> > updated the FAQ entry, so all the necessary info for gathering a
>> >> >> > trace should be there now.
>> >> >> >
>> >> >> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>> >> >>
>> >> >> Very good. Will do. What kernel do you want me to run? I would prefer
>> >> >> our current production kernel (2.6.38-8-server) but I understand if
>> >> >> you want something newer.
>> >> >
>> >> > If you can reproduce it on a current kernel - 3.4-rc4 if possible, if
>> >> > not a 3.3.x stable kernel would be best. 2.6.38 is simply too old to
>> >> > be useful for debugging these sorts of problems...
>> >>
>> >> OK, I reproduced a hang running 3.4-rc4. The data is here but it's a
>> >> whopping 2GB (yes it's compressed):
>> >> https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24.tar
>> >
>> > That's a bit big to be useful, and far bigger than I'm willing to
>> > download given that I'm on the end of a wet piece of string, not a
>> > big fat intarwebby pipe.
>>
>> Fair enough.
>>
>>
>> > I'm assuming it is the event trace
>> > that is causing it to blow out? If so, just the 30-60s either side of
>> > the hang first showing up is probaby necessary, and that should cut
>> > the size down greatly....
>>
>> Can I shorten the existing trace.dat?
>
> No idea, but that's likely the problem - I don't want the binary
> trace.dat file. I want the text output of the report command
> generated from the binary trace.dat file...

Well yes. I did RTFM :-) trace.dat is 15GB.


>> I stopped the trace
>> automatically 10 secs after the the xlog_... trace showed up in syslog
>> so effectively some 130+ secs after the hang occured.
>
> Extract the text report from it, and compress that. For example, a
> trace i've just done:
>
> $ ~/trace-cmd/trace-cmd report > trace.out
> $ ls -ltr |tail -4
> -rw-r--r-- 1 root root  21430272 Apr 27 08:36 trace.dat
> -rw-r--r-- 1 root root  10039296 Apr 27 08:36 trace.dat.cpu1
> -rw-r--r-- 1 root root  10035200 Apr 27 08:36 trace.dat.cpu0
> -rw-r--r-- 1 dave dave  48255670 Apr 27 08:37 trace.out
> $ gzip trace.out
> $ ls -ltr |tail -4
> -rw-r--r-- 1 root root  21430272 Apr 27 08:36 trace.dat
> -rw-r--r-- 1 root root  10039296 Apr 27 08:36 trace.dat.cpu1
> -rw-r--r-- 1 root root  10035200 Apr 27 08:36 trace.dat.cpu0
> -rw-r--r-- 1 dave dave   2500733 Apr 27 08:37 trace.out.gz
>
> Has 200MB of binary trace data, which generates a 470MB text output
> file, which compresses really well - down to 2.5MB in this case.

Compressed trace_report.txt is 2GB.
Sorry, haven't had the time today to look into this. I'll cut the size
down somehow.

...Juerg


> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-26 23:00                   ` Juerg Haefliger
@ 2012-04-26 23:07                     ` Dave Chinner
  2012-04-27  9:04                       ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-04-26 23:07 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Fri, Apr 27, 2012 at 01:00:08AM +0200, Juerg Haefliger wrote:
> On Fri, Apr 27, 2012 at 12:44 AM, Dave Chinner <david@fromorbit.com> wrote:
> > On Thu, Apr 26, 2012 at 02:37:50PM +0200, Juerg Haefliger wrote:
> >> On Thu, Apr 26, 2012 at 12:38 AM, Dave Chinner <david@fromorbit.com> wrote:
> >> > On Tue, Apr 24, 2012 at 08:26:04PM +0200, Juerg Haefliger wrote:
> >> >> On Tue, Apr 24, 2012 at 2:07 PM, Dave Chinner <david@fromorbit.com> wrote:
> >> >> > On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
> >> >> >> > Alright, then I need all the usual information. I suspect an event
> >> >> >> > trace is the only way I'm going to see what is happening. I just
> >> >> >> > updated the FAQ entry, so all the necessary info for gathering a
> >> >> >> > trace should be there now.
> >> >> >> >
> >> >> >> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> >> >> >>
> >> >> >> Very good. Will do. What kernel do you want me to run? I would prefer
> >> >> >> our current production kernel (2.6.38-8-server) but I understand if
> >> >> >> you want something newer.
> >> >> >
> >> >> > If you can reproduce it on a current kernel - 3.4-rc4 if possible, if
> >> >> > not a 3.3.x stable kernel would be best. 2.6.38 is simply too old to
> >> >> > be useful for debugging these sorts of problems...
> >> >>
> >> >> OK, I reproduced a hang running 3.4-rc4. The data is here but it's a
> >> >> whopping 2GB (yes it's compressed):
> >> >> https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24.tar
> >> >
> >> > That's a bit big to be useful, and far bigger than I'm willing to
> >> > download given that I'm on the end of a wet piece of string, not a
> >> > big fat intarwebby pipe.
> >>
> >> Fair enough.
> >>
> >>
> >> > I'm assuming it is the event trace
> >> > that is causing it to blow out? If so, just the 30-60s either side of
> >> > the hang first showing up is probaby necessary, and that should cut
> >> > the size down greatly....
> >>
> >> Can I shorten the existing trace.dat?
> >
> > No idea, but that's likely the problem - I don't want the binary
> > trace.dat file. I want the text output of the report command
> > generated from the binary trace.dat file...
> 
> Well yes. I did RTFM :-) trace.dat is 15GB.

OK, that's a lot larger than I expected for a hung filesystem....

> >> I stopped the trace
> >> automatically 10 secs after the the xlog_... trace showed up in syslog
> >> so effectively some 130+ secs after the hang occured.

Can you look at the last timestamp in the report file, and trim off
anything from the start that is older than, say, 180s before that?
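One way to do that trim is with a short awk pass over the report; here is a
rough sketch run against a fabricated three-line report (real trace-cmd
report lines put the timestamp in roughly this position, but the field
number and file path are assumptions you may need to adjust):

```shell
# Fabricated report: timestamps at 10s, 400s and 500s
cat > /tmp/report.txt <<'EOF'
cp-100 [000]    10.000000: xfs_log_reserve: dev 252:2
cp-100 [000]   400.000000: xfs_log_reserve: dev 252:2
cp-100 [000]   500.000000: xfs_log_reserve: dev 252:2
EOF

# Keep only lines whose timestamp is within 180s of the last one seen
awk '{ ts[NR] = $3 + 0; line[NR] = $0; last = ts[NR] }
     END { for (i = 1; i <= NR; i++) if (ts[i] >= last - 180) print line[i] }' \
    /tmp/report.txt
```

With the sample above, only the 400s and 500s lines survive, since the 10s
line is older than last-timestamp minus 180s.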

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-26 23:07                     ` Dave Chinner
@ 2012-04-27  9:04                       ` Juerg Haefliger
  2012-04-27 11:09                         ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-27  9:04 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Fri, Apr 27, 2012 at 1:07 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Fri, Apr 27, 2012 at 01:00:08AM +0200, Juerg Haefliger wrote:
>> On Fri, Apr 27, 2012 at 12:44 AM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Thu, Apr 26, 2012 at 02:37:50PM +0200, Juerg Haefliger wrote:
>> >> On Thu, Apr 26, 2012 at 12:38 AM, Dave Chinner <david@fromorbit.com> wrote:
>> >> > On Tue, Apr 24, 2012 at 08:26:04PM +0200, Juerg Haefliger wrote:
>> >> >> On Tue, Apr 24, 2012 at 2:07 PM, Dave Chinner <david@fromorbit.com> wrote:
>> >> >> > On Tue, Apr 24, 2012 at 10:55:22AM +0200, Juerg Haefliger wrote:
>> >> >> >> > Alright, then I need all the usual information. I suspect an event
>> >> >> >> > trace is the only way I'm going to see what is happening. I just
>> >> >> >> > updated the FAQ entry, so all the necessary info for gathering a
>> >> >> >> > trace should be there now.
>> >> >> >> >
>> >> >> >> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>> >> >> >>
>> >> >> >> Very good. Will do. What kernel do you want me to run? I would prefer
>> >> >> >> our current production kernel (2.6.38-8-server) but I understand if
>> >> >> >> you want something newer.
>> >> >> >
>> >> >> > If you can reproduce it on a current kernel - 3.4-rc4 if possible, if
>> >> >> > not a 3.3.x stable kernel would be best. 2.6.38 is simply too old to
>> >> >> > be useful for debugging these sorts of problems...
>> >> >>
>> >> >> OK, I reproduced a hang running 3.4-rc4. The data is here but it's a
>> >> >> whopping 2GB (yes it's compressed):
>> >> >> https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24.tar
>> >> >
>> >> > That's a bit big to be useful, and far bigger than I'm willing to
>> >> > download given that I'm on the end of a wet piece of string, not a
>> >> > big fat intarwebby pipe.
>> >>
>> >> Fair enough.
>> >>
>> >>
>> >> > I'm assuming it is the event trace
>> >> > that is causing it to blow out? If so, just the 30-60s either side of
>> >> > the hang first showing up is probaby necessary, and that should cut
>> >> > the size down greatly....
>> >>
>> >> Can I shorten the existing trace.dat?
>> >
>> > No idea, but that's likely the problem - I don't want the binary
>> > trace.dat file. I want the text output of the report command
>> > generated from the binary trace.dat file...
>>
>> Well yes. I did RTFM :-) trace.dat is 15GB.
>
> OK, that's a lot larger than I expected for a hung filesystem....
>
>> >> I stopped the trace
>> >> automatically 10 secs after the the xlog_... trace showed up in syslog
>> >> so effectively some 130+ secs after the hang occured.
>
> Can you look at the last timestamp in the report file, and trim off
> anything from the start that is older than, say, 180s before that?

Cut the trace down to 180 secs, which brought the file size down to
93MB: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24-180secs.tgz

...Juerg


> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-27  9:04                       ` Juerg Haefliger
@ 2012-04-27 11:09                         ` Dave Chinner
  2012-04-27 13:07                           ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-04-27 11:09 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Fri, Apr 27, 2012 at 11:04:33AM +0200, Juerg Haefliger wrote:
> On Fri, Apr 27, 2012 at 1:07 AM, Dave Chinner <david@fromorbit.com> wrote:
> > On Fri, Apr 27, 2012 at 01:00:08AM +0200, Juerg Haefliger wrote:
> >> On Fri, Apr 27, 2012 at 12:44 AM, Dave Chinner <david@fromorbit.com> wrote:
> >> > On Thu, Apr 26, 2012 at 02:37:50PM +0200, Juerg Haefliger wrote:
> >> >> > I'm assuming it is the event trace
> >> >> > that is causing it to blow out? If so, just the 30-60s either side of
> >> >> > the hang first showing up is probaby necessary, and that should cut
> >> >> > the size down greatly....
> >> >>
> >> >> Can I shorten the existing trace.dat?

Looks like you can - the "trace-cmd split" option.

> >> >
> >> > No idea, but that's likely the problem - I don't want the binary
> >> > trace.dat file. I want the text output of the report command
> >> > generated from the binary trace.dat file...
> >>
> >> Well yes. I did RTFM :-) trace.dat is 15GB.
> >
> > OK, that's a lot larger than I expected for a hung filesystem....
> >
> >> >> I stopped the trace
> >> >> automatically 10 secs after the the xlog_... trace showed up in syslog
> >> >> so effectively some 130+ secs after the hang occured.
> >
> > Can you look at the last timestamp in the report file, and trim off
> > anything from the start that is older than, say, 180s before that?
> 
> Cut the trace down to 180 secs which brought the filesize down to
> 93MB: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24-180secs.tgz

I see the problem - the trace.dat file is hosted on an XFS
filesystem, so all the writes to the trace.dat file are causing
events to be logged, which causes writes to the trace.dat file....

Taking out all the trace-cmd events:

$ grep -v trace-cmd trace_report_180secs.txt > t.t
$ ls -l trace_report_180secs.txt t.t
-rw-r--r-- 1 dave dave 2136941443 Apr 27 18:52 trace_report_180secs.txt
-rw-r--r-- 1 dave dave    3280629 Apr 27 20:12 t.t

That brings the event trace for that 180s down from 2.1GB to 3.2MB,
which is much more like what I'd expect from a hung filesystem....

Ok, so it looks like there's lots of noise from other XFS
filesystems too, and from the info.log, the xfs-hang filesystem is on
device 252:2 (/dev/vg00/tmp):

$ grep "dev 252:2" t.t
$

And there are no events from that filesystem in the log at all. Ok,
so what you need to do is find out if there are *any* events from
that device in the larger log file you have.....
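The two filtering greps above (dropping the tracer's own events, then
keeping only the 252:2 device) can be illustrated against a made-up
two-line report; the file path and line contents here are fabricated
purely for the demonstration:

```shell
# Two made-up report lines: one from the tracer itself, one from the workload
cat > /tmp/fake_report.txt <<'EOF'
trace-cmd-1234  [000] 100.000001: xfs_log_reserve: dev 252:1
copy-files-4637 [001] 100.000002: xfs_log_reserve: dev 252:2
EOF

# Drop the tracer's own events, then keep only events from device 252:2
grep -v trace-cmd /tmp/fake_report.txt | grep 'dev 252:2'
```

Only the copy-files line survives both filters.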

If not, then it is time for advanced trace-cmd mojo. We can tell it
to filter events only from the PID of the test script and all its
children using:

# trace-cmd record -e xfs\* -P <parent-pid> -c

But better would be to use the device number of the relevant
filesystem to filter the events. The device is 252:2, which in
kernel terms is:

	dev = (major << 20) | minor
	    = 0xfc00002

So you should be able to get just the xfs-hang events via:

# trace-cmd record -e xfs\* -d 'dev == 0xfc00002'

and as long as you don't host log files on /xfs-hang, it'll only
record the workload running on the xfs-hang filesystem.
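The packing above is the kernel's internal dev_t layout (20 minor bits),
and the arithmetic is easy to sanity-check from the shell:

```shell
# Kernel-internal dev_t encoding: (major << 20) | minor
major=252   # from the 252:2 device number of /dev/vg00/tmp
minor=2
printf '0x%x\n' $(( (major << 20) | minor ))   # prints 0xfc00002
```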

BTW, how often do you see this sort of thing:

[  220.571551] ------------[ cut here ]------------
[  220.571562] WARNING: at fs/inode.c:280 drop_nlink+0x49/0x50()
[  220.571564] Hardware name: SE2170s
[  220.571565] Modules linked in: ipmi_devintf ipmi_si ipmi_msghandler ip6table_filter ip6_tables ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle xt_tcpudp iptable_filter ip_tables x_tables bridge 8021q garp stp coretemp ghash_clmulni_intel aesni_intel cryptd usbhid i7core_edac lp edac_core hid aes_x86_64 parport serio_raw microcode xfs igb hpsa dca
[  220.571594] Pid: 4637, comm: copy-files Not tainted 3.4.0-rc4 #2
[  220.571595] Call Trace:
[  220.571603]  [<ffffffff810508cf>] warn_slowpath_common+0x7f/0xc0
[  220.571605]  [<ffffffff8105092a>] warn_slowpath_null+0x1a/0x20
[  220.571607]  [<ffffffff81193319>] drop_nlink+0x49/0x50
[  220.571628]  [<ffffffffa00701ef>] xfs_droplink+0x2f/0x60 [xfs]
[  220.571640]  [<ffffffffa0072d58>] xfs_remove+0x2e8/0x3c0 [xfs]
[  220.571645]  [<ffffffff8163aeee>] ? _raw_spin_lock+0xe/0x20
[  220.571656]  [<ffffffffa0068248>] xfs_vn_unlink+0x48/0x90 [xfs]
[  220.571659]  [<ffffffff8118684f>] vfs_unlink+0x9f/0x100
[  220.571662]  [<ffffffff811893ef>] do_unlinkat+0x1af/0x1e0
[  220.571668]  [<ffffffff810a8eab>] ? sys_futex+0x7b/0x180
[  220.571670]  [<ffffffff8118a9a6>] sys_unlink+0x16/0x20
[  220.571675]  [<ffffffff816431a9>] system_call_fastpath+0x16/0x1b

You might want to run xfs_repair over your filesystems to find out
how many inodes have bad link counts....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-27 11:09                         ` Dave Chinner
@ 2012-04-27 13:07                           ` Juerg Haefliger
  2012-05-05  7:44                             ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-04-27 13:07 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Fri, Apr 27, 2012 at 1:09 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Fri, Apr 27, 2012 at 11:04:33AM +0200, Juerg Haefliger wrote:
>> On Fri, Apr 27, 2012 at 1:07 AM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Fri, Apr 27, 2012 at 01:00:08AM +0200, Juerg Haefliger wrote:
>> >> On Fri, Apr 27, 2012 at 12:44 AM, Dave Chinner <david@fromorbit.com> wrote:
>> >> > On Thu, Apr 26, 2012 at 02:37:50PM +0200, Juerg Haefliger wrote:
>> >> >> > I'm assuming it is the event trace
>> >> >> > that is causing it to blow out? If so, just the 30-60s either side of
>> >> >> > the hang first showing up is probaby necessary, and that should cut
>> >> >> > the size down greatly....
>> >> >>
>> >> >> Can I shorten the existing trace.dat?
>
> Looks like you can - the "trace-cmd split" option.
>
>> >> >
>> >> > No idea, but that's likely the problem - I don't want the binary
>> >> > trace.dat file. I want the text output of the report command
>> >> > generated from the binary trace.dat file...
>> >>
>> >> Well yes. I did RTFM :-) trace.dat is 15GB.
>> >
>> > OK, that's a lot larger than I expected for a hung filesystem....
>> >
>> >> >> I stopped the trace
>> >> >> automatically 10 secs after the the xlog_... trace showed up in syslog
>> >> >> so effectively some 130+ secs after the hang occured.
>> >
>> > Can you look at the last timestamp in the report file, and trim off
>> > anything from the start that is older than, say, 180s before that?
>>
>> Cut the trace down to 180 secs which brought the filesize down to
>> 93MB: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24-180secs.tgz
>
> I see the problem - the trace.dat file is hosted on an XFS
> filesystem, so all the writes to the trace.dat file are causing
> events to be logged, which causes writes to the trace.dat file....
>
> taking out al the trace-cmd events:
>
> $ grep -v trace-cmd trace_report_180secs.txt > t.t
> $ ls -l trace_report_180secs.txt t.t
> -rw-r--r-- 1 dave dave 2136941443 Apr 27 18:52 trace_report_180secs.txt
> -rw-r--r-- 1 dave dave    3280629 Apr 27 20:12 t.t
>
> Brings the event trace for that 180s down ifrom 2.1GB to 3.2MB,
> which is much more like I'd expect from a hung filesystem....
>
> Ok, so it looks like there's lots of noise from other XFS
> filesystems to, and from the info.log, the xfs-hang filesystem is on
> device 252:2 (/dev/vg00/tmp):
>
> $ grep "dev 252:2" t.t
> $
>
> And there are no events from that filesystem in the log at all. Ok,
> so what you need to do is find out if there are *any* events from
> that device in the larger log file you have.....
>
> If not, then it is time for advanced trace-cmd mojo. We can tell it
> to filter events only from the PID of the test script and all it's
> children using:
>
> # trace-cmd record -e xfs\* -P <parent-pid> -c
>
> But better would be to use the device number of the relevant
> filesystem to filter the events. The device is 252:2, which means in
> kernel terms is it:
>
>        dev = (major << 20) | minor
>            = 0xfc00002
>
> So you should be able to get just the xfs-hang events via:
>
> # trace-cmd record -e xfs\* -d 'dev == 0xfc00002'
>
> and as long as you don't host log files on /xfs-hang, it'll only
> record the workload running on the xfs-hang filesystem.

Third try: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-27-180secs.tgz
Filtered by device; the trace events now go to a different filesystem.


> BTW, how often do you see this sort of thing:
>
> [  220.571551] ------------[ cut here ]------------
> [  220.571562] WARNING: at fs/inode.c:280 drop_nlink+0x49/0x50()
> [  220.571564] Hardware name: SE2170s
> [  220.571565] Modules linked in: ipmi_devintf ipmi_si ipmi_msghandler ip6table_filter ip6_tables ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle xt_tcpudp iptable_filter ip_tables x_tables bridge 8021q garp stp coretemp ghash_clmulni_intel aesni_intel cryptd usbhid i7core_edac lp edac_core hid aes_x86_64 parport serio_raw microcode xfs igb hpsa dca
> [  220.571594] Pid: 4637, comm: copy-files Not tainted 3.4.0-rc4 #2
> [  220.571595] Call Trace:
> [  220.571603]  [<ffffffff810508cf>] warn_slowpath_common+0x7f/0xc0
> [  220.571605]  [<ffffffff8105092a>] warn_slowpath_null+0x1a/0x20
> [  220.571607]  [<ffffffff81193319>] drop_nlink+0x49/0x50
> [  220.571628]  [<ffffffffa00701ef>] xfs_droplink+0x2f/0x60 [xfs]
> [  220.571640]  [<ffffffffa0072d58>] xfs_remove+0x2e8/0x3c0 [xfs]
> [  220.571645]  [<ffffffff8163aeee>] ? _raw_spin_lock+0xe/0x20
> [  220.571656]  [<ffffffffa0068248>] xfs_vn_unlink+0x48/0x90 [xfs]
> [  220.571659]  [<ffffffff8118684f>] vfs_unlink+0x9f/0x100
> [  220.571662]  [<ffffffff811893ef>] do_unlinkat+0x1af/0x1e0
> [  220.571668]  [<ffffffff810a8eab>] ? sys_futex+0x7b/0x180
> [  220.571670]  [<ffffffff8118a9a6>] sys_unlink+0x16/0x20
> [  220.571675]  [<ffffffff816431a9>] system_call_fastpath+0x16/0x1b
>
> You might want to run xfs-repair over your filesystems to find out
> how many inodes have bad link counts....

First time I saw it was when I started using 3.4-rc4. I repaired the
fs before I reran the test that produced the above data.

...Juerg


> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-04-27 13:07                           ` Juerg Haefliger
@ 2012-05-05  7:44                             ` Juerg Haefliger
  2012-05-07 17:19                               ` Ben Myers
  2012-05-07 22:59                               ` Dave Chinner
  0 siblings, 2 replies; 58+ messages in thread
From: Juerg Haefliger @ 2012-05-05  7:44 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

>>> >> >> > I'm assuming it is the event trace
>>> >> >> > that is causing it to blow out? If so, just the 30-60s either side of
>>> >> >> > the hang first showing up is probably necessary, and that should cut
>>> >> >> > the size down greatly....
>>> >> >>
>>> >> >> Can I shorten the existing trace.dat?
>>
>> Looks like you can - the "trace-cmd split" option.
>>
>>> >> >
>>> >> > No idea, but that's likely the problem - I don't want the binary
>>> >> > trace.dat file. I want the text output of the report command
>>> >> > generated from the binary trace.dat file...
>>> >>
>>> >> Well yes. I did RTFM :-) trace.dat is 15GB.
>>> >
>>> > OK, that's a lot larger than I expected for a hung filesystem....
>>> >
>>> >> >> I stopped the trace
>>> >> >> automatically 10 secs after the xlog_... trace showed up in syslog,
>>> >> >> so effectively some 130+ secs after the hang occurred.
>>> >
>>> > Can you look at the last timestamp in the report file, and trim off
>>> > anything from the start that is older than, say, 180s before that?
>>>
>>> Cut the trace down to 180 secs which brought the filesize down to
>>> 93MB: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-24-180secs.tgz
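
The trimming step described above can be scripted. The sketch below
assumes the trace-cmd report timestamp is the colon-terminated third
field of each line; the report filenames are hypothetical:

```shell
# Keep only the events within the final 180s of a trace-cmd report.
# Assumes the timestamp is field 3, formatted like "3501.000611:".
report=trace_report.txt
last=$(tail -1 "$report" | awk '{ sub(/:$/, "", $3); print $3 }')
awk -v last="$last" '{
    t = $3
    sub(/:$/, "", t)
    if (t + 0 >= last - 180)
        print
}' "$report" > trace_report_180secs.txt
```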
>>
>> I see the problem - the trace.dat file is hosted on an XFS
>> filesystem, so all the writes to the trace.dat file are causing
>> events to be logged, which causes writes to the trace.dat file....
>>
>> Taking out all the trace-cmd events:
>>
>> $ grep -v trace-cmd trace_report_180secs.txt > t.t
>> $ ls -l trace_report_180secs.txt t.t
>> -rw-r--r-- 1 dave dave 2136941443 Apr 27 18:52 trace_report_180secs.txt
>> -rw-r--r-- 1 dave dave    3280629 Apr 27 20:12 t.t
>>
>> Brings the event trace for that 180s down from 2.1GB to 3.2MB,
>> which is much more like I'd expect from a hung filesystem....
>>
>> Ok, so it looks like there's lots of noise from other XFS
>> filesystems too, and from the info.log, the xfs-hang filesystem is on
>> device 252:2 (/dev/vg00/tmp):
>>
>> $ grep "dev 252:2" t.t
>> $
>>
>> And there are no events from that filesystem in the log at all. Ok,
>> so what you need to do is find out if there are *any* events from
>> that device in the larger log file you have.....
>>
>> If not, then it is time for advanced trace-cmd mojo. We can tell it
>> to filter events only from the PID of the test script and all its
>> children using:
>>
>> # trace-cmd record -e xfs\* -P <parent-pid> -c
>>
>> But better would be to use the device number of the relevant
>> filesystem to filter the events. The device is 252:2, which in
>> kernel terms is:
>>
>>        dev = (major << 20) | minor
>>            = 0xfc00002
>>
>> So you should be able to get just the xfs-hang events via:
>>
>> # trace-cmd record -e xfs\* -d 'dev == 0xfc00002'
>>
>> and as long as you don't host log files on /xfs-hang, it'll only
>> record the workload running on the xfs-hang filesystem.
>
> Third try: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-27-180secs.tgz
> Filtered by device, trace events go to a different filesystem.

Did anybody have a chance to look at the data?

Thanks
...Juerg


>
>> BTW, how often do you see this sort of thing:
>>
>> [  220.571551] ------------[ cut here ]------------
>> [  220.571562] WARNING: at fs/inode.c:280 drop_nlink+0x49/0x50()
>> [  220.571564] Hardware name: SE2170s
>> [  220.571565] Modules linked in: ipmi_devintf ipmi_si ipmi_msghandler ip6table_filter ip6_tables ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle xt_tcpudp iptable_filter ip_tables x_tables bridge 8021q garp stp coretemp ghash_clmulni_intel aesni_intel cryptd usbhid i7core_edac lp edac_core hid aes_x86_64 parport serio_raw microcode xfs igb hpsa dca
>> [  220.571594] Pid: 4637, comm: copy-files Not tainted 3.4.0-rc4 #2
>> [  220.571595] Call Trace:
>> [  220.571603]  [<ffffffff810508cf>] warn_slowpath_common+0x7f/0xc0
>> [  220.571605]  [<ffffffff8105092a>] warn_slowpath_null+0x1a/0x20
>> [  220.571607]  [<ffffffff81193319>] drop_nlink+0x49/0x50
>> [  220.571628]  [<ffffffffa00701ef>] xfs_droplink+0x2f/0x60 [xfs]
>> [  220.571640]  [<ffffffffa0072d58>] xfs_remove+0x2e8/0x3c0 [xfs]
>> [  220.571645]  [<ffffffff8163aeee>] ? _raw_spin_lock+0xe/0x20
>> [  220.571656]  [<ffffffffa0068248>] xfs_vn_unlink+0x48/0x90 [xfs]
>> [  220.571659]  [<ffffffff8118684f>] vfs_unlink+0x9f/0x100
>> [  220.571662]  [<ffffffff811893ef>] do_unlinkat+0x1af/0x1e0
>> [  220.571668]  [<ffffffff810a8eab>] ? sys_futex+0x7b/0x180
>> [  220.571670]  [<ffffffff8118a9a6>] sys_unlink+0x16/0x20
>> [  220.571675]  [<ffffffff816431a9>] system_call_fastpath+0x16/0x1b
>>
>> You might want to run xfs-repair over your filesystems to find out
>> how many inodes have bad link counts....
>
> First time I saw it was when I started using 3.4-rc4. I repaired the
> fs before I reran the test that produced the above data.
>
> ...Juerg
>
>
>> Cheers,
>>
>> Dave.
>> --
>> Dave Chinner
>> david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-05  7:44                             ` Juerg Haefliger
@ 2012-05-07 17:19                               ` Ben Myers
  2012-05-09  7:54                                 ` Juerg Haefliger
  2012-05-07 22:59                               ` Dave Chinner
  1 sibling, 1 reply; 58+ messages in thread
From: Ben Myers @ 2012-05-07 17:19 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

Hey Juerg,

On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
> Did anybody have a chance to look at the data?

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/979498

Here you indicate that you have created a reproducer.  Can you post it to the list?

Thanks,
	Ben


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-05  7:44                             ` Juerg Haefliger
  2012-05-07 17:19                               ` Ben Myers
@ 2012-05-07 22:59                               ` Dave Chinner
  2012-05-09  7:35                                 ` Dave Chinner
  1 sibling, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-05-07 22:59 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
> >> But better would be to use the device number of the relevant
> >> filesystem to filter the events. The device is 252:2, which in
> >> kernel terms is:
> >>
> >>        dev = (major << 20) | minor
> >>            = 0xfc00002
> >>
> >> So you should be able to get just the xfs-hang events via:
> >>
> >> # trace-cmd record -e xfs\* -d 'dev == 0xfc00002'
> >>
> >> and as long as you don't host log files on /xfs-hang, it'll only
> >> record the workload running on the xfs-hang filesystem.
> >
> > Third try: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-27-180secs.tgz
> > Filtered by device, trace events go to a different filesystem.
> 
> Did anybody have a chance to look at the data?

I've had a quick look, but I need to write scripts to visualise it
(i.e. graph it) to determine if there's any pattern to the issue.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-07 22:59                               ` Dave Chinner
@ 2012-05-09  7:35                                 ` Dave Chinner
  2012-05-09 21:07                                   ` Mark Tinguely
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-05-09  7:35 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Tue, May 08, 2012 at 08:59:44AM +1000, Dave Chinner wrote:
> On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
> > >> But better would be to use the device number of the relevant
> > >> filesystem to filter the events. The device is 252:2, which in
> > >> kernel terms is:
> > >>
> > >>        dev = (major << 20) | minor
> > >>            = 0xfc00002
> > >>
> > >> So you should be able to get just the xfs-hang events via:
> > >>
> > >> # trace-cmd record -e xfs\* -d 'dev == 0xfc00002'
> > >>
> > >> and as long as you don't host log files on /xfs-hang, it'll only
> > >> record the workload running on the xfs-hang filesystem.
> > >
> > > Third try: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1.0/AUTH_9630ead2-6194-40df-afd3-7395448d4536/xfs-hang/report-2012-04-27-180secs.tgz
> > > Filtered by device, trace events go to a different filesystem.
> > 
> > Did anybody have a chance to look at the data?
> 
> I've had a quick look, but I need to write scripts to visualise it
> (i.e. graph it) to determine if there's any pattern to the issue.

And, as expected, something unexpected popped out.

Judicious use of awk on the log space grant events shows an
interesting pattern occurring from time to time:

Transaction	 Wait queues	 Grant head	 Write Head	 Log head	 Log tail
-----------------------------------------------------------------------------------------
INACTIVE         empty empty     118 438024      118 438024      118 802         118 697
INACTIVE         empty empty     119 20240       119 20240       118 802         118 697
REMOVE           empty empty     118 438456      118 438456      118 802         118 697
REMOVE           empty empty     118 438772      118 438772      118 802         118 697
CREATE           empty empty     119 35872       119 35872       118 802         118 697
FSYNC_TS         active empty    119 205428      119 205428      118 802         118 697
FSYNC_TS         active empty    119 202944      119 202944      118 802         118 697
FSYNC_TS         active empty    119 200664      119 200664      118 802         118 697
REMOVE           empty empty     118 380316      118 380316      118 724         118 652
INACTIVE         empty empty     118 552532      118 552532      118 724         118 652
FSYNC_TS         empty empty     118 382140      118 382140      118 724         118 652
INACTIVE         empty empty     118 565968      118 565968      118 724         118 652
REMOVE           empty empty     119 25404       119 25404       118 802         118 697
INACTIVE         active empty    119 25580       119 25580       118 802         118 697

Anyone notice something fishy with the log head and tail?

That's right, they go *backwards* when a particular REMOVE
transaction is executed. That's, well, completely unexpected, and
completely breaks the assumptions made in the log space reservation
code. Essentially, the log tail is moving backwards, and the head is
relative to the tail so moves with it.

That is *nasty*. It means that if this occurs at just the right
(wrong) time (i.e. just before a checkpoint), we can overwrite log
metadata and essentially corrupt the log. If we were to crash with
something like this active in the log, then log recovery would fail
and/or corrupt the filesystem. It is, however, very hard to trigger a
crash while this condition exists because, as you can see, it only
existed for 4 transactions - about 100us according to the traces in
this case.

I think this has to do with how inode allocation/unlink buffers are
handled - we handle their position (via their LSN) in the AIL
specially, and I think that's affecting the in memory view of the
log tail and hence how much space is really available in the log. I
need to do more investigation to really understand the way it fails,
and write a reproducible test case, but I think I've found the
smoking gun....
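
As a quick way to spot the pattern mechanically, a sketch along these
lines flags any event whose log tail moved backwards. The filename and
column numbers are hypothetical and must be adjusted to match however
the head/tail cycle:block pairs were extracted from the report:

```shell
# Flag events whose log tail (cycle, block) is lower than the previous
# event's. Fields 10 and 11 here stand in for the tail cycle and tail
# block; adjust them to match the actual extraction of the trace report.
awk '
NR > 1 && ($10 < pc || ($10 == pc && $11 < pb)) {
    print "log tail went backwards at line " NR ": " $0
}
{ pc = $10; pb = $11 }
' grant_events.txt
```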

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-07 17:19                               ` Ben Myers
@ 2012-05-09  7:54                                 ` Juerg Haefliger
  2012-05-10 16:11                                   ` Chris J Arges
  2012-05-18 17:19                                   ` Ben Myers
  0 siblings, 2 replies; 58+ messages in thread
From: Juerg Haefliger @ 2012-05-09  7:54 UTC (permalink / raw)
  To: Ben Myers; +Cc: xfs

Ben,

> Hey Juerg,
>
> On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
>> Did anybody have a chance to look at the data?
>
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/979498
>
> Here you indicate that you have created a reproducer.  Can you post it to the list?

Canonical attached them to the bug report that they filed yesterday:
http://oss.sgi.com/bugzilla/show_bug.cgi?id=922

...Juerg


> Thanks,
>        Ben


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-09  7:35                                 ` Dave Chinner
@ 2012-05-09 21:07                                   ` Mark Tinguely
  2012-05-10  2:10                                     ` Mark Tinguely
  2012-05-18  9:31                                     ` Dave Chinner
  0 siblings, 2 replies; 58+ messages in thread
From: Mark Tinguely @ 2012-05-09 21:07 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Juerg Haefliger, xfs

On 05/09/12 02:35, Dave Chinner wrote:

>
> Transaction	 Wait queues	 Grant head	 Write Head	 Log head	 Log tail
> -----------------------------------------------------------------------------------------
> INACTIVE         empty empty     118 438024      118 438024      118 802         118 697
> INACTIVE         empty empty     119 20240       119 20240       118 802         118 697
> REMOVE           empty empty     118 438456      118 438456      118 802         118 697
> REMOVE           empty empty     118 438772      118 438772      118 802         118 697
> CREATE           empty empty     119 35872       119 35872       118 802         118 697
> FSYNC_TS         active empty    119 205428      119 205428      118 802         118 697
> FSYNC_TS         active empty    119 202944      119 202944      118 802         118 697
> FSYNC_TS         active empty    119 200664      119 200664      118 802         118 697
> REMOVE           empty empty     118 380316      118 380316      118 724         118 652
> INACTIVE         empty empty     118 552532      118 552532      118 724         118 652
> FSYNC_TS         empty empty     118 382140      118 382140      118 724         118 652
> INACTIVE         empty empty     118 565968      118 565968      118 724         118 652
> REMOVE           empty empty     119 25404       119 25404       118 802         118 697
> INACTIVE         active empty    119 25580       119 25580       118 802         118 697



Just so you don't go down a blind alley: the timestamps in the log went 
backwards there. If you re-sort on the timestamps, this does not go 
backwards.

Not sorted:
grep ungrant_exit trace* | awk '{print $3, $8, $20, $22, $24, $26, $28, $30, $32, $34, $36, $38}' | less

3501.000611: FSYNC_TS active empty 119 205428 119 205428 118 802 118 697
3501.004513: FSYNC_TS active empty 119 202944 119 202944 118 802 118 697
3501.005210: FSYNC_TS active empty 119 200664 119 200664 118 802 118 697
3500.962328: REMOVE empty empty 118 380316 118 380316 118 724 118 652
3500.962458: INACTIVE empty empty 118 552532 118 552532 118 724 118 652
3500.962770: FSYNC_TS empty empty 118 382140 118 382140 118 724 118 652
3500.964781: INACTIVE empty empty 118 565968 118 565968 118 724 118 652
3500.971259: REMOVE empty empty 119 25404 119 25404 118 802 118 697
3500.971454: INACTIVE active empty 119 25580 119 25580 118 802 118 697

Sorted on the timestamps:
grep ungrant_exit trace* | awk '{print $3, $8, $20, $22, $24, $26, $28, $30, $32, $34, $36, $38}' | sort | less

3500.962328: REMOVE empty empty 118 380316 118 380316 118 724 118 652
3500.962386: INACTIVE empty empty 118 555684 118 555684 118 724 118 652
3500.962402: CREATE empty empty 118 380492 118 380492 118 724 118 652
3500.962458: INACTIVE empty empty 118 552532 118 552532 118 724 118 652
3500.962466: FSYNC_TS empty empty 118 552532 118 552532 118 724 118 652
3500.962476: REMOVE empty empty 118 380936 118 380936 118 724 118 652
3500.962534: INACTIVE empty empty 118 556304 118 556304 118 724 118 652
...
3500.979002: INACTIVE empty empty 118 437672 118 437672 118 802 118 697
3500.979185: CREATE empty empty 118 437848 118 437848 118 802 118 697
3500.979269: CREATE empty empty 118 438024 118 438024 118 802 118 697
3500.979462: INACTIVE empty empty 118 438024 118 438024 118 802 118 697
3500.979627: INACTIVE empty empty 119 20240 119 20240 118 802 118 697
3500.979657: REMOVE empty empty 118 438456 118 438456 118 802 118 697
3500.979713: REMOVE empty empty 118 438772 118 438772 118 802 118 697
3500.979815: CREATE empty empty 119 35872 119 35872 118 802 118 697
3501.000611: FSYNC_TS active empty 119 205428 119 205428 118 802 118 697


Maybe I have a corrupted version of his first trace; it looks like there 
are two series of log grant/write/head/tail sequences. These sequences 
are not even close to each other:

188.116687: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1 1655971
188.116939: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1 1655971
188.117755: CREATE empty empty 4440 166388 4440 166388 4440 312 4440 310
188.117784: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1 1655971
188.117902: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1 1655971
188.118249: CREATE empty empty 4440 166844 4440 166844 4440 312 4440 310
188.118350: CREATE empty empty 4440 167300 4440 167300 4440 312 4440 310
188.118628: FSYNC_TS empty empty 4440 167300 4440 167300 4440 312 4440 310
188.118837: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1 1655971
188.118868: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1 1655971

--Mark Tinguely.


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-09 21:07                                   ` Mark Tinguely
@ 2012-05-10  2:10                                     ` Mark Tinguely
  2012-05-18  9:37                                       ` Dave Chinner
  2012-05-18  9:31                                     ` Dave Chinner
  1 sibling, 1 reply; 58+ messages in thread
From: Mark Tinguely @ 2012-05-10  2:10 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Juerg Haefliger, xfs

On 05/09/12 16:07, Mark Tinguely wrote:
>
>
> Maybe I have a corrupted version of his first trace, it looks like there
> are 2 series of log grant/write/head/tail sequences. These sequences are
> not even close to each other:
>
> 188.116687: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1
> 1655971
> 188.116939: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1
> 1655971
> 188.117755: CREATE empty empty 4440 166388 4440 166388 4440 312 4440 310
> 188.117784: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1
> 1655971
> 188.117902: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1
> 1655971
> 188.118249: CREATE empty empty 4440 166844 4440 166844 4440 312 4440 310
> 188.118350: CREATE empty empty 4440 167300 4440 167300 4440 312 4440 310
> 188.118628: FSYNC_TS empty empty 4440 167300 4440 167300 4440 312 4440 310
> 188.118837: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1
> 1655971
> 188.118868: FSYNC_TS empty empty 1 847894476 1 847894476 1 1655998 1
> 1655971

Oops, there are multiple devices in that trace.


I notice that in the trace_report_180secs.txt file, the lsn displays 
(ail_push and the ungrants) are not correct. It acts like BLOCK_LSN() is 
shifting too much: block numbers never make it far above 1100 before the 
cycle increments. I think the problem is the trace, not the sequence 
numbers.

--Mark Tinguely.


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-09  7:54                                 ` Juerg Haefliger
@ 2012-05-10 16:11                                   ` Chris J Arges
  2012-05-10 21:53                                     ` Mark Tinguely
  2012-05-16 18:42                                     ` Ben Myers
  2012-05-18 17:19                                   ` Ben Myers
  1 sibling, 2 replies; 58+ messages in thread
From: Chris J Arges @ 2012-05-10 16:11 UTC (permalink / raw)
  To: linux-xfs

<snip>
> Canonical attached them to the bug report that they filed yesterday:
> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
> 
> ...Juerg
> 

Hello,
I am able to reproduce this bug with the instructions posted in the bug 
report. Let me know what I can do to help.
--chris j arges

<snip>




* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-10 16:11                                   ` Chris J Arges
@ 2012-05-10 21:53                                     ` Mark Tinguely
  2012-05-16 18:42                                     ` Ben Myers
  1 sibling, 0 replies; 58+ messages in thread
From: Mark Tinguely @ 2012-05-10 21:53 UTC (permalink / raw)
  To: Chris J Arges; +Cc: Juerg Haefliger, xfs-oss

On 05/10/12 11:11, Chris J Arges wrote:
> <snip>
>> Canonical attached them to the bug report that they filed yesterday:
>> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
>>
>> ...Juerg
>>
>
> Hello,
> I am able to reproduce this bug with the instructions posted in this bug. Let me
> know what I can do to help.
> --chris j arges
>
> <snip>
>

I have been running the test for 8+ hours with top of tree sources and 
an additional log cleaner kicker patch without problem. I will leave 
that one running and start another machine with just the top of tree 
kernel without the additional patch, and then go back to earlier sources.

Mark Tinguely.


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-10 16:11                                   ` Chris J Arges
  2012-05-10 21:53                                     ` Mark Tinguely
@ 2012-05-16 18:42                                     ` Ben Myers
  2012-05-16 19:03                                       ` Chris J Arges
  2012-05-17 20:55                                       ` Chris J Arges
  1 sibling, 2 replies; 58+ messages in thread
From: Ben Myers @ 2012-05-16 18:42 UTC (permalink / raw)
  To: Chris J Arges; +Cc: linux-xfs, tinguely

Hey Chris,

On Thu, May 10, 2012 at 04:11:27PM +0000, Chris J Arges wrote:
> <snip>
> > Canonical attached them to the bug report that they filed yesterday:
> > http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
> > 
> > ...Juerg
> > 
> 
> Hello,
> I am able to reproduce this bug with the instructions posted in this bug. Let me 
> know what I can do to help.

The bug shows:

|This has been tested on the following kernels which all exhibit the same
|failures:
|- 3.2.0-24 (Ubuntu Precise)
|- 3.4.0-rc4
|- 3.0.29
|- 3.1.10
|- 3.2.15
|- 3.3.2

Can you find an older kernel that isn't broken?

-Ben


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-16 18:42                                     ` Ben Myers
@ 2012-05-16 19:03                                       ` Chris J Arges
  2012-05-16 21:29                                         ` Mark Tinguely
  2012-05-17 20:55                                       ` Chris J Arges
  1 sibling, 1 reply; 58+ messages in thread
From: Chris J Arges @ 2012-05-16 19:03 UTC (permalink / raw)
  To: Ben Myers; +Cc: linux-xfs, tinguely

On 05/16/2012 01:42 PM, Ben Myers wrote:
> Hey Chris,
> 
> On Thu, May 10, 2012 at 04:11:27PM +0000, Chris J Arges wrote:
>> <snip>
>>> Canonical attached them to the bug report that they filed yesterday:
>>> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
>>>
>>> ...Juerg
>>>
>>
>> Hello,
>> I am able to reproduce this bug with the instructions posted in this bug. Let me 
>> know what I can do to help.
> 
> The bug shows:
> 
> |This has been tested on the following kernels which all exhibit the same
> |failures:
> |- 3.2.0-24 (Ubuntu Precise)
> |- 3.4.0-rc4
> |- 3.0.29
> |- 3.1.10
> |- 3.2.15
> |- 3.3.2
> 
> Can you find an older kernel that isn't broken?
> 

Sure, I can start digging further back.
Also 2.6.38-8-server was the original version that this bug was reported
on. So I can try testing circa 2.6.32 to see if that also fails.
--chris

> -Ben
> 


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-16 19:03                                       ` Chris J Arges
@ 2012-05-16 21:29                                         ` Mark Tinguely
  2012-05-18 10:10                                           ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Mark Tinguely @ 2012-05-16 21:29 UTC (permalink / raw)
  To: Chris J Arges; +Cc: linux-xfs, Ben Myers

On 05/16/12 14:03, Chris J Arges wrote:
> On 05/16/2012 01:42 PM, Ben Myers wrote:
>> Hey Chris,
>>
>> On Thu, May 10, 2012 at 04:11:27PM +0000, Chris J Arges wrote:
>>> <snip>
>>>> Canonical attached them to the bug report that they filed yesterday:
>>>> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
>>>>
>>>> ...Juerg
>>>>
>>>
>>> Hello,
>>> I am able to reproduce this bug with the instructions posted in this bug. Let me
>>> know what I can do to help.
>>
>> The bug shows:
>>
>> |This has been tested on the following kernels which all exhibit the same
>> |failures:
>> |- 3.2.0-24 (Ubuntu Precise)
>> |- 3.4.0-rc4
>> |- 3.0.29
>> |- 3.1.10
>> |- 3.2.15
>> |- 3.3.2
>>
>> Can you find an older kernel that isn't broken?
>>
>
> Sure, I can start digging further back.
> Also 2.6.38-8-server was the original version that this bug was reported
> on. So I can try testing circa 2.6.32 to see if that also fails.
> --chris
>
>> -Ben
>>
>

What I know so far:
I have a log cleaner kicker added to xlog_grant_head_wake(). At best, 
this kicker would prevent waiting for the next sync before starting the 
log cleaner.

I have one machine that has been running for 2 days without hanging. 
Actually, now I would prefer it to hurry up and hang.

Here is what see on the machine that is hung:

A few processes (4-5) are hung waiting to get space on the log. There 
isn't enough free space on the log for the first transaction and it 
waits. All other processes will have to wait behind the first process. 
251,811 bytes of the original 589,842 bytes should still be free (if my 
hand-calculated free space is correct).

The AIL is empty. There is nothing to clean. Any new transaction at this 
point will kick the cleaner, and it still can't start the first waiter, 
so it joins the wait list.

The only XFS traffic at this point is inode reclaim worker. This is to 
be expected.

The CIL has entries; nothing is waiting on the CIL. xc_current_sequence 
= 117511 and xc_push_seq = 117510, so there is nothing for the CIL worker to do.

117511 is the largest sequence number that I have found so far in the 
xfs_log_item list. There are a few entries with smaller sequence numbers 
and the following strange entry:

77th entry in the xfs_log_item list:

crash> struct xfs_log_item ffff88083222b5b8
struct xfs_log_item {
   li_ail = {
     next = 0xffff88083222b5b0,
     prev = 0x0
   },
   li_lsn = 0,
   li_desc = 0x9f5d9f5d,
   li_mountp = 0xffff88083116e300,
   li_ailp = 0x0,
   li_type = 0,
   li_flags = 0,
   li_bio_list = 0x0,
   li_cb = 0,
   li_ops = 0xffff88083105de00,
   li_cil = {
     next = 0xffff880832ad9f08,
     prev = 0xffff880831751448
   },
   li_lv = 0xc788c788,
   li_seq = -131906182637504
}

Everything in this entry is bad except the li_cil.next and li_cil.prev. 
It looks like li_ail.next is really part of a list that starts at 
0xffff88083222b5b0. The best explanation is that a junk address was 
inserted into the li_cil chain.

This is a single data point which could be anything including bad 
hardware. I will continue to traverse this list until I can get the 
other box to hang. If someone wants to traverse their xfs_log_item list ...

--Mark Tinguely.


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-16 18:42                                     ` Ben Myers
  2012-05-16 19:03                                       ` Chris J Arges
@ 2012-05-17 20:55                                       ` Chris J Arges
  2012-05-18 16:53                                         ` Chris J Arges
  1 sibling, 1 reply; 58+ messages in thread
From: Chris J Arges @ 2012-05-17 20:55 UTC (permalink / raw)
  To: Ben Myers; +Cc: linux-xfs, tinguely

On 05/16/2012 01:42 PM, Ben Myers wrote:
> Hey Chris,
> 
> On Thu, May 10, 2012 at 04:11:27PM +0000, Chris J Arges wrote:
>> <snip>
>>> Canonical attached them to the bug report that they filed yesterday:
>>> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
>>>
>>> ...Juerg
>>>
>>
>> Hello,
>> I am able to reproduce this bug with the instructions posted in this bug. Let me 
>> know what I can do to help.
> 
> The bug shows:
> 
> |This has been tested on the following kernels which all exhibit the same
> |failures:
> |- 3.2.0-24 (Ubuntu Precise)
> |- 3.4.0-rc4
> |- 3.0.29
> |- 3.1.10
> |- 3.2.15
> |- 3.3.2
> 
> Can you find an older kernel that isn't broken?
> 
Tested with Ubuntu Lucid 2.6.32-38-generic #38 (upstream 2.6.32.52); so
far the test has run for 5 hours, whereas on the same machine I have
typically been able to reproduce the hang within 2 hours. Will continue
to run this, and on another system to verify.

--chris

> -Ben
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-09 21:07                                   ` Mark Tinguely
  2012-05-10  2:10                                     ` Mark Tinguely
@ 2012-05-18  9:31                                     ` Dave Chinner
  1 sibling, 0 replies; 58+ messages in thread
From: Dave Chinner @ 2012-05-18  9:31 UTC (permalink / raw)
  To: Mark Tinguely; +Cc: Juerg Haefliger, xfs

On Wed, May 09, 2012 at 04:07:52PM -0500, Mark Tinguely wrote:
> On 05/09/12 02:35, Dave Chinner wrote:
> 
> >
> >Transaction	 Wait queues	 Grant head	 Write Head	 Log head	 Log tail
> >-----------------------------------------------------------------------------------------
> >INACTIVE         empty empty     118 438024      118 438024      118 802         118 697
> >INACTIVE         empty empty     119 20240       119 20240       118 802         118 697
> >REMOVE           empty empty     118 438456      118 438456      118 802         118 697
> >REMOVE           empty empty     118 438772      118 438772      118 802         118 697
> >CREATE           empty empty     119 35872       119 35872       118 802         118 697
> >FSYNC_TS         active empty    119 205428      119 205428      118 802         118 697
> >FSYNC_TS         active empty    119 202944      119 202944      118 802         118 697
> >FSYNC_TS         active empty    119 200664      119 200664      118 802         118 697
> >REMOVE           empty empty     118 380316      118 380316      118 724         118 652
> >INACTIVE         empty empty     118 552532      118 552532      118 724         118 652
> >FSYNC_TS         empty empty     118 382140      118 382140      118 724         118 652
> >INACTIVE         empty empty     118 565968      118 565968      118 724         118 652
> >REMOVE           empty empty     119 25404       119 25404       118 802         118 697
> >INACTIVE         active empty    119 25580       119 25580       118 802         118 697
> 
> 
> 
> Just so you don't go down a blind alley, the timestamp on the log
went backwards there. If you re-sort on the timestamps, this does not
go backwards.

Yup, you are right, timestamps jump around, so that may not be the cause.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-10  2:10                                     ` Mark Tinguely
@ 2012-05-18  9:37                                       ` Dave Chinner
  0 siblings, 0 replies; 58+ messages in thread
From: Dave Chinner @ 2012-05-18  9:37 UTC (permalink / raw)
  To: Mark Tinguely; +Cc: Juerg Haefliger, xfs

On Wed, May 09, 2012 at 09:10:05PM -0500, Mark Tinguely wrote:
> On 05/09/12 16:07, Mark Tinguely wrote:
> 
> I notice in the trace_report_180secs.txt file, lsn (ail_push and the
> ungrants) displays are not correct. It acts like the BLOCK_LSN() is
> shifting too much.

What do you mean by "not correct"?

> Block sequence numbers never make it too far
> above 1100 before incrementing the cycle. I think the problem is the
> trace not the sequence numbers.

The log is only 576 filesystem blocks (which are 1k in size) long,
which means 1152 sectors, so the LSN block number portion will never
go above 1152. It looks correct to me....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-16 21:29                                         ` Mark Tinguely
@ 2012-05-18 10:10                                           ` Dave Chinner
  2012-05-18 14:42                                             ` Mark Tinguely
  2012-06-06 15:00                                             ` Chris J Arges
  0 siblings, 2 replies; 58+ messages in thread
From: Dave Chinner @ 2012-05-18 10:10 UTC (permalink / raw)
  To: Mark Tinguely; +Cc: linux-xfs, Ben Myers, Chris J Arges

On Wed, May 16, 2012 at 04:29:01PM -0500, Mark Tinguely wrote:
> On 05/16/12 14:03, Chris J Arges wrote:
> >On 05/16/2012 01:42 PM, Ben Myers wrote:
> >>Hey Chris,
> >>
> >>On Thu, May 10, 2012 at 04:11:27PM +0000, Chris J Arges wrote:
> >>><snip>
> >>>>Canonical attached them to the bug report that they filed yesterday:
> >>>>http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
> >>>>
> >>>>...Juerg
> >>>>
> >>>
> >>>Hello,
> >>>I am able to reproduce this bug with the instructions posted in this bug. Let me
> >>>know what I can do to help.
> >>
> >>The bug shows:
> >>
> >>|This has been tested on the following kernels which all exhibit the same
> >>|failures:
> >>|- 3.2.0-24 (Ubuntu Precise)
> >>|- 3.4.0-rc4
> >>|- 3.0.29
> >>|- 3.1.10
> >>|- 3.2.15
> >>|- 3.3.2
> >>
> >>Can you find an older kernel that isn't broken?
> >>
> >
> >Sure, I can start digging further back.
> >Also 2.6.38-8-server was the original version that this bug was reported
> >on. So I can try testing circa 2.6.32 to see if that also fails.
> >--chris
> >
> >>-Ben
> >>
> >
> 
> What I know so far:
> I have a log cleaner kicker added to xlog_grant_head_wake(). This

What's a "log cleaner"? I've never heard that term used before for
XFS, so I can only assume you are talking about waking the xfsaild.
If that's the case, can you just say "pushing the AIL" rather
than making up new terminology?

> kicker at best would prevent waiting for the next sync before
> starting the log cleaner.

Can you post the patch?

> I have one machine that has been running for 2 days without hanging.
> Actually, now I would prefer it to hurry up and hang.
> 
> Here is what I see on the machine that is hung:
> 
> A few processes (4-5) are hung waiting to get space on the log.
> There isn't enough free space on the log for the first transaction
> and it waits. All other processes will have to wait behind the first
> process. 251,811 bytes of the original 589,842 bytes should still be
> free (if my hand free space calculations are correct).
> 
> The AIL is empty.

So the CIL has consumed ~50% of the log, and the background flusher
has not triggered? Why didn't the background CIL flush fire?

> There is nothing to clean. Any new transaction at
> this point will kick the cleaner, and it still can't start the first
> waiter, so it joins the wait list.

Assuming you mean the cleaner is the xfsaild, then if the AIL is
empty, waking the xfsaild will do nothing because there is nothing
for it to act on.

> The only XFS traffic at this point is inode reclaim worker. This is
> to be expected.
> 
> The CIL has entries, nothing is waiting on the CIL.
> xc_current_sequence = 117511 xc_push_seq = 117510. So there is
> nothing for the CIL worker to do.

It means that we're stalled on the CIL, not on the AIL. The question
is, as I asked above, why didn't the background flusher fire when
the last transaction completed and saw the CIL over the hard flush
threshold?

If you dump the CIL context structure, you can find out how much
space has been consumed by the items queued on the CIL.

> 
> 117511 is the largest sequence number that I have found so far in
> the xfs_log_item list. There are a few entries with smaller sequence
> numbers and the following strange entry:
> 
> 77th entry in the xfs_log_item list:
> 
> crash> struct xfs_log_item ffff88083222b5b8
> struct xfs_log_item {
>   li_ail = {
>     next = 0xffff88083222b5b0,

that points 8 bytes backwards.

>     prev = 0x0

And that indicates that this was never part of a log item as the
list head is always initialised.

>   },
>   li_lsn = 0,

never committed

>   li_desc = 0x9f5d9f5d,

that's supposed to be a pointer

>   li_mountp = 0xffff88083116e300,
>   li_ailp = 0x0,

Never initialised as a log item.

>   li_type = 0,

Impossible.

>   li_flags = 0,
>   li_bio_list = 0x0,
>   li_cb = 0,
>   li_ops = 0xffff88083105de00,
>   li_cil = {
>     next = 0xffff880832ad9f08,
>     prev = 0xffff880831751448
>   },
>   li_lv = 0xc788c788,

supposed to be a pointer

>   li_seq = -131906182637504

That's a pointer: 0xFFFF880832D71840

So this looks like some form of memory corruption - either a use
after free, the memory has been overwritten, or the list has
been pointed off into lala land.

> Everything in this entry is bad except the li_cil.next and
> li_cil.prev. It looks like li_ail.next is really part of a list that
> starts at 0xffff88083222b5b0. The best explanation is that a junk
> address was inserted into the li_cil chain.

It seems unlikely, but if you turn on kmemleak it might find a
memory leak or overwrite that is causing this.

> 
> This is a single data point which could be anything including bad
> hardware. I will continue to traverse this list until I can get the
> other box to hang. If someone wants to traverse their xfs_log_item
> list ...

Given how little it looks like a log item, I'm not sure you can
follow those pointers - do they even link up with other log items?

Still, this doesn't explain the hang at all - the CIL forms a new
list every time a checkpoint occurs, and this corruption would cause
a crash trying to walk the li_lv list when pushed. So it comes back
to why hasn't the CIL been pushed? what does the CIL context
structure look like?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-18 10:10                                           ` Dave Chinner
@ 2012-05-18 14:42                                             ` Mark Tinguely
  2012-05-22 22:59                                               ` Dave Chinner
  2012-06-06 15:00                                             ` Chris J Arges
  1 sibling, 1 reply; 58+ messages in thread
From: Mark Tinguely @ 2012-05-18 14:42 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs, Ben Myers, Chris J Arges

[-- Attachment #1: Type: text/plain, Size: 6897 bytes --]

On 05/18/12 05:10, Dave Chinner wrote:

> On Wed, May 16, 2012 at 04:29:01PM -0500, Mark Tinguely wrote:

>> What I know so far:
>> I have a log cleaner kicker added to xlog_grant_head_wake(). This

The reserve space waiter list contains the perl scripts and the flusher.

>
> What's a "log cleaner"? I've never heard that term used before for
> XFS, so I can only assume you are talking about waking the xfsaild.
> If that's the case, can you just say "pushing the AIL" rather
> than making up new terminology?

You are right. I borrowed the term from the comments in the code to 
"kick the dirty buffers out".

>
>> kicker at best would prevent waiting for the next sync before
>> starting the log cleaner.
>
> Can you post the patch?

Sure. This test has a small log, the transactions are large in proportion 
to the log, and the processes are related. This results in the perl 
scripts quickly filling the log, at which point the processes in the test 
become blocked. With no new transactions coming in to push the cleaning 
of the AIL, the IO stalls until the next sync. After the sync, the 
backlog of waiters quickly fills the log and everything stalls again 
until the next sync. We knew this could happen in theory, but logs are 
generally not this small, so we never see it in reality. The patch just 
keeps cleaning while there are waiters for log space. If another 
transaction has already started the cleaning process, the new request to 
clean is ignored.

>
>> I have one machine that has been running for 2 days without hanging.
>> Actually, now I would prefer it to hurry up and hang.
>>

The machine that had been running the test for 3+ days finally hung. It 
has the same pattern as the other hangs: 200-250K of the reserved space 
left, an empty AIL and a small amount of reserved space on the CIL.

>> Here is what I see on the machine that is hung:
>>
>> A few processes (4-5) are hung waiting to get space on the log.
>> There isn't enough free space on the log for the first transaction
>> and it waits. All other processes will have to wait behind the first
>> process. 251,811 bytes of the original 589,842 bytes should still be
>> free (if my hand free space calculations are correct).
>>
>> The AIL is empty.
>
> So the CIL has consumed ~50% of the log, and the background flusher
> has not triggered? Why didn't the background CIL flush fire?
>
>> There is nothing to clean. Any new transaction at
>> this point will kick the cleaner, and it still can't start the first
>> waiter, so it joins the wait list.
>
> Assuming you mean the cleaner is the xfsaild, then if the AIL is
> empty, waking the xfsaild will do nothing because there is nothing
> for it to act on.

Yes, the AIL is empty.


>> The only XFS traffic at this point is inode reclaim worker. This is
>> to be expected.
>>
>> The CIL has entries, nothing is waiting on the CIL.
>> xc_current_sequence = 117511 xc_push_seq = 117510. So there is
>> nothing for the CIL worker to do.
>
> It means that we're stalled on the CIL, not on the AIL. The question
> is, as I asked above, why didn't the background flusher fire when
> the last transaction completed and saw the CIL over the hard flush
> threshold?
>
> If you dump the CIL context structure, you can find out how much
> space has been consumed by the items queued on the CIL.

The CIL seems to have only 30-70K reserved (depending on which hang you
look at). It does not meet the XLOG_CIL_SPACE_LIMIT for the background
worker to push the CIL.

Since this does not match up with the approx 300-350K that is reported 
to be reserved, I traversed the log_item list to see if that CIL amount 
seemed reasonable, and it does.

>>
>> 117511 is the largest sequence number that I have found so far in
>> the xfs_log_item list. There are a few entries with smaller sequence
>> numbers and the following strange entry:
>>
>> 77th entry in the xfs_log_item list:
>>
>> crash>  struct xfs_log_item ffff88083222b5b8
>> struct xfs_log_item {
>>    li_ail = {
>>      next = 0xffff88083222b5b0,
>
> that points 8 bytes backwards.
>
>>      prev = 0x0
>
> And that indicates that this was never part of a log item as the
> list head is always initialised.
>
>>    },
>>    li_lsn = 0,
>
> never committed
>
>>    li_desc = 0x9f5d9f5d,
>
> that's supposed to be a pointer
>
>>    li_mountp = 0xffff88083116e300,
>>    li_ailp = 0x0,
>
> Never initialised as a log item.
>
>>    li_type = 0,
>
> Impossible.
>
>>    li_flags = 0,
>>    li_bio_list = 0x0,
>>    li_cb = 0,
>>    li_ops = 0xffff88083105de00,
>>    li_cil = {
>>      next = 0xffff880832ad9f08,
>>      prev = 0xffff880831751448
>>    },
>>    li_lv = 0xc788c788,
>
> supposed to be a pointer
>
>>    li_seq = -131906182637504
>
> That's a pointer: 0xFFFF880832D71840
>
> So this looks like some form of memory corruption - either a use
> after free, the memory has been overwritten, or the list has
> been pointed off into lala land.

>> Everything in this entry is bad except the li_cil.next and
>> li_cil.prev. It looks like li_ail.next is really part of a list that
>> starts at 0xffff88083222b5b0. The best explanation is that a junk
>> address was inserted into the li_cil chain.
>
> It seems unlikely, but if you turn on kmemleak it might find a
> memory leak or overwrite that is causing this.
>
>>
>> This is a single data point which could be anything including bad
>> hardware. I will continue to traverse this list until I can get the
>> other box to hang. If someone wants to traverse their xfs_log_item
>> list ...
>
> Given how little it looks like a log item, I'm not sure you can
> follow those pointers - do they even link up with other log items?

Yes, this was the second to last entry. The next/previous links were 
correct, but everything else in this structure is bad. That is why I 
suspect the wrong address was inserted into the log_item list.

I came to the same conclusion that this does not explain the problem; 
the other hangs do not have a corrupted entry. As strange as this entry 
is, I decided to ignore it and go back and try to account for all the 
reserved space.


> Still, this doesn't explain the hang at all - the CIL forms a new
> list every time a checkpoint occurs, and this corruption would cause
> a crash trying to walk the li_lv list when pushed. So it comes back
> to why hasn't the CIL been pushed? what does the CIL context
> structure look like?

The CIL context on the machine that was running 3+ days before hanging.

struct xfs_cil_ctx {
   cil = 0xffff88034a8c5240,
   sequence = 1241833,
   start_lsn = 0,
   commit_lsn = 0,
   ticket = 0xffff88034e0ebc08,
   nvecs = 237,
   space_used = 39964,
   busy_extents = {
     next = 0xffff88034b287958,
     prev = 0xffff88034d10c698
   },
   lv_chain = 0x0,
   log_cb = {
     cb_next = 0x0,
     cb_func = 0,
     cb_arg = 0x0
   },
   committing = {
     next = 0xffff88034c84d120,
     prev = 0xffff88034c84d120
   }
}


--Mark Tinguely.



[-- Attachment #2: xfs_ail_clean.patch --]
[-- Type: text/plain, Size: 650 bytes --]

Start cleaning the log again when it is still full after the last clean.
---
 fs/xfs/xfs_log.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Index: b/fs/xfs/xfs_log.c
===================================================================
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -191,8 +191,10 @@ xlog_grant_head_wake(
 
 	list_for_each_entry(tic, &head->waiters, t_queue) {
 		need_bytes = xlog_ticket_reservation(log, head, tic);
-		if (*free_bytes < need_bytes)
+		if (*free_bytes < need_bytes) {
+			xlog_grant_push_ail(log, need_bytes);
 			return false;
+		}
 
 		*free_bytes -= need_bytes;
 		trace_xfs_log_grant_wake_up(log, tic);


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-17 20:55                                       ` Chris J Arges
@ 2012-05-18 16:53                                         ` Chris J Arges
  0 siblings, 0 replies; 58+ messages in thread
From: Chris J Arges @ 2012-05-18 16:53 UTC (permalink / raw)
  To: Ben Myers; +Cc: linux-xfs, tinguely

[-- Attachment #1: Type: text/plain, Size: 1403 bytes --]

On 05/17/2012 03:55 PM, Chris J Arges wrote:
> On 05/16/2012 01:42 PM, Ben Myers wrote:
>> Hey Chris,
>>
>> On Thu, May 10, 2012 at 04:11:27PM +0000, Chris J Arges wrote:
>>> <snip>
>>>> Canonical attached them to the bug report that they filed yesterday:
>>>> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
>>>>
>>>> ...Juerg
>>>>
>>>
>>> Hello,
>>> I am able to reproduce this bug with the instructions posted in this bug. Let me 
>>> know what I can do to help.
>>
>> The bug shows:
>>
>> |This has been tested on the following kernels which all exhibit the same
>> |failures:
>> |- 3.2.0-24 (Ubuntu Precise)
>> |- 3.4.0-rc4
>> |- 3.0.29
>> |- 3.1.10
>> |- 3.2.15
>> |- 3.3.2
>>
>> Can you find an older kernel that isn't broken?
>>
> Tested with Ubuntu Lucid 2.6.32-38-generic #38 (upstream 2.6.32.52), so
> far I am able to run the test for 5 hours, which on the same machine I
> have typically been able to reproduce in 2 hours. Will continue to run
> this, and on another system to verify.
> 

Tested with Ubuntu Lucid 2.6.32-38-generic #38 (upstream 2.6.32.52).
This also fails; attaching the dmesg output, as the backtrace looks
similar. Let me know if you'd like me to try any other versions.

The first test I did with Lucid ran for 7 hours before I stopped it.
This morning I re-ran it and it failed within 10 minutes. I guess I
got lucky the second time.

> --chris
> 
>> -Ben
>>
> 


[-- Attachment #2: lucid-2.6.32-xfs-hang.dmesg --]
[-- Type: text/plain, Size: 68856 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 2.6.32-38-generic (buildd@allspice) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #83-Ubuntu SMP Wed Jan 4 11:12:07 UTC 2012 (Ubuntu 2.6.32-38.83-generic 2.6.32.52+drm33.21)
[    0.000000] Command line: noprompt cdrom-detect/try-usb=true file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- BOOT_IMAGE=/casper/vmlinuz 
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] BIOS-provided physical RAM map:
[    0.000000]  BIOS-e820: 0000000000000000 - 000000000009d800 (usable)
[    0.000000]  BIOS-e820: 000000000009d800 - 00000000000a0000 (reserved)
[    0.000000]  BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
[    0.000000]  BIOS-e820: 0000000000100000 - 0000000020000000 (usable)
[    0.000000]  BIOS-e820: 0000000020000000 - 0000000020200000 (reserved)
[    0.000000]  BIOS-e820: 0000000020200000 - 0000000040000000 (usable)
[    0.000000]  BIOS-e820: 0000000040000000 - 0000000040200000 (reserved)
[    0.000000]  BIOS-e820: 0000000040200000 - 00000000da99f000 (usable)
[    0.000000]  BIOS-e820: 00000000da99f000 - 00000000dae9f000 (reserved)
[    0.000000]  BIOS-e820: 00000000dae9f000 - 00000000daf9f000 (ACPI NVS)
[    0.000000]  BIOS-e820: 00000000daf9f000 - 00000000dafff000 (ACPI data)
[    0.000000]  BIOS-e820: 00000000dafff000 - 00000000db000000 (usable)
[    0.000000]  BIOS-e820: 00000000db000000 - 00000000dfa00000 (reserved)
[    0.000000]  BIOS-e820: 00000000f8000000 - 00000000fc000000 (reserved)
[    0.000000]  BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)
[    0.000000]  BIOS-e820: 00000000fed08000 - 00000000fed09000 (reserved)
[    0.000000]  BIOS-e820: 00000000fed10000 - 00000000fed1a000 (reserved)
[    0.000000]  BIOS-e820: 00000000fed1c000 - 00000000fed20000 (reserved)
[    0.000000]  BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
[    0.000000]  BIOS-e820: 00000000ffd20000 - 0000000100000000 (reserved)
[    0.000000]  BIOS-e820: 0000000100000000 - 000000021e600000 (usable)
[    0.000000]  BIOS-e820: 000000021e600000 - 000000021e800000 (reserved)
[    0.000000] DMI 2.6 present.
[    0.000000] last_pfn = 0x21e600 max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: uncachable
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 0FFC00000 mask FFFC00000 write-protect
[    0.000000]   1 base 000000000 mask F80000000 write-back
[    0.000000]   2 base 080000000 mask FC0000000 write-back
[    0.000000]   3 base 0C0000000 mask FE0000000 write-back
[    0.000000]   4 base 0DC000000 mask FFC000000 uncachable
[    0.000000]   5 base 0DB000000 mask FFF000000 uncachable
[    0.000000]   6 base 100000000 mask F00000000 write-back
[    0.000000]   7 base 200000000 mask FE0000000 write-back
[    0.000000]   8 base 21F000000 mask FFF000000 uncachable
[    0.000000]   9 base 21E800000 mask FFF800000 uncachable
[    0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[    0.000000] last_pfn = 0xdb000 max_arch_pfn = 0x400000000
[    0.000000] e820 update range: 0000000000001000 - 0000000000006000 (usable) ==> (reserved)
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] modified physical RAM map:
[    0.000000]  modified: 0000000000000000 - 0000000000001000 (usable)
[    0.000000]  modified: 0000000000001000 - 0000000000006000 (reserved)
[    0.000000]  modified: 0000000000006000 - 000000000009d800 (usable)
[    0.000000]  modified: 000000000009d800 - 00000000000a0000 (reserved)
[    0.000000]  modified: 00000000000e0000 - 0000000000100000 (reserved)
[    0.000000]  modified: 0000000000100000 - 0000000020000000 (usable)
[    0.000000]  modified: 0000000020000000 - 0000000020200000 (reserved)
[    0.000000]  modified: 0000000020200000 - 0000000040000000 (usable)
[    0.000000]  modified: 0000000040000000 - 0000000040200000 (reserved)
[    0.000000]  modified: 0000000040200000 - 00000000da99f000 (usable)
[    0.000000]  modified: 00000000da99f000 - 00000000dae9f000 (reserved)
[    0.000000]  modified: 00000000dae9f000 - 00000000daf9f000 (ACPI NVS)
[    0.000000]  modified: 00000000daf9f000 - 00000000dafff000 (ACPI data)
[    0.000000]  modified: 00000000dafff000 - 00000000db000000 (usable)
[    0.000000]  modified: 00000000db000000 - 00000000dfa00000 (reserved)
[    0.000000]  modified: 00000000f8000000 - 00000000fc000000 (reserved)
[    0.000000]  modified: 00000000fec00000 - 00000000fec01000 (reserved)
[    0.000000]  modified: 00000000fed08000 - 00000000fed09000 (reserved)
[    0.000000]  modified: 00000000fed10000 - 00000000fed1a000 (reserved)
[    0.000000]  modified: 00000000fed1c000 - 00000000fed20000 (reserved)
[    0.000000]  modified: 00000000fee00000 - 00000000fee01000 (reserved)
[    0.000000]  modified: 00000000ffd20000 - 0000000100000000 (reserved)
[    0.000000]  modified: 0000000100000000 - 000000021e600000 (usable)
[    0.000000]  modified: 000000021e600000 - 000000021e800000 (reserved)
[    0.000000] initial memory mapped : 0 - 20000000
[    0.000000] init_memory_mapping: 0000000000000000-00000000db000000
[    0.000000] NX (Execute Disable) protection: active
[    0.000000]  0000000000 - 00db000000 page 2M
[    0.000000] kernel direct mapping tables up to db000000 @ 8000-d000
[    0.000000] init_memory_mapping: 0000000100000000-000000021e600000
[    0.000000] NX (Execute Disable) protection: active
[    0.000000]  0100000000 - 021e600000 page 2M
[    0.000000] kernel direct mapping tables up to 21e600000 @ b000-15000
[    0.000000] RAMDISK: 1f690000 - 1fffe884
[    0.000000] ACPI: RSDP 00000000000f00e0 00024 (v02 LENOVO)
[    0.000000] ACPI: XSDT 00000000daffe120 000AC (v01 LENOVO TP-8D    00001250 PTEC 00000002)
[    0.000000] ACPI: FACP 00000000dafe7000 000F4 (v04 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: DSDT 00000000dafea000 0F6A7 (v01 LENOVO TP-8D    00001250 INTL 20061109)
[    0.000000] ACPI: FACS 00000000daf2d000 00040
[    0.000000] ACPI: SLIC 00000000daffd000 00176 (v01 LENOVO TP-8D    00001250 PTEC 00000001)
[    0.000000] ACPI: SSDT 00000000daffc000 00249 (v01 LENOVO TP-SSDT2 00000200 INTL 20061109)
[    0.000000] ACPI: SSDT 00000000daffb000 00033 (v01 LENOVO TP-SSDT1 00000100 INTL 20061109)
[    0.000000] ACPI: SSDT 00000000daffa000 007D1 (v01 LENOVO SataAhci 00001000 INTL 20061109)
[    0.000000] ACPI: HPET 00000000dafe6000 00038 (v01 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: APIC 00000000dafe5000 00098 (v01 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: MCFG 00000000dafe4000 0003C (v01 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: ECDT 00000000dafe3000 00052 (v01 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: ASF! 00000000dafe9000 000A5 (v32 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: TCPA 00000000dafe2000 00032 (v02    PTL   LENOVO 06040000 LNVO 00000001)
[    0.000000] ACPI: SSDT 00000000dafe1000 00A27 (v01  PmRef  Cpu0Ist 00003000 INTL 20061109)
[    0.000000] ACPI: SSDT 00000000dafe0000 00996 (v01  PmRef    CpuPm 00003000 INTL 20061109)
[    0.000000] ACPI: DMAR 00000000dafdf000 000E8 (v01 INTEL      SNB  00000001 INTL 00000001)
[    0.000000] ACPI: UEFI 00000000dafde000 0003E (v01 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: UEFI 00000000dafdd000 00042 (v01 PTL      COMBUF 00000001 PTL  00000001)
[    0.000000] ACPI: UEFI 00000000dafdc000 00292 (v01 LENOVO TP-8D    00001250 PTL  00000002)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at 0000000000000000-000000021e600000
[    0.000000] Bootmem setup node 0 0000000000000000-000000021e600000
[    0.000000]   NODE_DATA [0000000000010000 - 0000000000014fff]
[    0.000000]   bootmap [0000000000015000 -  0000000000058cbf] pages 44
[    0.000000] (8 early reservations) ==> bootmem [0000000000 - 021e600000]
[    0.000000]   #0 [0000000000 - 0000001000]   BIOS data page ==> [0000000000 - 0000001000]
[    0.000000]   #1 [0000006000 - 0000008000]       TRAMPOLINE ==> [0000006000 - 0000008000]
[    0.000000]   #2 [0001000000 - 0001a35ac4]    TEXT DATA BSS ==> [0001000000 - 0001a35ac4]
[    0.000000]   #3 [001f690000 - 001fffe884]          RAMDISK ==> [001f690000 - 001fffe884]
[    0.000000]   #4 [000009d800 - 0000100000]    BIOS reserved ==> [000009d800 - 0000100000]
[    0.000000]   #5 [0001a36000 - 0001a360ed]              BRK ==> [0001a36000 - 0001a360ed]
[    0.000000]   #6 [0000008000 - 000000b000]          PGTABLE ==> [0000008000 - 000000b000]
[    0.000000]   #7 [000000b000 - 0000010000]          PGTABLE ==> [000000b000 - 0000010000]
[    0.000000]  [ffffea0000000000-ffffea00077fffff] PMD -> [ffff88002c600000-ffff8800337fffff] on node 0
[    0.000000] Zone PFN ranges:
[    0.000000]   DMA      0x00000000 -> 0x00001000
[    0.000000]   DMA32    0x00001000 -> 0x00100000
[    0.000000]   Normal   0x00100000 -> 0x0021e600
[    0.000000] Movable zone start PFN for each node
[    0.000000] early_node_map[7] active PFN ranges
[    0.000000]     0: 0x00000000 -> 0x00000001
[    0.000000]     0: 0x00000006 -> 0x0000009d
[    0.000000]     0: 0x00000100 -> 0x00020000
[    0.000000]     0: 0x00020200 -> 0x00040000
[    0.000000]     0: 0x00040200 -> 0x000da99f
[    0.000000]     0: 0x000dafff -> 0x000db000
[    0.000000]     0: 0x00100000 -> 0x0021e600
[    0.000000] On node 0 totalpages: 2067256
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 109 pages reserved
[    0.000000]   DMA zone: 3827 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 14280 pages used for memmap
[    0.000000]   DMA32 zone: 875992 pages, LIFO batch:31
[    0.000000]   Normal zone: 16037 pages used for memmap
[    0.000000]   Normal zone: 1156955 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x408
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000
[    0.000000] SMP: Allowing 8 CPUs, 4 hotplug CPUs
[    0.000000] nr_irqs_gsi: 24
[    0.000000] PM: Registered nosave memory: 0000000000001000 - 0000000000006000
[    0.000000] PM: Registered nosave memory: 000000000009d000 - 000000000009e000
[    0.000000] PM: Registered nosave memory: 000000000009e000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000e0000
[    0.000000] PM: Registered nosave memory: 00000000000e0000 - 0000000000100000
[    0.000000] PM: Registered nosave memory: 0000000020000000 - 0000000020200000
[    0.000000] PM: Registered nosave memory: 0000000040000000 - 0000000040200000
[    0.000000] PM: Registered nosave memory: 00000000da99f000 - 00000000dae9f000
[    0.000000] PM: Registered nosave memory: 00000000dae9f000 - 00000000daf9f000
[    0.000000] PM: Registered nosave memory: 00000000daf9f000 - 00000000dafff000
[    0.000000] PM: Registered nosave memory: 00000000db000000 - 00000000dfa00000
[    0.000000] PM: Registered nosave memory: 00000000dfa00000 - 00000000f8000000
[    0.000000] PM: Registered nosave memory: 00000000f8000000 - 00000000fc000000
[    0.000000] PM: Registered nosave memory: 00000000fc000000 - 00000000fec00000
[    0.000000] PM: Registered nosave memory: 00000000fec00000 - 00000000fec01000
[    0.000000] PM: Registered nosave memory: 00000000fec01000 - 00000000fed08000
[    0.000000] PM: Registered nosave memory: 00000000fed08000 - 00000000fed09000
[    0.000000] PM: Registered nosave memory: 00000000fed09000 - 00000000fed10000
[    0.000000] PM: Registered nosave memory: 00000000fed10000 - 00000000fed1a000
[    0.000000] PM: Registered nosave memory: 00000000fed1a000 - 00000000fed1c000
[    0.000000] PM: Registered nosave memory: 00000000fed1c000 - 00000000fed20000
[    0.000000] PM: Registered nosave memory: 00000000fed20000 - 00000000fee00000
[    0.000000] PM: Registered nosave memory: 00000000fee00000 - 00000000fee01000
[    0.000000] PM: Registered nosave memory: 00000000fee01000 - 00000000ffd20000
[    0.000000] PM: Registered nosave memory: 00000000ffd20000 - 0000000100000000
[    0.000000] Allocating PCI resources starting at dfa00000 (gap: dfa00000:18600000)
[    0.000000] Booting paravirtualized kernel on bare hardware
[    0.000000] NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 30 pages/cpu @ffff88002c200000 s91608 r8192 d23080 u262144
[    0.000000] pcpu-alloc: s91608 r8192 d23080 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 2036774
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: noprompt cdrom-detect/try-usb=true file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- BOOT_IMAGE=/casper/vmlinuz 
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Initializing CPU#0
[    0.000000] xsave/xrstor: enabled xstate_bv 0x7, cntxt size 0x340
[    0.000000] Checking aperture...
[    0.000000] No AGP bridge found
[    0.000000] Calgary: detecting Calgary via BIOS EBDA area
[    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[    0.000000] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    0.000000] Placing 64MB software IO TLB between ffff880024000000 - ffff880028000000
[    0.000000] software IO TLB at phys 0x24000000 - 0x28000000
[    0.000000] Memory: 8064904k/8886272k available (5436k kernel code, 617248k absent, 204120k reserved, 2982k data, 884k init)
[    0.000000] SLUB: Genslabs=14, HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] NR_IRQS:4352 nr_irqs:472
[    0.000000] Extended CMOS year: 2000
[    0.000000] Console: colour VGA+ 80x25
[    0.000000] console [tty0] enabled
[    0.000000] allocated 83886080 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[    0.000000] hpet clockevent registered
[    0.000000] Fast TSC calibration using PIT
[    0.010000] Detected 2491.853 MHz processor.
[    0.000003] Calibrating delay loop (skipped), value calculated using timer frequency.. 4983.70 BogoMIPS (lpj=24918530)
[    0.000017] Security Framework initialized
[    0.000028] AppArmor: AppArmor initialized
[    0.000595] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
[    0.002213] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.002888] Mount-cache hash table entries: 256
[    0.002970] Initializing cgroup subsys ns
[    0.002973] Initializing cgroup subsys cpuacct
[    0.002975] Initializing cgroup subsys memory
[    0.002979] Initializing cgroup subsys devices
[    0.002981] Initializing cgroup subsys freezer
[    0.002983] Initializing cgroup subsys net_cls
[    0.002997] CPU: Physical Processor ID: 0
[    0.002998] CPU: Processor Core ID: 0
[    0.003002] CPU: L1 I cache: 32K, L1 D cache: 32K
[    0.003003] CPU: L2 cache: 256K
[    0.003004] CPU: L3 cache: 3072K
[    0.003007] CPU 0/0x0 -> Node 0
[    0.003009] mce: CPU supports 7 MCE banks
[    0.003019] CPU0: Thermal monitoring enabled (TM1)
[    0.003021] CPU 0 MCA banks CMCI:0 CMCI:1 CMCI:3 CMCI:5 CMCI:6
[    0.003030] using mwait in idle threads.
[    0.003031] Performance Events: Nehalem/Corei7 events, Intel PMU driver.
[    0.003035] ... version:                3
[    0.003036] ... bit width:              48
[    0.003037] ... generic registers:      4
[    0.003038] ... value mask:             0000ffffffffffff
[    0.003040] ... max period:             000000007fffffff
[    0.003041] ... fixed-purpose events:   3
[    0.003042] ... event mask:             000000070000000f
[    0.005291] ACPI: Core revision 20090903
[    0.023306] ftrace: converting mcount calls to 0f 1f 44 00 00
[    0.023309] ftrace: allocating 22567 entries in 89 pages
[    0.031442] Not enabling x2apic, Intr-remapping init failed.
[    0.031445] Setting APIC routing to flat
[    0.031789] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.131568] CPU0: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz stepping 07
[    0.247700] Booting processor 1 APIC 0x1 ip 0x6000
[    0.258022] Initializing CPU#1
[    0.407246] CPU: Physical Processor ID: 0
[    0.407248] CPU: Processor Core ID: 0
[    0.407250] CPU: L1 I cache: 32K, L1 D cache: 32K
[    0.407252] CPU: L2 cache: 256K
[    0.407253] CPU: L3 cache: 3072K
[    0.407256] CPU 1/0x1 -> Node 0
[    0.407268] CPU1: Thermal monitoring enabled (TM1)
[    0.407270] CPU 1 MCA banks SHD:0 SHD:1 SHD:3 SHD:5 SHD:6
[    0.407348] CPU1: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz stepping 07
[    0.407357] checking TSC synchronization [CPU#0 -> CPU#1]: passed.
[    0.427385] Booting processor 2 APIC 0x2 ip 0x6000
[    0.437684] Initializing CPU#2
[    0.586837] CPU: Physical Processor ID: 0
[    0.586838] CPU: Processor Core ID: 1
[    0.586840] CPU: L1 I cache: 32K, L1 D cache: 32K
[    0.586842] CPU: L2 cache: 256K
[    0.586842] CPU: L3 cache: 3072K
[    0.586844] CPU 2/0x2 -> Node 0
[    0.586855] CPU2: Thermal monitoring enabled (TM1)
[    0.586857] CPU 2 MCA banks CMCI:0 CMCI:1 CMCI:3 SHD:5 SHD:6
[    0.586914] CPU2: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz stepping 07
[    0.586921] checking TSC synchronization [CPU#0 -> CPU#2]: passed.
[    0.606946] Booting processor 3 APIC 0x3 ip 0x6000
[    0.617245] Initializing CPU#3
[    0.766427] CPU: Physical Processor ID: 0
[    0.766428] CPU: Processor Core ID: 1
[    0.766430] CPU: L1 I cache: 32K, L1 D cache: 32K
[    0.766431] CPU: L2 cache: 256K
[    0.766432] CPU: L3 cache: 3072K
[    0.766434] CPU 3/0x3 -> Node 0
[    0.766444] CPU3: Thermal monitoring enabled (TM1)
[    0.766446] CPU 3 MCA banks SHD:0 SHD:1 SHD:3 SHD:5 SHD:6
[    0.766469] CPU3: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz stepping 07
[    0.766476] checking TSC synchronization [CPU#0 -> CPU#3]: passed.
[    0.786448] Brought up 4 CPUs
[    0.786449] Total of 4 processors activated (19935.08 BogoMIPS).
[    0.788648] CPU0 attaching sched-domain:
[    0.788651]  domain 0: span 0-1 level SIBLING
[    0.788653]   groups: 0 (cpu_power = 589) 1 (cpu_power = 589)
[    0.788657]   domain 1: span 0-3 level MC
[    0.788658]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[    0.788664] CPU1 attaching sched-domain:
[    0.788665]  domain 0: span 0-1 level SIBLING
[    0.788666]   groups: 1 (cpu_power = 589) 0 (cpu_power = 589)
[    0.788670]   domain 1: span 0-3 level MC
[    0.788671]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[    0.788674] CPU2 attaching sched-domain:
[    0.788676]  domain 0: span 2-3 level SIBLING
[    0.788677]   groups: 2 (cpu_power = 589) 3 (cpu_power = 589)
[    0.788680]   domain 1: span 0-3 level MC
[    0.788681]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[    0.788685] CPU3 attaching sched-domain:
[    0.788686]  domain 0: span 2-3 level SIBLING
[    0.788687]   groups: 3 (cpu_power = 589) 2 (cpu_power = 589)
[    0.788690]   domain 1: span 0-3 level MC
[    0.788692]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[    0.788883] devtmpfs: initialized
[    0.789117] regulator: core version 0.5
[    0.789141] Time: 16:40:10  Date: 05/18/12
[    0.789171] NET: Registered protocol family 16
[    0.789240] Trying to unpack rootfs image as initramfs...
[    0.789252] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    0.789254] ACPI: bus type pci registered
[    0.789472] PCI: MCFG configuration 0: base f8000000 segment 0 buses 0 - 63
[    0.789474] PCI: MCFG area at f8000000 reserved in E820
[    0.791029] PCI: Using MMCONFIG at f8000000 - fbffffff
[    0.791030] PCI: Using configuration type 1 for base access
[    0.791609] bio: create slab <bio-0> at 0
[    0.792672] ACPI: EC: EC description table is found, configuring boot EC
[    0.798776] ACPI: BIOS _OSI(Linux) query ignored
[    0.802120] ACPI: Interpreter enabled
[    0.802123] ACPI: (supports S0 S3 S4 S5)
[    0.802144] ACPI: Using IOAPIC for interrupt routing
[    0.808802] ACPI: EC: GPE = 0x11, I/O: command/status = 0x66, data = 0x62
[    0.808966] ACPI: Power Resource [PUBS] (on)
[    0.810841] ACPI: ACPI Dock Station Driver: 3 docks/bays found
[    0.811028] ACPI: PCI Root Bridge [PCI0] (0000:00)
[    0.811095] pci 0000:00:02.0: reg 10 64bit mmio: [0xf0000000-0xf03fffff]
[    0.811100] pci 0000:00:02.0: reg 18 64bit mmio pref: [0xe0000000-0xefffffff]
[    0.811103] pci 0000:00:02.0: reg 20 io port: [0x5000-0x503f]
[    0.811176] pci 0000:00:16.0: reg 10 64bit mmio: [0xf2525000-0xf252500f]
[    0.811222] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
[    0.811226] pci 0000:00:16.0: PME# disabled
[    0.811277] pci 0000:00:19.0: reg 10 32bit mmio: [0xf2500000-0xf251ffff]
[    0.811283] pci 0000:00:19.0: reg 14 32bit mmio: [0xf252b000-0xf252bfff]
[    0.811289] pci 0000:00:19.0: reg 18 io port: [0x5080-0x509f]
[    0.811327] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    0.811331] pci 0000:00:19.0: PME# disabled
[    0.811378] pci 0000:00:1a.0: reg 10 32bit mmio: [0xf252a000-0xf252a3ff]
[    0.811431] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    0.811435] pci 0000:00:1a.0: PME# disabled
[    0.811475] pci 0000:00:1b.0: reg 10 64bit mmio: [0xf2520000-0xf2523fff]
[    0.811516] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    0.811520] pci 0000:00:1b.0: PME# disabled
[    0.811587] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    0.811591] pci 0000:00:1c.0: PME# disabled
[    0.811659] pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
[    0.811663] pci 0000:00:1c.1: PME# disabled
[    0.811733] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    0.811736] pci 0000:00:1c.3: PME# disabled
[    0.811839] pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
[    0.811842] pci 0000:00:1c.4: PME# disabled
[    0.811904] pci 0000:00:1d.0: reg 10 32bit mmio: [0xf2529000-0xf25293ff]
[    0.811956] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    0.811960] pci 0000:00:1d.0: PME# disabled
[    0.812094] pci 0000:00:1f.2: reg 10 io port: [0x50a8-0x50af]
[    0.812100] pci 0000:00:1f.2: reg 14 io port: [0x50bc-0x50bf]
[    0.812106] pci 0000:00:1f.2: reg 18 io port: [0x50a0-0x50a7]
[    0.812111] pci 0000:00:1f.2: reg 1c io port: [0x50b8-0x50bb]
[    0.812117] pci 0000:00:1f.2: reg 20 io port: [0x5060-0x507f]
[    0.812123] pci 0000:00:1f.2: reg 24 32bit mmio: [0xf2528000-0xf25287ff]
[    0.812152] pci 0000:00:1f.2: PME# supported from D3hot
[    0.812156] pci 0000:00:1f.2: PME# disabled
[    0.812184] pci 0000:00:1f.3: reg 10 64bit mmio: [0xf2524000-0xf25240ff]
[    0.812197] pci 0000:00:1f.3: reg 20 io port: [0xefa0-0xefbf]
[    0.812359] pci 0000:03:00.0: reg 10 64bit mmio: [0xf2400000-0xf2401fff]
[    0.812486] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
[    0.812493] pci 0000:03:00.0: PME# disabled
[    0.812562] pci 0000:00:1c.1: bridge 32bit mmio: [0xf2400000-0xf24fffff]
[    0.812605] pci 0000:00:1c.3: bridge io port: [0x4000-0x4fff]
[    0.812609] pci 0000:00:1c.3: bridge 32bit mmio: [0xf1c00000-0xf23fffff]
[    0.812615] pci 0000:00:1c.3: bridge 64bit mmio pref: [0xf0400000-0xf0bfffff]
[    0.812837] pci 0000:0d:00.0: reg 10 32bit mmio: [0xf1400000-0xf14000ff]
[    0.812953] pci 0000:0d:00.0: supports D1 D2
[    0.812954] pci 0000:0d:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.812960] pci 0000:0d:00.0: PME# disabled
[    0.813039] pci 0000:00:1c.4: bridge io port: [0x3000-0x3fff]
[    0.813043] pci 0000:00:1c.4: bridge 32bit mmio: [0xf1400000-0xf1bfffff]
[    0.813052] pci 0000:00:1c.4: bridge 64bit mmio pref: [0xf0c00000-0xf13fffff]
[    0.813077] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
[    0.813199] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.EXP1._PRT]
[    0.813255] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.EXP2._PRT]
[    0.813309] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.EXP4._PRT]
[    0.813372] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.EXP5._PRT]
[    0.816139] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 9 10 *11)
[    0.816293] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 *7 9 10 11)
[    0.816447] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 9 10 *11)
[    0.816596] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 10 *11)
[    0.816743] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 9 *10 11)
[    0.816880] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 9 10 11) *0, disabled.
[    0.817026] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 *7 9 10 11)
[    0.817173] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 9 *10 11)
[    0.817270] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[    0.817276] vgaarb: loaded
[    0.817351] SCSI subsystem initialized
[    0.817434] libata version 3.00 loaded.
[    0.817490] usbcore: registered new interface driver usbfs
[    0.817497] usbcore: registered new interface driver hub
[    0.817517] usbcore: registered new device driver usb
[    0.817633] ACPI: WMI: Mapper loaded
[    0.817634] PCI: Using ACPI for IRQ routing
[    0.818034] NetLabel: Initializing
[    0.818035] NetLabel:  domain hash size = 128
[    0.818036] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.818045] NetLabel:  unlabeled traffic allowed by default
[    0.818079] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
[    0.818084] hpet0: 8 comparators, 64-bit 14.318180 MHz counter
[    0.820093] Switching to clocksource tsc
[    2.689543] AppArmor: AppArmor Filesystem Enabled
[    2.689555] pnp: PnP ACPI init
[    2.689567] ACPI: bus type pnp registered
[    2.691958] pnp: PnP ACPI: found 11 devices
[    2.691961] ACPI: ACPI bus type pnp unregistered
[    2.691976] system 00:00: iomem range 0x0-0x9ffff could not be reserved
[    2.691978] system 00:00: iomem range 0xc0000-0xc3fff has been reserved
[    2.691980] system 00:00: iomem range 0xc4000-0xc7fff has been reserved
[    2.691982] system 00:00: iomem range 0xc8000-0xcbfff has been reserved
[    2.691984] system 00:00: iomem range 0xcc000-0xcffff has been reserved
[    2.691985] system 00:00: iomem range 0xd0000-0xd3fff has been reserved
[    2.691987] system 00:00: iomem range 0xd4000-0xd7fff has been reserved
[    2.691992] system 00:00: iomem range 0xd8000-0xdbfff has been reserved
[    2.691994] system 00:00: iomem range 0xdc000-0xdffff has been reserved
[    2.691996] system 00:00: iomem range 0xe0000-0xe3fff could not be reserved
[    2.691998] system 00:00: iomem range 0xe4000-0xe7fff could not be reserved
[    2.691999] system 00:00: iomem range 0xe8000-0xebfff could not be reserved
[    2.692001] system 00:00: iomem range 0xec000-0xeffff could not be reserved
[    2.692003] system 00:00: iomem range 0xf0000-0xfffff could not be reserved
[    2.692005] system 00:00: iomem range 0x100000-0xdf9fffff could not be reserved
[    2.692008] system 00:00: iomem range 0xfec00000-0xfed3ffff could not be reserved
[    2.692010] system 00:00: iomem range 0xfed4c000-0xffffffff could not be reserved
[    2.692018] system 00:02: ioport range 0x400-0x47f has been reserved
[    2.692020] system 00:02: ioport range 0x500-0x57f has been reserved
[    2.692022] system 00:02: ioport range 0x800-0x80f has been reserved
[    2.692024] system 00:02: ioport range 0x15e0-0x15ef has been reserved
[    2.692026] system 00:02: ioport range 0x1600-0x167f has been reserved
[    2.692028] system 00:02: iomem range 0xf8000000-0xfbffffff has been reserved
[    2.692030] system 00:02: iomem range 0x0-0xfff could not be reserved
[    2.692052] system 00:02: iomem range 0xfed1c000-0xfed1ffff has been reserved
[    2.692054] system 00:02: iomem range 0xfed10000-0xfed13fff has been reserved
[    2.692056] system 00:02: iomem range 0xfed18000-0xfed18fff has been reserved
[    2.692059] system 00:02: iomem range 0xfed19000-0xfed19fff has been reserved
[    2.692061] system 00:02: iomem range 0xfed45000-0xfed4bfff has been reserved
[    2.696756] pci 0000:00:1c.0: PCI bridge, secondary bus 0000:02
[    2.696758] pci 0000:00:1c.0:   IO window: disabled
[    2.696763] pci 0000:00:1c.0:   MEM window: disabled
[    2.696767] pci 0000:00:1c.0:   PREFETCH window: disabled
[    2.696774] pci 0000:00:1c.1: PCI bridge, secondary bus 0000:03
[    2.696775] pci 0000:00:1c.1:   IO window: disabled
[    2.696780] pci 0000:00:1c.1:   MEM window: 0xf2400000-0xf24fffff
[    2.696784] pci 0000:00:1c.1:   PREFETCH window: disabled
[    2.696790] pci 0000:00:1c.3: PCI bridge, secondary bus 0000:05
[    2.696793] pci 0000:00:1c.3:   IO window: 0x4000-0x4fff
[    2.696798] pci 0000:00:1c.3:   MEM window: 0xf1c00000-0xf23fffff
[    2.696803] pci 0000:00:1c.3:   PREFETCH window: 0x000000f0400000-0x000000f0bfffff
[    2.696809] pci 0000:00:1c.4: PCI bridge, secondary bus 0000:0d
[    2.696813] pci 0000:00:1c.4:   IO window: 0x3000-0x3fff
[    2.696819] pci 0000:00:1c.4:   MEM window: 0xf1400000-0xf1bfffff
[    2.696824] pci 0000:00:1c.4:   PREFETCH window: 0x000000f0c00000-0x000000f13fffff
[    2.696844]   alloc irq_desc for 16 on node -1
[    2.696846]   alloc kstat_irqs on node -1
[    2.696852] pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[    2.696859] pci 0000:00:1c.0: setting latency timer to 64
[    2.696868]   alloc irq_desc for 17 on node -1
[    2.696870]   alloc kstat_irqs on node -1
[    2.696872] pci 0000:00:1c.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
[    2.696876] pci 0000:00:1c.1: setting latency timer to 64
[    2.696885]   alloc irq_desc for 19 on node -1
[    2.696886]   alloc kstat_irqs on node -1
[    2.696888] pci 0000:00:1c.3: PCI INT D -> GSI 19 (level, low) -> IRQ 19
[    2.696892] pci 0000:00:1c.3: setting latency timer to 64
[    2.696903] pci 0000:00:1c.4: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[    2.696908] pci 0000:00:1c.4: setting latency timer to 64
[    2.696912] pci_bus 0000:00: resource 0 io:  [0x00-0xffff]
[    2.696914] pci_bus 0000:00: resource 1 mem: [0x000000-0xffffffffffffffff]
[    2.696916] pci_bus 0000:03: resource 1 mem: [0xf2400000-0xf24fffff]
[    2.696918] pci_bus 0000:05: resource 0 io:  [0x4000-0x4fff]
[    2.696919] pci_bus 0000:05: resource 1 mem: [0xf1c00000-0xf23fffff]
[    2.696921] pci_bus 0000:05: resource 2 pref mem [0xf0400000-0xf0bfffff]
[    2.696923] pci_bus 0000:0d: resource 0 io:  [0x3000-0x3fff]
[    2.696924] pci_bus 0000:0d: resource 1 mem: [0xf1400000-0xf1bfffff]
[    2.696926] pci_bus 0000:0d: resource 2 pref mem [0xf0c00000-0xf13fffff]
[    2.696951] NET: Registered protocol family 2
[    2.697145] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    2.698847] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
[    2.700221] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    2.700376] TCP: Hash tables configured (established 524288 bind 65536)
[    2.700378] TCP reno registered
[    2.700452] NET: Registered protocol family 1
[    2.700464] pci 0000:00:02.0: Boot video device
[    2.700752] Scanning for low memory corruption every 60 seconds
[    2.700847] audit: initializing netlink socket (disabled)
[    2.700854] type=2000 audit(1337359212.553:1): initialized
[    2.707785] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    2.708756] VFS: Disk quotas dquot_6.5.2
[    2.708794] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    2.709181] fuse init (API version 7.13)
[    2.709235] msgmni has been set to 15751
[    2.709399] alg: No test for stdrng (krng)
[    2.709436] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[    2.709438] io scheduler noop registered
[    2.709439] io scheduler anticipatory registered
[    2.709440] io scheduler deadline registered
[    2.709481] io scheduler cfq registered (default)
[    2.709587]   alloc irq_desc for 24 on node -1
[    2.709589]   alloc kstat_irqs on node -1
[    2.709599] pcieport 0000:00:1c.0: irq 24 for MSI/MSI-X
[    2.709609] pcieport 0000:00:1c.0: setting latency timer to 64
[    2.709708]   alloc irq_desc for 25 on node -1
[    2.709709]   alloc kstat_irqs on node -1
[    2.709716] pcieport 0000:00:1c.1: irq 25 for MSI/MSI-X
[    2.709724] pcieport 0000:00:1c.1: setting latency timer to 64
[    2.709821]   alloc irq_desc for 26 on node -1
[    2.709822]   alloc kstat_irqs on node -1
[    2.709829] pcieport 0000:00:1c.3: irq 26 for MSI/MSI-X
[    2.709837] pcieport 0000:00:1c.3: setting latency timer to 64
[    2.709968]   alloc irq_desc for 27 on node -1
[    2.709970]   alloc kstat_irqs on node -1
[    2.709979] pcieport 0000:00:1c.4: irq 27 for MSI/MSI-X
[    2.709990] pcieport 0000:00:1c.4: setting latency timer to 64
[    2.710079] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    2.710092] Firmware did not grant requested _OSC control
[    2.710111] Firmware did not grant requested _OSC control
[    2.710134] Firmware did not grant requested _OSC control
[    2.710149] Firmware did not grant requested _OSC control
[    2.710163] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    2.710355] ACPI: AC Adapter [AC] (on-line)
[    2.710408] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input0
[    2.710569] ACPI: Lid Switch [LID]
[    2.710596] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
[    2.710600] ACPI: Sleep Button [SLPB]
[    2.710634] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
[    2.710636] ACPI: Power Button [PWRF]
[    2.711402] ACPI: SSDT 00000000dae8c018 008C0 (v01  PmRef  Cpu0Cst 00003001 INTL 20061109)
[    2.712328] processor LNXCPU:00: registered as cooling_device0
[    2.712749] ACPI: SSDT 00000000dae8da98 00303 (v01  PmRef    ApIst 00003000 INTL 20061109)
[    2.713173] ACPI: SSDT 00000000dae8bd98 00119 (v01  PmRef    ApCst 00003000 INTL 20061109)
[    2.714024] processor LNXCPU:01: registered as cooling_device1
[    2.974193] processor LNXCPU:02: registered as cooling_device2
[    2.975242] processor LNXCPU:03: registered as cooling_device3
[    2.977488] Freeing initrd memory: 9658k freed
[    2.979467] thermal LNXTHERM:01: registered as thermal_zone0
[    2.979474] ACPI: Thermal Zone [THM0] (64 C)
[    2.981085] Linux agpgart interface v0.103
[    2.981107] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    2.982442] brd: module loaded
[    2.982881] loop: module loaded
[    2.982960] input: Macintosh mouse button emulation as /devices/virtual/input/input3
[    2.983444] Fixed MDIO Bus: probed
[    2.983512] PPP generic driver version 2.4.2
[    2.983533] tun: Universal TUN/TAP device driver, 1.6
[    2.983534] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[    2.983631] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    2.984871] ehci_hcd 0000:00:1a.0: power state changed by ACPI to D0
[    2.986671] ehci_hcd 0000:00:1a.0: power state changed by ACPI to D0
[    2.986679] ehci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[    2.986714] ehci_hcd 0000:00:1a.0: setting latency timer to 64
[    2.986736] ehci_hcd 0000:00:1a.0: EHCI Host Controller
[    2.986792] ehci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 1
[    2.986821] ehci_hcd 0000:00:1a.0: debug port 2
[    2.990694] ehci_hcd 0000:00:1a.0: cache line size of 32 is not supported
[    2.990707] ehci_hcd 0000:00:1a.0: irq 16, io mem 0xf252a000
[    2.994328] ACPI: Battery Slot [BAT0] (battery present)
[    3.007668] ehci_hcd 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[    3.007819] usb usb1: configuration #1 chosen from 1 choice
[    3.007838] hub 1-0:1.0: USB hub found
[    3.007844] hub 1-0:1.0: 3 ports detected
[    3.008079] ehci_hcd 0000:00:1d.0: power state changed by ACPI to D0
[    3.008241] ehci_hcd 0000:00:1d.0: power state changed by ACPI to D0
[    3.008247]   alloc irq_desc for 23 on node -1
[    3.008249]   alloc kstat_irqs on node -1
[    3.008253] ehci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
[    3.008263] ehci_hcd 0000:00:1d.0: setting latency timer to 64
[    3.008266] ehci_hcd 0000:00:1d.0: EHCI Host Controller
[    3.008291] ehci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
[    3.008314] ehci_hcd 0000:00:1d.0: debug port 2
[    3.012191] ehci_hcd 0000:00:1d.0: cache line size of 32 is not supported
[    3.012201] ehci_hcd 0000:00:1d.0: irq 23, io mem 0xf2529000
[    3.027616] ehci_hcd 0000:00:1d.0: USB 2.0 started, EHCI 1.00
[    3.027761] usb usb2: configuration #1 chosen from 1 choice
[    3.027777] hub 2-0:1.0: USB hub found
[    3.027781] hub 2-0:1.0: 3 ports detected
[    3.027815] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    3.027823] uhci_hcd: USB Universal Host Controller Interface driver
[    3.027865] PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    3.031439] serio: i8042 KBD port at 0x60,0x64 irq 1
[    3.031444] serio: i8042 AUX port at 0x60,0x64 irq 12
[    3.031512] mice: PS/2 mouse device common for all mice
[    3.031586] rtc_cmos 00:07: RTC can wake from S4
[    3.031612] rtc_cmos 00:07: rtc core: registered rtc_cmos as rtc0
[    3.031639] rtc0: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
[    3.031715] device-mapper: uevent: version 1.0.3
[    3.031779] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
[    3.031845] device-mapper: multipath: version 1.1.0 loaded
[    3.031847] device-mapper: multipath round-robin: version 1.0.0 loaded
[    3.032059] cpuidle: using governor ladder
[    3.032169] cpuidle: using governor menu
[    3.032378] TCP cubic registered
[    3.032476] NET: Registered protocol family 10
[    3.032920] NET: Registered protocol family 17
[    3.035676] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input4
[    3.040284] PM: Resume from disk failed.
[    3.040292] registered taskstats version 1
[    3.040711]   Magic number: 12:176:692
[    3.040792] rtc_cmos 00:07: setting system clock to 2012-05-18 16:40:13 UTC (1337359213)
[    3.040794] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[    3.040795] EDD information not available.
[    3.040858] Freeing unused kernel memory: 884k freed
[    3.040957] Write protecting the kernel read-only data: 7716k
[    3.052794] udev: starting version 151
[    3.064733] vga16fb: initializing
[    3.064736] vga16fb: mapped to 0xffff8800000a0000
[    3.064779] fb0: VGA16 VGA frame buffer device
[    3.069101] ahci 0000:00:1f.2: version 3.0
[    3.069118] ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
[    3.069163]   alloc irq_desc for 28 on node -1
[    3.069165]   alloc kstat_irqs on node -1
[    3.069175] ahci 0000:00:1f.2: irq 28 for MSI/MSI-X
[    3.069203] ahci: SSS flag set, parallel bus scan disabled
[    3.087539] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x13 impl SATA mode
[    3.087543] ahci 0000:00:1f.2: flags: 64bit ncq sntf ilck stag pm led clo pio slum part ems sxs apst 
[    3.087549] ahci 0000:00:1f.2: setting latency timer to 64
[    3.128065] scsi0 : ahci
[    3.128162] scsi1 : ahci
[    3.128215] scsi2 : ahci
[    3.128267] scsi3 : ahci
[    3.128317] scsi4 : ahci
[    3.128363] scsi5 : ahci
[    3.129401] ata1: SATA max UDMA/133 abar m2048@0xf2528000 port 0xf2528100 irq 28
[    3.129404] ata2: SATA max UDMA/133 abar m2048@0xf2528000 port 0xf2528180 irq 28
[    3.129405] ata3: DUMMY
[    3.129406] ata4: DUMMY
[    3.129408] ata5: SATA max UDMA/133 abar m2048@0xf2528000 port 0xf2528300 irq 28
[    3.129409] ata6: DUMMY
[    3.161700] Console: switching to colour frame buffer device 80x30
[    3.327618] usb 1-1: new high speed USB device using ehci_hcd and address 2
[    3.477234] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[    3.477242] usb 1-1: configuration #1 chosen from 1 choice
[    3.477334] hub 1-1:1.0: USB hub found
[    3.477468] hub 1-1:1.0: 6 ports detected
[    3.478134] ata1.00: ACPI cmd ef/02:00:00:00:00:a0 (SET FEATURES) succeeded
[    3.478137] ata1.00: ACPI cmd f5/00:00:00:00:00:a0 (SECURITY FREEZE LOCK) filtered out
[    3.478139] ata1.00: ACPI cmd ef/10:03:00:00:00:a0 (SET FEATURES) filtered out
[    3.479017] ata1.00: ATA-8: HITACHI HTS723232A7A364, EC2ZB70R, max UDMA/100
[    3.479026] ata1.00: 625142448 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    3.480131] ata1.00: ACPI cmd ef/02:00:00:00:00:a0 (SET FEATURES) succeeded
[    3.480139] ata1.00: ACPI cmd f5/00:00:00:00:00:a0 (SECURITY FREEZE LOCK) filtered out
[    3.480146] ata1.00: ACPI cmd ef/10:03:00:00:00:a0 (SET FEATURES) filtered out
[    3.480995] ata1.00: configured for UDMA/100
[    3.506732] scsi 0:0:0:0: Direct-Access     ATA      HITACHI HTS72323 EC2Z PQ: 0 ANSI: 5
[    3.506855] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    3.506877] sd 0:0:0:0: [sda] 625142448 512-byte logical blocks: (320 GB/298 GiB)
[    3.506916] sd 0:0:0:0: [sda] Write Protect is off
[    3.506918] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    3.506930] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    3.507003]  sda:
[    3.606332] usb 2-1: new high speed USB device using ehci_hcd and address 2
[    3.756652] usb 2-1: configuration #1 chosen from 1 choice
[    3.756855] hub 2-1:1.0: USB hub found
[    3.756921] hub 2-1:1.0: 8 ports detected
[    3.835931] usb 1-1.3: new full speed USB device using ehci_hcd and address 3
[    3.835972]  sda1 sda2 < sda5 > sda3
[    3.860292] sd 0:0:0:0: [sda] Attached SCSI disk
[    3.875737] ata2: SATA link down (SStatus 0 SControl 300)
[    3.956971] usb 1-1.3: configuration #1 chosen from 1 choice
[    4.045430] usb 1-1.6: new high speed USB device using ehci_hcd and address 4
[    4.171817] usb 1-1.6: configuration #1 chosen from 1 choice
[    4.244894] ata5: SATA link down (SStatus 0 SControl 300)
[    4.245135] usb 2-1.2: new high speed USB device using ehci_hcd and address 3
[    4.366453] usb 2-1.2: configuration #1 chosen from 1 choice
[    4.370262] Initializing USB Mass Storage driver...
[    4.370442] scsi6 : SCSI emulation for USB Mass Storage devices
[    4.370545] usb-storage: device found at 3
[    4.370546] usb-storage: waiting for device to settle before scanning
[    4.370553] usbcore: registered new interface driver usb-storage
[    4.370555] USB Mass Storage support registered.
[    4.727336] xor: automatically using best checksumming function: generic_sse
[    4.774230]    generic_sse: 14178.800 MB/sec
[    4.774232] xor: using function: generic_sse (14178.800 MB/sec)
[    4.775554] device-mapper: dm-raid45: initialized v0.2594b
[    4.989448] EXT4-fs (sda1): mounted filesystem with ordered data mode
[    9.353456] usb-storage: device scan complete
[    9.353975] scsi 6:0:0:0: Direct-Access     Real-Way RW8021 PENDRIVE  1.0  PQ: 0 ANSI: 2
[    9.354686] sd 6:0:0:0: Attached scsi generic sg1 type 0
[    9.355404] sd 6:0:0:0: [sdb] 2004992 512-byte logical blocks: (1.02 GB/979 MiB)
[    9.356006] sd 6:0:0:0: [sdb] Write Protect is off
[    9.356013] sd 6:0:0:0: [sdb] Mode Sense: 0b 00 00 08
[    9.356018] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[    9.358354] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[    9.358362]  sdb: sdb1
[    9.361517] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[    9.361529] sd 6:0:0:0: [sdb] Attached SCSI removable disk
[    9.757460] aufs 2-standalone.tree-20091207
[    9.766680] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[   17.825928] udev: starting version 151
[   18.125424] cfg80211: Calling CRDA to update world regulatory domain
[   18.209240] Linux video capture interface: v2.00
[   18.274884] uvcvideo: Found UVC 1.00 device Integrated Camera (04f2:b217)
[   18.275312] Non-volatile memory driver v1.3
[   18.276577] input: Integrated Camera as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.6/1-1.6:1.0/input/input5
[   18.276616] usbcore: registered new interface driver uvcvideo
[   18.276618] USB Video Class driver (v0.1.0)
[   18.332858] iwlagn: Intel(R) Wireless WiFi Link AGN driver for Linux, 1.3.27k
[   18.332860] iwlagn: Copyright(c) 2003-2009 Intel Corporation
[   18.332905] iwlagn 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
[   18.332913] iwlagn 0000:03:00.0: setting latency timer to 64
[   18.332954] iwlagn 0000:03:00.0: Detected Intel Wireless WiFi Link 1000 Series BGN REV=0x6C
[   18.340168] tpm_tis 00:0a: 1.2 TPM (device-id 0x0, rev-id 78)
[   18.341325] thinkpad_acpi: ThinkPad ACPI Extras v0.24
[   18.341328] thinkpad_acpi: http://ibm-acpi.sf.net/
[   18.341329] thinkpad_acpi: ThinkPad BIOS 8DET55WW (1.25 ), EC unknown
[   18.341331] thinkpad_acpi: Lenovo ThinkPad X220, model 42872WU
[   18.341920] thinkpad_acpi: radio switch found; radios are enabled
[   18.342176] thinkpad_acpi: possible tablet mode switch found; ThinkPad in laptop mode
[   18.342457] thinkpad_acpi: This ThinkPad has standard ACPI backlight brightness control, supported by the ACPI video driver
[   18.342459] thinkpad_acpi: Disabling thinkpad-acpi brightness events by default...
[   18.350832] thinkpad_acpi: rfkill switch tpacpi_bluetooth_sw: radio is blocked
[   18.351738] Registered led device: tpacpi::thinklight
[   18.351849] Registered led device: tpacpi::power
[   18.352149] Registered led device: tpacpi::standby
[   18.352268] Registered led device: tpacpi::thinkvantage
[   18.352544] thinkpad_acpi: Standard ACPI backlight interface available, not loading native one.
[   18.353091] thinkpad_acpi: Console audio control enabled, mode: monitor (read only)
[   18.357607] input: ThinkPad Extra Buttons as /devices/platform/thinkpad_acpi/input/input6
[   18.366035] iwlagn 0000:03:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
[   18.366089]   alloc irq_desc for 29 on node -1
[   18.366091]   alloc kstat_irqs on node -1
[   18.366107] iwlagn 0000:03:00.0: irq 29 for MSI/MSI-X
[   18.385167] cfg80211: World regulatory domain updated:
[   18.385170] 	(start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
[   18.385173] 	(2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[   18.385175] 	(2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[   18.385177] 	(2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[   18.385179] 	(5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[   18.385181] 	(5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[   18.421475] phy0: Selected rate control algorithm 'iwl-agn-rs'
[   18.479524] iwlagn 0000:03:00.0: firmware: requesting iwlwifi-1000-3.ucode
[   18.533209] iwlagn 0000:03:00.0: loaded firmware version 128.50.3.1
[   18.614789]   alloc irq_desc for 22 on node -1
[   18.614792]   alloc kstat_irqs on node -1
[   18.614798] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
[   18.614849] HDA Intel 0000:00:1b.0: setting latency timer to 64
[   18.626989] Registered led device: iwl-phy0::radio
[   18.627041] Registered led device: iwl-phy0::assoc
[   18.627055] Registered led device: iwl-phy0::RX
[   18.627081] Registered led device: iwl-phy0::TX
[   18.652850] ADDRCONF(NETDEV_UP): wlan0: link is not ready
[   18.757860] lp: driver loaded but no devices found
[   18.769510] ppdev: user-space parallel port driver
[   18.801947] Unable to query Synaptics hardware.
[   18.831644] usb 1-1.4: new full speed USB device using ehci_hcd and address 5
[   18.946926] usb 1-1.4: configuration #1 chosen from 1 choice
[   18.972175] Bluetooth: Core ver 2.15
[   18.972229] NET: Registered protocol family 31
[   18.972231] Bluetooth: HCI device and connection manager initialized
[   18.972233] Bluetooth: HCI socket layer initialized
[   18.988322] Bluetooth: Generic Bluetooth USB driver ver 0.6
[   18.988773] usbcore: registered new interface driver btusb
[   19.044668] Bluetooth: L2CAP ver 2.14
[   19.044670] Bluetooth: L2CAP socket layer initialized
[   19.104588] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
[   19.104590] Bluetooth: BNEP filters: protocol multicast
[   19.153873] Bridge firewalling registered
[   19.170539] Bluetooth: SCO (Voice Link) ver 0.6
[   19.170541] Bluetooth: SCO socket layer initialized
[   19.245834] Bluetooth: RFCOMM TTY layer initialized
[   19.245839] Bluetooth: RFCOMM socket layer initialized
[   19.245840] Bluetooth: RFCOMM ver 1.11
[   19.351209] CPU0 attaching NULL sched-domain.
[   19.351214] CPU1 attaching NULL sched-domain.
[   19.351217] CPU2 attaching NULL sched-domain.
[   19.351219] CPU3 attaching NULL sched-domain.
[   19.376986] input: PS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input7
[   19.422107] CPU0 attaching sched-domain:
[   19.422110]  domain 0: span 0-1 level SIBLING
[   19.422112]   groups: 0 (cpu_power = 589) 1 (cpu_power = 589)
[   19.422115]   domain 1: span 0-3 level MC
[   19.422116]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   19.422120] CPU1 attaching sched-domain:
[   19.422121]  domain 0: span 0-1 level SIBLING
[   19.422123]   groups: 1 (cpu_power = 589) 0 (cpu_power = 589)
[   19.422125]   domain 1: span 0-3 level MC
[   19.422126]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   19.422130] CPU2 attaching sched-domain:
[   19.422131]  domain 0: span 2-3 level SIBLING
[   19.422132]   groups: 2 (cpu_power = 589) 3 (cpu_power = 589)
[   19.422134]   domain 1: span 0-3 level MC
[   19.422135]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   19.422138] CPU3 attaching sched-domain:
[   19.422139]  domain 0: span 2-3 level SIBLING
[   19.422140]   groups: 3 (cpu_power = 589) 2 (cpu_power = 589)
[   19.422143]   domain 1: span 0-3 level MC
[   19.422144]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   19.422403] CPU0 attaching NULL sched-domain.
[   19.422405] CPU1 attaching NULL sched-domain.
[   19.422406] CPU2 attaching NULL sched-domain.
[   19.422407] CPU3 attaching NULL sched-domain.
[   19.471883] CPU0 attaching sched-domain:
[   19.471886]  domain 0: span 0-1 level SIBLING
[   19.471889]   groups: 0 (cpu_power = 589) 1 (cpu_power = 589)
[   19.471894]   domain 1: span 0-3 level MC
[   19.471896]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   19.471902] CPU1 attaching sched-domain:
[   19.471904]  domain 0: span 0-1 level SIBLING
[   19.471906]   groups: 1 (cpu_power = 589) 0 (cpu_power = 589)
[   19.471911]   domain 1: span 0-3 level MC
[   19.471912]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   19.471918] CPU2 attaching sched-domain:
[   19.471919]  domain 0: span 2-3 level SIBLING
[   19.471921]   groups: 2 (cpu_power = 589) 3 (cpu_power = 589)
[   19.471926]   domain 1: span 0-3 level MC
[   19.471928]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   19.471933] CPU3 attaching sched-domain:
[   19.471934]  domain 0: span 2-3 level SIBLING
[   19.471936]   groups: 3 (cpu_power = 589) 2 (cpu_power = 589)
[   19.471941]   domain 1: span 0-3 level MC
[   19.471942]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   23.838033] CPU0 attaching NULL sched-domain.
[   23.838036] CPU1 attaching NULL sched-domain.
[   23.838037] CPU2 attaching NULL sched-domain.
[   23.838039] CPU3 attaching NULL sched-domain.
[   23.890608] CPU0 attaching sched-domain:
[   23.890611]  domain 0: span 0-1 level SIBLING
[   23.890613]   groups: 0 (cpu_power = 589) 1 (cpu_power = 589)
[   23.890616]   domain 1: span 0-3 level MC
[   23.890617]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   23.890622] CPU1 attaching sched-domain:
[   23.890623]  domain 0: span 0-1 level SIBLING
[   23.890624]   groups: 1 (cpu_power = 589) 0 (cpu_power = 589)
[   23.890627]   domain 1: span 0-3 level MC
[   23.890628]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   23.890631] CPU2 attaching sched-domain:
[   23.890632]  domain 0: span 2-3 level SIBLING
[   23.890633]   groups: 2 (cpu_power = 589) 3 (cpu_power = 589)
[   23.890636]   domain 1: span 0-3 level MC
[   23.890637]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   23.890640] CPU3 attaching sched-domain:
[   23.890641]  domain 0: span 2-3 level SIBLING
[   23.890642]   groups: 3 (cpu_power = 589) 2 (cpu_power = 589)
[   23.890645]   domain 1: span 0-3 level MC
[   23.890646]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   23.890907] CPU0 attaching NULL sched-domain.
[   23.890909] CPU1 attaching NULL sched-domain.
[   23.890910] CPU2 attaching NULL sched-domain.
[   23.890911] CPU3 attaching NULL sched-domain.
[   23.950560] CPU0 attaching sched-domain:
[   23.950564]  domain 0: span 0-1 level SIBLING
[   23.950567]   groups: 0 (cpu_power = 589) 1 (cpu_power = 589)
[   23.950572]   domain 1: span 0-3 level MC
[   23.950574]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   23.950581] CPU1 attaching sched-domain:
[   23.950583]  domain 0: span 0-1 level SIBLING
[   23.950585]   groups: 1 (cpu_power = 589) 0 (cpu_power = 589)
[   23.950589]   domain 1: span 0-3 level MC
[   23.950591]    groups: 0-1 (cpu_power = 1178) 2-3 (cpu_power = 1178)
[   23.950597] CPU2 attaching sched-domain:
[   23.950598]  domain 0: span 2-3 level SIBLING
[   23.950600]   groups: 2 (cpu_power = 589) 3 (cpu_power = 589)
[   23.950605]   domain 1: span 0-3 level MC
[   23.950607]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   23.950612] CPU3 attaching sched-domain:
[   23.950614]  domain 0: span 2-3 level SIBLING
[   23.950616]   groups: 3 (cpu_power = 589) 2 (cpu_power = 589)
[   23.950620]   domain 1: span 0-3 level MC
[   23.950622]    groups: 2-3 (cpu_power = 1178) 0-1 (cpu_power = 1178)
[   71.172915] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
[   71.173819] SGI XFS Quota Management subsystem
[   71.241675] XFS mounting filesystem sda3
[   71.602949] Ending clean XFS mount for filesystem: sda3
[  242.149938] INFO: task copy-files:3386 blocked for more than 120 seconds.
[  242.149946] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.149951] copy-files    D 0000000000000000     0  3386   3337 0x00000000
[  242.149961]  ffff880214b17c88 0000000000000086 0000000000015c00 0000000000015c00
[  242.149970]  ffff880208e43198 ffff880214b17fd8 0000000000015c00 ffff880208e42de0
[  242.149978]  0000000000015c00 ffff880214b17fd8 0000000000015c00 ffff880208e43198
[  242.149986] Call Trace:
[  242.150027]  [<ffffffffa034d103>] xlog_grant_log_space+0x173/0x3f0 [xfs]
[  242.150055]  [<ffffffffa035fa4a>] ? kmem_zone_zalloc+0x3a/0x50 [xfs]
[  242.150066]  [<ffffffff8105cd70>] ? default_wake_function+0x0/0x20
[  242.150095]  [<ffffffffa034d454>] xfs_log_reserve+0xd4/0xe0 [xfs]
[  242.150124]  [<ffffffffa0357b20>] xfs_trans_reserve+0xa0/0x210 [xfs]
[  242.150151]  [<ffffffffa035ca45>] xfs_free_eofblocks+0x185/0x2a0 [xfs]
[  242.150179]  [<ffffffffa035d578>] xfs_release+0x128/0x1f0 [xfs]
[  242.150207]  [<ffffffffa0364f35>] xfs_file_release+0x15/0x20 [xfs]
[  242.150217]  [<ffffffff81145ea5>] __fput+0xf5/0x210
[  242.150225]  [<ffffffff81145fe5>] fput+0x25/0x30
[  242.150232]  [<ffffffff8114210d>] filp_close+0x5d/0x90
[  242.150239]  [<ffffffff811421f7>] sys_close+0xb7/0x120
[  242.150249]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.150255] INFO: task copy-files:3387 blocked for more than 120 seconds.
[  242.150259] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.150263] copy-files    D 0000000000000000     0  3387   3337 0x00000000
[  242.150271]  ffff8802149cbd38 0000000000000086 0000000000015c00 0000000000015c00
[  242.150279]  ffff880208e44888 ffff8802149cbfd8 0000000000015c00 ffff880208e444d0
[  242.150286]  0000000000015c00 ffff8802149cbfd8 0000000000015c00 ffff880208e44888
[  242.150293] Call Trace:
[  242.150304]  [<ffffffff81546477>] __mutex_lock_slowpath+0xf7/0x180
[  242.150313]  [<ffffffff81145c1a>] ? get_empty_filp+0x7a/0x170
[  242.150321]  [<ffffffff8154635b>] mutex_lock+0x2b/0x50
[  242.150329]  [<ffffffff81152d49>] do_filp_open+0x3d9/0xba0
[  242.150336]  [<ffffffff81148b74>] ? cp_new_stat+0xe4/0x100
[  242.150345]  [<ffffffff8115e7fa>] ? alloc_fd+0x10a/0x150
[  242.150352]  [<ffffffff811422c9>] do_sys_open+0x69/0x170
[  242.150359]  [<ffffffff81142410>] sys_open+0x20/0x30
[  242.150367]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.150373] INFO: task copy-files:3390 blocked for more than 120 seconds.
[  242.150377] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.150381] copy-files    D 0000000000000000     0  3390   3337 0x00000000
[  242.150388]  ffff880212051c88 0000000000000086 0000000000015c00 0000000000015c00
[  242.150395]  ffff8801ed7903b8 ffff880212051fd8 0000000000015c00 ffff8801ed790000
[  242.150403]  0000000000015c00 ffff880212051fd8 0000000000015c00 ffff8801ed7903b8
[  242.150410] Call Trace:
[  242.150438]  [<ffffffffa034d103>] xlog_grant_log_space+0x173/0x3f0 [xfs]
[  242.150465]  [<ffffffffa035fa4a>] ? kmem_zone_zalloc+0x3a/0x50 [xfs]
[  242.150472]  [<ffffffff8105cd70>] ? default_wake_function+0x0/0x20
[  242.150500]  [<ffffffffa034d454>] xfs_log_reserve+0xd4/0xe0 [xfs]
[  242.150529]  [<ffffffffa0357b20>] xfs_trans_reserve+0xa0/0x210 [xfs]
[  242.150556]  [<ffffffffa035ca45>] xfs_free_eofblocks+0x185/0x2a0 [xfs]
[  242.150583]  [<ffffffffa035d578>] xfs_release+0x128/0x1f0 [xfs]
[  242.150610]  [<ffffffffa0364f35>] xfs_file_release+0x15/0x20 [xfs]
[  242.150618]  [<ffffffff81145ea5>] __fput+0xf5/0x210
[  242.150626]  [<ffffffff81145fe5>] fput+0x25/0x30
[  242.150632]  [<ffffffff8114210d>] filp_close+0x5d/0x90
[  242.150639]  [<ffffffff811421f7>] sys_close+0xb7/0x120
[  242.150647]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.150653] INFO: task copy-files:3391 blocked for more than 120 seconds.
[  242.150657] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.150661] copy-files    D 00000000ffffffff     0  3391   3337 0x00000000
[  242.150668]  ffff880208eafd38 0000000000000086 0000000000015c00 0000000000015c00
[  242.150675]  ffff8802103ec888 ffff880208eaffd8 0000000000015c00 ffff8802103ec4d0
[  242.150683]  0000000000015c00 ffff880208eaffd8 0000000000015c00 ffff8802103ec888
[  242.150690] Call Trace:
[  242.150699]  [<ffffffff81546477>] __mutex_lock_slowpath+0xf7/0x180
[  242.150707]  [<ffffffff81145c1a>] ? get_empty_filp+0x7a/0x170
[  242.150715]  [<ffffffff8154635b>] mutex_lock+0x2b/0x50
[  242.150723]  [<ffffffff81152d49>] do_filp_open+0x3d9/0xba0
[  242.150729]  [<ffffffff81148b74>] ? cp_new_stat+0xe4/0x100
[  242.150737]  [<ffffffff8115e7fa>] ? alloc_fd+0x10a/0x150
[  242.150744]  [<ffffffff811422c9>] do_sys_open+0x69/0x170
[  242.150751]  [<ffffffff81142410>] sys_open+0x20/0x30
[  242.150759]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.150765] INFO: task copy-files:3394 blocked for more than 120 seconds.
[  242.150768] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.150773] copy-files    D ffff8801f2b08f10     0  3394   3337 0x00000000
[  242.150780]  ffff8801ed697d38 0000000000000086 0000000000015c00 0000000000015c00
[  242.150787]  ffff88021402c888 ffff8801ed697fd8 0000000000015c00 ffff88021402c4d0
[  242.150794]  0000000000015c00 ffff8801ed697fd8 0000000000015c00 ffff88021402c888
[  242.150801] Call Trace:
[  242.150809]  [<ffffffff81546477>] __mutex_lock_slowpath+0xf7/0x180
[  242.150818]  [<ffffffff8154635b>] mutex_lock+0x2b/0x50
[  242.150825]  [<ffffffff81152d49>] do_filp_open+0x3d9/0xba0
[  242.150831]  [<ffffffff81148b74>] ? cp_new_stat+0xe4/0x100
[  242.150839]  [<ffffffff8115e7fa>] ? alloc_fd+0x10a/0x150
[  242.150846]  [<ffffffff811422c9>] do_sys_open+0x69/0x170
[  242.150853]  [<ffffffff81142410>] sys_open+0x20/0x30
[  242.150861]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.150866] INFO: task copy-files:3395 blocked for more than 120 seconds.
[  242.150870] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.150874] copy-files    D 0000000000000000     0  3395   3337 0x00000000
[  242.150881]  ffff8801ed68f7a8 0000000000000086 0000000000015c00 0000000000015c00
[  242.150888]  ffff880214b44888 ffff8801ed68ffd8 0000000000015c00 ffff880214b444d0
[  242.150895]  0000000000015c00 ffff8801ed68ffd8 0000000000015c00 ffff880214b44888
[  242.150902] Call Trace:
[  242.150930]  [<ffffffffa034d103>] xlog_grant_log_space+0x173/0x3f0 [xfs]
[  242.150957]  [<ffffffffa035fa4a>] ? kmem_zone_zalloc+0x3a/0x50 [xfs]
[  242.150963]  [<ffffffff8105cd70>] ? default_wake_function+0x0/0x20
[  242.150991]  [<ffffffffa034d454>] xfs_log_reserve+0xd4/0xe0 [xfs]
[  242.151019]  [<ffffffffa0357b20>] xfs_trans_reserve+0xa0/0x210 [xfs]
[  242.151048]  [<ffffffffa0357e8f>] ? xfs_trans_alloc+0x9f/0xb0 [xfs]
[  242.151077]  [<ffffffffa03472cc>] xfs_iomap_write_allocate+0x25c/0x3c0 [xfs]
[  242.151107]  [<ffffffffa0358929>] ? xfs_trans_unlocked_item+0x39/0x60 [xfs]
[  242.151136]  [<ffffffffa0347f8b>] xfs_iomap+0x2ab/0x2e0 [xfs]
[  242.151163]  [<ffffffffa036067d>] xfs_map_blocks+0x2d/0x40 [xfs]
[  242.151190]  [<ffffffffa0361a7a>] xfs_page_state_convert+0x3da/0x720 [xfs]
[  242.151200]  [<ffffffff812b9f05>] ? radix_tree_gang_lookup_tag_slot+0x95/0xe0
[  242.151209]  [<ffffffff810f4581>] ? generic_perform_write+0x161/0x1d0
[  242.151235]  [<ffffffffa0361f2a>] xfs_vm_writepage+0x7a/0x130 [xfs]
[  242.151244]  [<ffffffff8110e0f5>] ? __dec_zone_page_state+0x35/0x40
[  242.151253]  [<ffffffff810fcf07>] __writepage+0x17/0x40
[  242.151259]  [<ffffffff810fe08f>] write_cache_pages+0x1df/0x3e0
[  242.151268]  [<ffffffff810fcef0>] ? __writepage+0x0/0x40
[  242.151275]  [<ffffffff810fe2b4>] generic_writepages+0x24/0x30
[  242.151301]  [<ffffffffa0360d1d>] xfs_vm_writepages+0x5d/0x80 [xfs]
[  242.151307]  [<ffffffff810fe2e1>] do_writepages+0x21/0x40
[  242.151315]  [<ffffffff810f556b>] __filemap_fdatawrite_range+0x5b/0x60
[  242.151323]  [<ffffffff810f589f>] filemap_fdatawrite+0x1f/0x30
[  242.151349]  [<ffffffffa0365199>] xfs_flush_pages+0xa9/0xc0 [xfs]
[  242.151375]  [<ffffffffa035d5cf>] xfs_release+0x17f/0x1f0 [xfs]
[  242.151401]  [<ffffffffa0364f35>] xfs_file_release+0x15/0x20 [xfs]
[  242.151410]  [<ffffffff81145ea5>] __fput+0xf5/0x210
[  242.151417]  [<ffffffff81145fe5>] fput+0x25/0x30
[  242.151424]  [<ffffffff8114210d>] filp_close+0x5d/0x90
[  242.151431]  [<ffffffff811421f7>] sys_close+0xb7/0x120
[  242.151439]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.151445] INFO: task copy-files:3398 blocked for more than 120 seconds.
[  242.151449] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.151453] copy-files    D 0000000000000000     0  3398   3337 0x00000000
[  242.151460]  ffff8801ed4adc88 0000000000000086 0000000000015c00 0000000000015c00
[  242.151467]  ffff8802105b9aa8 ffff8801ed4adfd8 0000000000015c00 ffff8802105b96f0
[  242.151474]  0000000000015c00 ffff8801ed4adfd8 0000000000015c00 ffff8802105b9aa8
[  242.151481] Call Trace:
[  242.151509]  [<ffffffffa034d103>] xlog_grant_log_space+0x173/0x3f0 [xfs]
[  242.151535]  [<ffffffffa035fa4a>] ? kmem_zone_zalloc+0x3a/0x50 [xfs]
[  242.151542]  [<ffffffff8105cd70>] ? default_wake_function+0x0/0x20
[  242.151570]  [<ffffffffa034d454>] xfs_log_reserve+0xd4/0xe0 [xfs]
[  242.151599]  [<ffffffffa0357b20>] xfs_trans_reserve+0xa0/0x210 [xfs]
[  242.151625]  [<ffffffffa035ca45>] xfs_free_eofblocks+0x185/0x2a0 [xfs]
[  242.151652]  [<ffffffffa035d578>] xfs_release+0x128/0x1f0 [xfs]
[  242.151679]  [<ffffffffa0364f35>] xfs_file_release+0x15/0x20 [xfs]
[  242.151687]  [<ffffffff81145ea5>] __fput+0xf5/0x210
[  242.151694]  [<ffffffff81145fe5>] fput+0x25/0x30
[  242.151701]  [<ffffffff8114210d>] filp_close+0x5d/0x90
[  242.151728]  [<ffffffff811421f7>] sys_close+0xb7/0x120
[  242.151737]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.151743] INFO: task copy-files:3399 blocked for more than 120 seconds.
[  242.151746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.151751] copy-files    D 0000000000000000     0  3399   3337 0x00000000
[  242.151758]  ffff8802149afd38 0000000000000086 0000000000015c00 0000000000015c00
[  242.151765]  ffff8802105bb198 ffff8802149affd8 0000000000015c00 ffff8802105bade0
[  242.151776]  0000000000015c00 ffff8802149affd8 0000000000015c00 ffff8802105bb198
[  242.151794] Call Trace:
[  242.151806]  [<ffffffff81546477>] __mutex_lock_slowpath+0xf7/0x180
[  242.151818]  [<ffffffff81145c1a>] ? get_empty_filp+0x7a/0x170
[  242.151831]  [<ffffffff8154635b>] mutex_lock+0x2b/0x50
[  242.151843]  [<ffffffff81152d49>] do_filp_open+0x3d9/0xba0
[  242.151853]  [<ffffffff81148b74>] ? cp_new_stat+0xe4/0x100
[  242.151866]  [<ffffffff8115e7fa>] ? alloc_fd+0x10a/0x150
[  242.151878]  [<ffffffff811422c9>] do_sys_open+0x69/0x170
[  242.151890]  [<ffffffff81142410>] sys_open+0x20/0x30
[  242.151902]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.151912] INFO: task copy-files:3401 blocked for more than 120 seconds.
[  242.151919] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.151928] copy-files    D 0000000000000001     0  3401   3337 0x00000008
[  242.151942]  ffff8801ed4efcf8 0000000000000086 0000000000015c00 0000000000015c00
[  242.151960]  ffff8802105bdf78 ffff8801ed4effd8 0000000000015c00 ffff8802105bdbc0
[  242.151978]  0000000000015c00 ffff8801ed4effd8 0000000000015c00 ffff8802105bdf78
[  242.151995] Call Trace:
[  242.152006]  [<ffffffff81546477>] __mutex_lock_slowpath+0xf7/0x180
[  242.152019]  [<ffffffff8154635b>] mutex_lock+0x2b/0x50
[  242.152030]  [<ffffffff8114e923>] lock_rename+0xd3/0xe0
[  242.152043]  [<ffffffff81151d13>] sys_renameat+0x113/0x280
[  242.152055]  [<ffffffff81155280>] ? filldir+0x0/0xe0
[  242.152069]  [<ffffffff81145fe5>] ? fput+0x25/0x30
[  242.152080]  [<ffffffff8114210d>] ? filp_close+0x5d/0x90
[  242.152092]  [<ffffffff81151e9b>] sys_rename+0x1b/0x20
[  242.152104]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b
[  242.152113] INFO: task copy-files:3402 blocked for more than 120 seconds.
[  242.152121] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.152129] copy-files    D 0000000000000000     0  3402   3337 0x00000000
[  242.152143]  ffff880208e09c88 0000000000000086 0000000000015c00 0000000000015c00
[  242.152161]  ffff8801ed6b03b8 ffff880208e09fd8 0000000000015c00 ffff8801ed6b0000
[  242.152178]  0000000000015c00 ffff880208e09fd8 0000000000015c00 ffff8801ed6b03b8
[  242.152196] Call Trace:
[  242.152227]  [<ffffffffa034d103>] xlog_grant_log_space+0x173/0x3f0 [xfs]
[  242.152258]  [<ffffffffa035fa4a>] ? kmem_zone_zalloc+0x3a/0x50 [xfs]
[  242.152270]  [<ffffffff8105cd70>] ? default_wake_function+0x0/0x20
[  242.152302]  [<ffffffffa034d454>] xfs_log_reserve+0xd4/0xe0 [xfs]
[  242.152335]  [<ffffffffa0357b20>] xfs_trans_reserve+0xa0/0x210 [xfs]
[  242.152367]  [<ffffffffa035ca45>] xfs_free_eofblocks+0x185/0x2a0 [xfs]
[  242.152398]  [<ffffffffa035d578>] xfs_release+0x128/0x1f0 [xfs]
[  242.152429]  [<ffffffffa0364f35>] xfs_file_release+0x15/0x20 [xfs]
[  242.152442]  [<ffffffff81145ea5>] __fput+0xf5/0x210
[  242.152454]  [<ffffffff81145fe5>] fput+0x25/0x30
[  242.152464]  [<ffffffff8114210d>] filp_close+0x5d/0x90
[  242.152475]  [<ffffffff811421f7>] sys_close+0xb7/0x120
[  242.152488]  [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-09  7:54                                 ` Juerg Haefliger
  2012-05-10 16:11                                   ` Chris J Arges
@ 2012-05-18 17:19                                   ` Ben Myers
  2012-05-19  7:28                                     ` Juerg Haefliger
  1 sibling, 1 reply; 58+ messages in thread
From: Ben Myers @ 2012-05-18 17:19 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

Hey Juerg,

On Wed, May 09, 2012 at 09:54:08AM +0200, Juerg Haefliger wrote:
> > On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
> >> Did anybody have a chance to look at the data?
> >
> > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/979498
> >
> > Here you indicate that you have created a reproducer.  Can you post it to the list?
> 
> Canonical attached them to the bug report that they filed yesterday:
> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922

I'm interested in understanding to what extent the hang you see in production
on 2.6.38 is similar to the hang of the reproducer.  Mark is seeing a situation
where there is nothing on the AIL and everything is clogged up in the CIL; others are
seeing items on the AIL that don't seem to be making progress.  Could you
provide a dump or traces from a hang on a filesystem with a normal sized log?
Can the reproducer hit the hang eventually without resorting to the tiny log?

Regards,
	Ben



* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-18 17:19                                   ` Ben Myers
@ 2012-05-19  7:28                                     ` Juerg Haefliger
  2012-05-21 17:11                                       ` Ben Myers
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-05-19  7:28 UTC (permalink / raw)
  To: Ben Myers; +Cc: xfs

Hi Ben,

> Hey Juerg,
>
> On Wed, May 09, 2012 at 09:54:08AM +0200, Juerg Haefliger wrote:
>> > On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
>> >> Did anybody have a chance to look at the data?
>> >
>> > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/979498
>> >
>> > Here you indicate that you have created a reproducer.  Can you post it to the list?
>>
>> Canonical attached them to the bug report that they filed yesterday:
>> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
>
> I'm interested in understanding to what extent the hang you see in production
> on 2.6.38 is similar to the hang of the reproducer.  Mark is seeing a situation
> where there is nothing on the AIL and everything is clogged up in the CIL; others are
> seeing items on the AIL that don't seem to be making progress.  Could you
> provide a dump or traces from a hang on a filesystem with a normal sized log?
> Can the reproducer hit the hang eventually without resorting to the tiny log?

I'm not certain that the reproducer hang is identical to the
production hang. One difference that I've noticed is that a reproducer
hang can be cleared with an emergency sync while a production hang
can't. I'm working on trying to get a trace from a production machine.
Any ideas how to do the tracing without filling up the filesystem with
trace data? I need to run it for at least a week to catch a hang. I
was thinking of tracing in 15-minute batches and keeping only 30 minutes'
worth of trace data, but that will leave gaps when I stop and restart the
tracing. I'm not familiar with ftrace; maybe it provides some sort of
ring-buffer dumping that keeps only the last x minutes of data?
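(For what it's worth, ftrace does have an overwrite mode that behaves like the ring buffer described here: the buffer wraps and always holds only the most recent events, with no stop/restart gaps. Below is a minimal sketch; the helper function and the scratch-directory dry run are illustrative, not part of any XFS tooling, and the knob paths assume the usual tracefs layout under /sys/kernel/debug/tracing.)

```shell
# Hedged sketch: put ftrace's ring buffer into overwrite mode so it keeps
# only the most recent events. The helper just writes the standard tracefs
# knobs; the directory layout it expects matches /sys/kernel/debug/tracing.
configure_ring_trace() {
    dir=$1      # tracefs mount point (or a scratch dir for dry runs)
    size_kb=$2  # per-CPU buffer size; a bigger buffer means longer history

    echo "$size_kb" > "$dir/buffer_size_kb"
    echo 1 > "$dir/options/overwrite"   # wrap instead of stopping when full
    echo 1 > "$dir/events/xfs/enable"   # enable only the XFS trace events
    echo 1 > "$dir/tracing_on"
}

# Dry run against a scratch directory mimicking the tracefs layout, so this
# can run without root; on a real system pass /sys/kernel/debug/tracing.
demo=$(mktemp -d)
mkdir -p "$demo/options" "$demo/events/xfs"
configure_ring_trace "$demo" 65536

# When a hang is detected: echo 0 > tracing_on, then save the 'trace' file.
cat "$demo/options/overwrite"   # prints 1
```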

Thanks
...Juerg


> Regards,
>        Ben



* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-19  7:28                                     ` Juerg Haefliger
@ 2012-05-21 17:11                                       ` Ben Myers
  2012-05-24  5:45                                         ` Juerg Haefliger
  0 siblings, 1 reply; 58+ messages in thread
From: Ben Myers @ 2012-05-21 17:11 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

Hey Juerg,

On Sat, May 19, 2012 at 09:28:55AM +0200, Juerg Haefliger wrote:
> > On Wed, May 09, 2012 at 09:54:08AM +0200, Juerg Haefliger wrote:
> >> > On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
> >> >> Did anybody have a chance to look at the data?
> >> >
> >> > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/979498
> >> >
> >> > Here you indicate that you have created a reproducer.  Can you post it to the list?
> >>
> >> Canonical attached them to the bug report that they filed yesterday:
> >> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
> >
> > I'm interested in understanding to what extent the hang you see in production
> > on 2.6.38 is similar to the hang of the reproducer.  Mark is seeing a situation
> > where there is nothing on the AIL and everything is clogged up in the CIL; others are
> > seeing items on the AIL that don't seem to be making progress.  Could you
> > provide a dump or traces from a hang on a filesystem with a normal sized log?
> > Can the reproducer hit the hang eventually without resorting to the tiny log?
> 
> I'm not certain that the reproducer hang is identical to the
> production hang. One difference that I've noticed is that a reproducer
> hang can be cleared with an emergency sync while a production hang
> can't. I'm working on trying to get a trace from a production machine.

I hit this on a filesystem with a regular-sized log over the weekend.  If you see
this again in production, could you gather up the task states?

echo t > /proc/sysrq-trigger
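(A small wrapper around that command, sketched below, can capture the resulting dump in one step. The function and file names are illustrative; on a real machine the trigger path is /proc/sysrq-trigger, root is required, and sysrq must be enabled via /proc/sys/kernel/sysrq. It is parameterized so it can be dry-run against ordinary files.)

```shell
# Hedged sketch of gathering the task-state dump: write 't' to the sysrq
# trigger, then snapshot the kernel log so it can be posted to the list.
capture_tasks() {
    trigger=$1   # /proc/sysrq-trigger on a real system
    outfile=$2   # where to save the kernel log snapshot

    echo t > "$trigger"                 # dump all task states to the log
    dmesg > "$outfile" 2>/dev/null \
        || echo "(dmesg unavailable)" > "$outfile"
}

# Dry run against ordinary files; for real use:
#   capture_tasks /proc/sysrq-trigger /tmp/task-states.txt
tmp=$(mktemp -d)
capture_tasks "$tmp/trigger" "$tmp/out"
cat "$tmp/trigger"   # prints t
```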

Mark and I have been looking at the dump.  There are a few interesting items to point out.

1) xfs_sync_worker is blocked trying to get log reservation:

PID: 25374  TASK: ffff88013481c6c0  CPU: 3   COMMAND: "kworker/3:83"
 #0 [ffff88013481fb50] __schedule at ffffffff813aacac
 #1 [ffff88013481fc98] schedule at ffffffff813ab0c4
 #2 [ffff88013481fca8] xlog_grant_head_wait at ffffffffa0347b78 [xfs]
 #3 [ffff88013481fcf8] xlog_grant_head_check at ffffffffa03483e6 [xfs]
 #4 [ffff88013481fd38] xfs_log_reserve at ffffffffa034852c [xfs]
 #5 [ffff88013481fd88] xfs_trans_reserve at ffffffffa0344e64 [xfs]
 #6 [ffff88013481fdd8] xfs_fs_log_dummy at ffffffffa02ec138 [xfs]
 #7 [ffff88013481fdf8] xfs_sync_worker at ffffffffa02f7be4 [xfs]
 #8 [ffff88013481fe18] process_one_work at ffffffff8104c53b
 #9 [ffff88013481fe68] worker_thread at ffffffff8104f0e3
#10 [ffff88013481fee8] kthread at ffffffff8105395e
#11 [ffff88013481ff48] kernel_thread_helper at ffffffff813b3ae4

This means that it is not in a position to push the AIL.  It is clear that the
AIL has plenty of entries which can be pushed.

crash> xfs_ail 0xffff88022112b7c0,
struct xfs_ail {
...
  xa_ail = {
    next = 0xffff880144d1c318,
    prev = 0xffff880170a02078
  },
  xa_target = 0x1f00003063,

Here's the first item on the AIL:

ffff880144d1c318
struct xfs_log_item_t {
  li_ail = {
    next = 0xffff880196ea0858,
    prev = 0xffff88022112b7d0
  },
  li_lsn = 0x1f00001c63,		<--- less than xa_target
  li_desc = 0x0,
  li_mountp = 0xffff88016adee000,
  li_ailp = 0xffff88022112b7c0,
  li_type = 0x123b,
  li_flags = 0x1,
  li_bio_list = 0xffff88016afa5cb8,
  li_cb = 0xffffffffa034de00 <xfs_istale_done>,
  li_ops = 0xffffffffa035f620,
  li_cil = {
    next = 0xffff880144d1c368,
    prev = 0xffff880144d1c368
  },
  li_lv = 0x0,
  li_seq = 0x3b
}

So if xfs_sync_worker were not blocked on log reservation it would push these
items.
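
To make the comparison above concrete, here is a small userspace sketch of how an item's li_lsn is compared against xa_target. It is modeled on XFS's CYCLE_LSN/BLOCK_LSN/XFS_LSN_CMP macros (an LSN packs a 32-bit cycle number above a 32-bit block number); it is an illustration, not the in-kernel code.

```c
#include <stdint.h>

/*
 * Userspace sketch of the LSN comparison the AIL push code relies on.
 * High 32 bits of an LSN are the log cycle, low 32 bits the block number.
 */
#define CYCLE_LSN(lsn)	((uint32_t)((uint64_t)(lsn) >> 32))
#define BLOCK_LSN(lsn)	((uint32_t)(lsn))

/* Returns <0, 0, >0 as lsn1 is before, equal to, or after lsn2. */
static int lsn_cmp(uint64_t lsn1, uint64_t lsn2)
{
	if (CYCLE_LSN(lsn1) != CYCLE_LSN(lsn2))
		return CYCLE_LSN(lsn1) < CYCLE_LSN(lsn2) ? -1 : 1;
	if (BLOCK_LSN(lsn1) != BLOCK_LSN(lsn2))
		return BLOCK_LSN(lsn1) < BLOCK_LSN(lsn2) ? -1 : 1;
	return 0;
}
```

With the dump values, lsn_cmp(0x1f00001c63, 0x1f00003063) is negative: same cycle (0x1f), block 0x1c63 < 0x3063, so the first AIL item is indeed at or below the push target.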

2) The CIL is waiting around too:

crash> xfs_cil_ctx 0xffff880144d1a9c0,
struct xfs_cil_ctx {
...
  space_used = 0x135f68, 

struct log {
...
  l_logsize = 0xa00000,

A00000/8
140000						<--- XLOG_CIL_SPACE_LIMIT

140000 - 135F68
A098

Looks like xlog_cil_push_background will not push the CIL while space used is
less than XLOG_CIL_SPACE_LIMIT, so that's not going anywhere either.
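
The threshold arithmetic above can be modeled standalone. This is a sketch only (names mirror the kernel's, but it is not the actual xfs code); XLOG_CIL_SPACE_LIMIT is taken to be 1/8th of the log size, as the crash calculation above shows.

```c
#include <stdbool.h>

/*
 * Sketch of the background CIL push decision discussed above.
 * Models only the threshold arithmetic from the dump.
 */

/* XLOG_CIL_SPACE_LIMIT: 12.5% (1/8th) of the log size. */
static unsigned long xlog_cil_space_limit(unsigned long logsize)
{
	return logsize >> 3;
}

/*
 * A background push is only triggered once the CIL's accumulated
 * space_used crosses the limit; below it, the CIL just sits there.
 */
static bool cil_background_push_needed(unsigned long space_used,
				       unsigned long logsize)
{
	return space_used >= xlog_cil_space_limit(logsize);
}
```

Plugging in the dump values: for l_logsize = 0xa00000 the limit is 0x140000, and space_used = 0x135f68 falls 0xa098 bytes short, so no push is queued.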

3) It may be unrelated to this bug, but we do have a race in the log
reservation code that hasn't been resolved... between when log_space_left
samples the grant heads and when the space is actually granted a bit later.
Maybe we can grant more space than intended.

If you can provide output of 'echo t > /proc/sysrq-trigger' it may be enough
information to determine if you're seeing the same problem we hit on Saturday.

Thanks,

Ben & Mark

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-18 14:42                                             ` Mark Tinguely
@ 2012-05-22 22:59                                               ` Dave Chinner
  0 siblings, 0 replies; 58+ messages in thread
From: Dave Chinner @ 2012-05-22 22:59 UTC (permalink / raw)
  To: Mark Tinguely; +Cc: linux-xfs, Ben Myers, Chris J Arges

On Fri, May 18, 2012 at 09:42:37AM -0500, Mark Tinguely wrote:
> On 05/18/12 05:10, Dave Chinner wrote:
> >Still, this doesn't explain the hang at all - the CIL forms a new
> >list every time a checkpoint occurs, and this corruption would cause
> >a crash trying to walk the li_lv list when pushed. So it comes back
> >to why hasn't the CIL been pushed? what does the CIL context
> >structure look like?
> 
> The CIL context on the machine that was running 3+ days before hanging.
> 
> struct xfs_cil_ctx {
>   cil = 0xffff88034a8c5240,
>   sequence = 1241833,
>   start_lsn = 0,
>   commit_lsn = 0,
>   ticket = 0xffff88034e0ebc08,
>   nvecs = 237,
>   space_used = 39964,
>   busy_extents = {
>     next = 0xffff88034b287958,
>     prev = 0xffff88034d10c698
>   },
>   lv_chain = 0x0,
>   log_cb = {
>     cb_next = 0x0,
>     cb_func = 0,
>     cb_arg = 0x0
>   },
>   committing = {
>     next = 0xffff88034c84d120,
>     prev = 0xffff88034c84d120
>   }
> }

And the struct xfs_cil itself?

> Start cleaning the log when it is still full after the last clean.
> ---
>  fs/xfs/xfs_log.c |    4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> Index: b/fs/xfs/xfs_log.c
> ===================================================================
> --- a/fs/xfs/xfs_log.c
> +++ b/fs/xfs/xfs_log.c
> @@ -191,8 +191,10 @@ xlog_grant_head_wake(
>  
>  	list_for_each_entry(tic, &head->waiters, t_queue) {
>  		need_bytes = xlog_ticket_reservation(log, head, tic);
> -		if (*free_bytes < need_bytes)
> +		if (*free_bytes < need_bytes) {
> +			xlog_grant_push_ail(log, need_bytes);

Ok, so that means every time the log tail is moved or a transaction
completes and returns unused space to the grant head, it pushes the
AIL target along.  But if we are hanging with an empty AIL, this is
not actually doing anything of note, just changing timing to make
whatever problem we have less common.  I'd remove this patch to make
reproducing the problem easier....

We've almost certainly got a CIL hang, and it looks like it is being
caused by an accounting leak. i.e.  if the CIL hasn't reached its
push threshold (12.5% of the log space), but the AIL is empty and we
have the grant heads indicating that there is less than 25% of the
log space free, we are slowly leaking log space somewhere in the CIL
commit or checkpoint path.  Given that we've done 1.24 million
checkpoints in the above example, it's not a common thing. Given the
size of the log, it may be related to log wrap commits, and it is also
worth noting that if this is an accounting leak, it will eventually
result in a hard hang.
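
The combination of symptoms described above can be summarized as a predicate. This is purely hypothetical illustration (no such function exists in xfs), restating the three conditions as code:

```c
#include <stdbool.h>

/*
 * Hypothetical summary of the suspected leak state described above;
 * illustrative only, not real xfs code.
 */
static bool looks_like_cil_space_leak(unsigned long cil_space_used,
				      unsigned long grant_free_bytes,
				      unsigned long logsize,
				      bool ail_empty)
{
	/* CIL still below its 12.5% background push threshold... */
	bool cil_idle = cil_space_used < (logsize >> 3);

	/* ...yet the grant heads say under 25% of the log is free... */
	bool log_nearly_full = grant_free_bytes < (logsize >> 2);

	/* ...and there is nothing left on the AIL to push out. */
	return cil_idle && log_nearly_full && ail_empty;
}
```

If the predicate holds, reservation space has been consumed that neither the CIL nor the AIL accounts for, i.e. it has leaked somewhere in the commit or checkpoint path.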

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-21 17:11                                       ` Ben Myers
@ 2012-05-24  5:45                                         ` Juerg Haefliger
  2012-05-24 14:23                                           ` Ben Myers
  0 siblings, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-05-24  5:45 UTC (permalink / raw)
  To: Ben Myers; +Cc: xfs

Hi Ben,


> Hey Juerg,
>
> On Sat, May 19, 2012 at 09:28:55AM +0200, Juerg Haefliger wrote:
>> > On Wed, May 09, 2012 at 09:54:08AM +0200, Juerg Haefliger wrote:
>> >> > On Sat, May 05, 2012 at 09:44:35AM +0200, Juerg Haefliger wrote:
>> >> >> Did anybody have a chance to look at the data?
>> >> >
>> >> > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/979498
>> >> >
>> >> > Here you indicate that you have created a reproducer.  Can you post it to the list?
>> >>
>> >> Canonical attached them to the bug report that they filed yesterday:
>> >> http://oss.sgi.com/bugzilla/show_bug.cgi?id=922
>> >
>> > I'm interested in understanding to what extent the hang you see in production
>> > on 2.6.38 is similar to the hang of the reproducer.  Mark is seeing a situation
>> > where there is nothing on the AIL and everything is clogged up in the CIL, others are
>> > seeing items on the AIL that don't seem to be making progress.  Could you
>> > provide a dump or traces from a hang on a filesystem with a normal sized log?
>> > Can the reproducer hit the hang eventually without resorting to the tiny log?
>>
>> I'm not certain that the reproducer hang is identical to the
>> production hang. One difference that I've noticed is that a reproducer
>> hang can be cleared with an emergency sync while a production hang
>> can't. I'm working on trying to get a trace from a production machine.
>
> Hit this on a filesystem with a regular sized log over the weekend.  If you see
> this again in production could you gather up task states?
>
> echo t > /proc/sysrq-trigger

Here is the log from a production hang:

May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111805] INFO:
task xfssyncd/dm-4:971 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111864] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111951]
xfssyncd/dm-4   D 000000000000000f     0   971      2 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111957]
ffff880325e09d00 0000000000000046 ffff880325e09fd8 ffff880325e08000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111962]
0000000000013d00 ffff880326774858 ffff880325e09fd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111966]
ffff8803241badc0 ffff8803267744a0 0000000000000282 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111971] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112016]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112023]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112046]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112070]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112092]
[<ffffffffa00e6383>] xfs_fs_log_dummy+0x43/0x90 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112117]
[<ffffffffa01193c1>] xfs_sync_worker+0x81/0x90 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112141]
[<ffffffffa01180f3>] xfssyncd+0x183/0x230 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112164]
[<ffffffffa0117f70>] ? xfssyncd+0x0/0x230 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112170]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112176]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112180]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112183]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112241] INFO:
task ruby1.8:2734 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112295] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112378] ruby1.8
      D 000000000000000e     0  2734      1 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112382]
ffff88004b933c08 0000000000000086 ffff88004b933fd8 ffff88004b932000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112386]
0000000000013d00 ffff8805df2eb178 ffff88004b933fd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112391]
ffff88032730adc0 ffff8805df2eadc0 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112395] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112419]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112423]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112446]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112470]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112493]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112517]
[<ffffffffa0107162>] xfs_setattr+0x652/0x9e0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112541]
[<ffffffffa0114e4b>] xfs_vn_setattr+0x1b/0x20 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112546]
[<ffffffff8117fe69>] notify_change+0x189/0x370
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112552]
[<ffffffff811903be>] utimes_common+0xce/0x1d0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112556]
[<ffffffff811905ac>] do_utimes+0xec/0x100
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112560]
[<ffffffff811906e0>] sys_futimesat+0x30/0xb0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112565]
[<ffffffff811784a2>] ? sys_select+0x52/0x100
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112569]
[<ffffffff81190779>] sys_utimes+0x19/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112572]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112579] INFO:
task cron:32731 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112631] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112714] cron
      D 0000000000000001     0 32731   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112718]
ffff880025753c48 0000000000000082 ffff880025753fd8 ffff880025752000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112722]
0000000000013d00 ffff88006b9c5f38 ffff880025753fd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112727]
ffff8803270c8000 ffff88006b9c5b80 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112731] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112754]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112758]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112781]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112805]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112828]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112852]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112876]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112880]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112883]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112887]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112890]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112893]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112897]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112901]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112905]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112908]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112912]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112915] INFO:
task cron:32732 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112970] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113054] cron
      D 0000000000000001     0 32732   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113058]
ffff8800a4a9fc48 0000000000000086 ffff8800a4a9ffd8 ffff8800a4a9e000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113062]
0000000000013d00 ffff88006b9c4858 ffff8800a4a9ffd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113066]
ffff8803270c8000 ffff88006b9c44a0 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113071] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113094]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113098]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113121]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113144]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113168]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113192]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113215]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113219]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113223]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113226]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113229]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113232]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113236]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113240]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113243]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113247]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113250]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113253] INFO:
task cron:32733 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113305] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113388] cron
      D 0000000000000006     0 32733   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113392]
ffff88003d4d3c48 0000000000000082 ffff88003d4d3fd8 ffff88003d4d2000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113397]
0000000000013d00 ffff88006b9c1a98 ffff88003d4d3fd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113401]
ffff8805df302dc0 ffff88006b9c16e0 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113405] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113428]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113432]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113455]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113479]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113502]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113526]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113550]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113554]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113557]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113560]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113563]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113567]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113570]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113574]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113577]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113580]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113584]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113587] INFO:
task cron:32734 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113639] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113722] cron
      D 0000000000000001     0 32734   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113726]
ffff88003dddfc48 0000000000000086 ffff88003dddffd8 ffff88003ddde000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113731]
0000000000013d00 ffff88006b9c3178 ffff88003dddffd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113735]
ffff8803270c8000 ffff88006b9c2dc0 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113739] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113762]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113766]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113789]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113813]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113836]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113860]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113884]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113887]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113891]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113894]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113897]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113900]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113904]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113908]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113911]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113914]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113918]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113921] INFO:
task cron:32735 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.113973] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114056] cron
      D 0000000000000007     0 32735   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114060]
ffff88007ab6fc48 0000000000000082 ffff88007ab6ffd8 ffff88007ab6e000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114065]
0000000000013d00 ffff88062318c858 ffff88007ab6ffd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114069]
ffff880327255b80 ffff88062318c4a0 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114073] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114096]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114100]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114123]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114147]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114170]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114194]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114218]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114221]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114225]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114228]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114231]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114235]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114238]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114242]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114245]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114249]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114252]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114255] INFO:
task cron:32736 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114307] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114390] cron
      D 0000000000000006     0 32736   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114394]
ffff88001281bc48 0000000000000082 ffff88001281bfd8 ffff88001281a000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114399]
0000000000013d00 ffff880611094858 ffff88001281bfd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114403]
ffff880327250000 ffff8806110944a0 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114407] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114430]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114434]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114457]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114481]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114505]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114528]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114552]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114556]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114559]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114562]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114565]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114569]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114572]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114576]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114579]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114582]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114586]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114589] INFO:
task cron:32737 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114641] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114724] cron
      D 0000000000000007     0 32737   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114728]
ffff88004173dc48 0000000000000086 ffff88004173dfd8 ffff88004173c000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114733]
0000000000013d00 ffff8805df2edf38 ffff88004173dfd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114737]
ffff880327255b80 ffff8805df2edb80 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114741] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114764]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114768]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114791]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114818]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114841]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114865]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114890]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114894]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114897]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114901]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114904]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114907]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114910]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114914]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114917]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114921]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114924]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114930] INFO:
task cron:32738 blocked for more than 120 seconds.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.114982] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115065] cron
      D 0000000000000000     0 32738   1276 0x00000000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115069]
ffff88007a7b7c48 0000000000000082 ffff88007a7b7fd8 ffff88007a7b6000
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115073]
0000000000013d00 ffff8805df2ec858 ffff88007a7b7fd8 0000000000013d00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115077]
ffffffff81a0b020 ffff8805df2ec4a0 0000000000000286 ffff8806265d7e00
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115082] Call Trace:
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115105]
[<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115109]
[<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115132]
[<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115155]
[<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115179]
[<ffffffffa0102071>] ? xfs_trans_alloc+0xa1/0xb0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115203]
[<ffffffffa0107acd>] xfs_inactive+0x27d/0x470 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115226]
[<ffffffffa0115dde>] xfs_fs_evict_inode+0x9e/0xf0 [xfs]
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115230]
[<ffffffff8117deb4>] evict+0x24/0xc0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115234]
[<ffffffff8117eae7>] iput_final+0x187/0x270
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115237]
[<ffffffff8117ec0b>] iput+0x3b/0x50
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115240]
[<ffffffff8117ae40>] d_kill+0x100/0x140
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115243]
[<ffffffff8117bdf2>] dput+0xd2/0x1b0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115247]
[<ffffffff811666eb>] __fput+0x13b/0x1f0
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115250]
[<ffffffff811667c5>] fput+0x25/0x30
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115254]
[<ffffffff811630e0>] filp_close+0x60/0x90
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115257]
[<ffffffff811638e7>] sys_close+0xb7/0x120
May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.115260]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481915]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481918]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481922]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481925]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481928] cron
      D 0000000000000015     0 14448   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481932]
ffff8803a2007cb8 0000000000000082 ffff8803a2007fd8 ffff8803a2006000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481936]
0000000000013d00 ffff8800a99b5f38 ffff8803a2007fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481941]
ffff8803273b0000 ffff8800a99b5b80 ffff8803a2007cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481945] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481948]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481952]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481956]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481959]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481963]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481967]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481971]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481975]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481978]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481981]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481984] cron
      D 000000000000000e     0 14449   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481988]
ffff88059d3dfcb8 0000000000000086 ffff88059d3dffd8 ffff88059d3de000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481993]
0000000000013d00 ffff88002a974858 ffff88059d3dffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.481997]
ffff88032730adc0 ffff88002a9744a0 ffff88059d3dfcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482001] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482005]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482008]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482012]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482016]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482019]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482023]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482027]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482031]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482034]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482038]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482040] cron
      D 000000000000000d     0 14450   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482045]
ffff8803a4af5cb8 0000000000000082 ffff8803a4af5fd8 ffff8803a4af4000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482049]
0000000000013d00 ffff88002a975f38 ffff8803a4af5fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482053]
ffff8803272d44a0 ffff88002a975b80 ffff8803a4af5cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482057] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482061]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482065]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482068]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482072]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482076]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482079]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482083]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482087]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482091]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482094]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482097] cron
      D 000000000000000f     0 14451   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482101]
ffff8803a9c57cb8 0000000000000086 ffff8803a9c57fd8 ffff8803a9c56000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482105]
0000000000013d00 ffff88008898c858 ffff8803a9c57fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482109]
ffff8803273396e0 ffff88008898c4a0 ffff8803a9c57cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482113] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482117]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482121]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482124]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482128]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482132]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482135]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482140]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482143]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482147]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482150]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482153] cron
      D 0000000000000010     0 14452   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482157]
ffff8804b6779cb8 0000000000000082 ffff8804b6779fd8 ffff8804b6778000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482161]
0000000000013d00 ffff88008898b178 ffff8804b6779fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482165]
ffff880327350000 ffff88008898adc0 ffff8804b6779cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482170] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482173]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482177]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482180]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482184]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482188]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482191]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482196]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482199]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482203]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482206]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482209] cron
      D 000000000000000d     0 14453   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482213]
ffff880423ec9cb8 0000000000000086 ffff880423ec9fd8 ffff880423ec8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482218]
0000000000013d00 ffff880059245f38 ffff880423ec9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482222]
ffff8803272d44a0 ffff880059245b80 ffff880423ec9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482226] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482230]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482233]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482237]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482241]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482244]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482248]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482252]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482256]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482259]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482263]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482265] cron
      D 000000000000000e     0 14454   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482270]
ffff880486261cb8 0000000000000082 ffff880486261fd8 ffff880486260000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482274]
0000000000013d00 ffff88003eb25f38 ffff880486261fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482278]
ffff88032730adc0 ffff88003eb25b80 ffff880486261cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482282] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482286]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482289]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482293]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482297]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482300]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482304]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482308]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482312]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482316]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482319]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482322] cron
      D 000000000000000f     0 14455   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482326]
ffff8803f266fcb8 0000000000000086 ffff8803f266ffd8 ffff8803f266e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482330]
0000000000013d00 ffff88003eb23178 ffff8803f266ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482334]
ffff8803273396e0 ffff88003eb22dc0 ffff8803f266fcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482338] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482342]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482345]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482349]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482353]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482357]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482360]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482364]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482368]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482372]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482375]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482378] cron
      D 000000000000000d     0 14456   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482382]
ffff8803ef555cb8 0000000000000082 ffff8803ef555fd8 ffff8803ef554000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482386]
0000000000013d00 ffff88003eb24858 ffff8803ef555fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482390]
ffff8803272d44a0 ffff88003eb244a0 ffff8803ef555cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482394] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482398]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482402]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482405]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482409]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482413]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482416]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482420]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482424]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482428]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482431]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482434] cron
      D 000000000000000e     0 14457   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482438]
ffff880495453cb8 0000000000000086 ffff880495453fd8 ffff880495452000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482442]
0000000000013d00 ffff880033551a98 ffff880495453fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482446]
ffff88032730adc0 ffff8800335516e0 ffff880495453cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482451] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482454]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482458]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482461]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482465]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482469]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482472]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482477]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482480]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482484]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482487]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482490] cron
      D 0000000000000010     0 14458   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482494]
ffff88046f919cb8 0000000000000082 ffff88046f919fd8 ffff88046f918000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482498]
0000000000013d00 ffff880087f49a98 ffff88046f919fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482503]
ffff880327350000 ffff880087f496e0 ffff88046f919cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482507] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482510]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482514]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482518]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482521]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482525]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482528]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482533]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482536]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482540]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482543]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482546] cron
      D 000000000000000d     0 14464   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482550]
ffff88048cff1cb8 0000000000000086 ffff88048cff1fd8 ffff88048cff0000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482554]
0000000000013d00 ffff880087f4c858 ffff88048cff1fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482558]
ffff8803272d44a0 ffff880087f4c4a0 ffff88048cff1cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482563] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482566]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482570]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482573]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482577]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482581]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482584]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482589]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482592]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482596]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482599]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482602] cron
      D 0000000000000015     0 14466   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482606]
ffff8803ee087cb8 0000000000000082 ffff8803ee087fd8 ffff8803ee086000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482610]
0000000000013d00 ffff88007b4603b8 ffff8803ee087fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482614]
ffff8803273b0000 ffff88007b460000 ffff8803ee087cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482619] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482622]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482626]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482630]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482633]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482637]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482640]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482645]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482649]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482652]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482655]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482658] cron
      D 000000000000000d     0 14467   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482662]
ffff880437fb9cb8 0000000000000086 ffff880437fb9fd8 ffff880437fb8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482667]
0000000000013d00 ffff88007b461a98 ffff880437fb9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482671]
ffff8803272d44a0 ffff88007b4616e0 ffff880437fb9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482675] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482678]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482682]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482686]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482689]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482693]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482696]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482701]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482705]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482708]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482711]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482714] mktemp
      D 0000000000000007     0 14490      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482718]
ffff8800abdefdd8 0000000000000082 ffff8800abdeffd8 ffff8800abdee000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482723]
0000000000013d00 ffff8800a185b178 ffff8800abdeffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482727]
ffff880327255b80 ffff8800a185adc0 ffff8800abdefdb8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482731] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482734]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482738]
[<ffffffff811729b7>] ? do_path_lookup+0x87/0x160
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482742]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482745]
[<ffffffff8116f6dd>] lookup_create+0x2d/0xd0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482749]
[<ffffffff81174741>] sys_mkdirat+0x61/0x140
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482753]
[<ffffffff81174838>] sys_mkdir+0x18/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482756]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482759] cron
      D 000000000000000d     0 14491   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482763]
ffff8800345b9cb8 0000000000000086 ffff8800345b9fd8 ffff8800345b8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482768]
0000000000013d00 ffff88009bee83b8 ffff8800345b9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482772]
ffff8805dabc2dc0 ffff88009bee8000 ffff8800345b9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482776] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482779]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482783]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482787]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482791]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482794]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482798]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482802]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482806]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482810]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482813]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482816] if_eth0
      D 0000000000000001     0 14496      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482820]
ffff880007203cb8 0000000000000082 ffff880007203fd8 ffff880007202000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482824]
0000000000013d00 ffff880051b583b8 ffff880007203fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482828]
ffff8803270c8000 ffff880051b58000 ffff880007203cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482832] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482836]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482840]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482843]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482847]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482851]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482854]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482859]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482863]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482867]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482870]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482874]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482877]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482880] if_eth0
      D 0000000000000002     0 14499      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482884]
ffff8800aa88dcb8 0000000000000086 ffff8800aa88dfd8 ffff8800aa88c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482889]
0000000000013d00 ffff880051b5b178 ffff8800aa88dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482893]
ffff8803270cdb80 ffff880051b5adc0 ffff8800aa88dcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482897] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482901]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482904]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482908]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482912]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482915]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482919]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482923]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482927]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482931]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482935]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482938]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482942]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482945]
fw_conntrack    D 0000000000000000     0 14562      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482949]
ffff880075443cb8 0000000000000086 ffff880075443fd8 ffff880075442000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482953]
0000000000013d00 ffff88005fc73178 ffff880075443fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482957]
ffffffff81a0b020 ffff88005fc72dc0 ffff880075443cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482961] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482965]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482969]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482972]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482976]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482980]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482983]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482988]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482992]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482995]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.482999]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483003]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483006]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483009]
fw_conntrack    D 0000000000000000     0 14563      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483013]
ffff8800676e9cb8 0000000000000086 ffff8800676e9fd8 ffff8800676e8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483017]
0000000000013d00 ffff88005fc71a98 ffff8800676e9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483022]
ffff8806231f5b80 ffff88005fc716e0 ffff8800676e9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483026] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483029]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483033]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483037]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483040]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483044]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483048]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483052]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483056]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483060]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483064]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483067]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483071]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483074] cron
      D 0000000000000015     0 14634   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483078]
ffff8800677d5cb8 0000000000000086 ffff8800677d5fd8 ffff8800677d4000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483082]
0000000000013d00 ffff88009beedf38 ffff8800677d5fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483087]
ffff8803273b0000 ffff88009beedb80 ffff8800677d5cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483091] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483094]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483098]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483102]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483105]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483109]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483112]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483117]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483121]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483124]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483127]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483130] ipmi_temp
      D 0000000000000001     0 14674      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483135]
ffff88008b0c7cb8 0000000000000082 ffff88008b0c7fd8 ffff88008b0c6000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483139]
0000000000013d00 ffff8800825c03b8 ffff88008b0c7fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483143]
ffff8803270c8000 ffff8800825c0000 ffff88008b0c7cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483147] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483151]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483154]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483158]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483162]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483165]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483169]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483173]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483177]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483181]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483185]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483188]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483192]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483194] ipmi_temp
      D 0000000000000000     0 14676      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483198]
ffff88008485dcb8 0000000000000086 ffff88008485dfd8 ffff88008485c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483203]
0000000000013d00 ffff8800825c1a98 ffff88008485dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483207]
ffffffff81a0b020 ffff8800825c16e0 ffff88008485dcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483211] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483215]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483218]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483222]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483226]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483230]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483233]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483237]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483242]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483245]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483249]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483253]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483256]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483259] cron
      D 0000000000000015     0 15139   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483263]
ffff8800692fbcb8 0000000000000086 ffff8800692fbfd8 ffff8800692fa000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483267]
0000000000013d00 ffff88005f1dc858 ffff8800692fbfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483271]
ffff8803273b0000 ffff88005f1dc4a0 ffff8800692fbcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483276] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483279]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483283]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483286]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483290]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483294]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483297]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483302]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483306]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483309]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483312]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483315] cron
      D 000000000000000e     0 15140   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483319]
ffff880062e71cb8 0000000000000082 ffff880062e71fd8 ffff880062e70000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483324]
0000000000013d00 ffff88005f1ddf38 ffff880062e71fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483328]
ffff88032730adc0 ffff88005f1ddb80 ffff880062e71cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483332] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483336]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483339]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483343]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483347]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483350]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483354]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483358]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483362]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483365]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483369]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483371] cron
      D 0000000000000016     0 15141   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483376]
ffff880067db5cb8 0000000000000086 ffff880067db5fd8 ffff880067db4000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483380]
0000000000013d00 ffff88005f1db178 ffff880067db5fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483384]
ffff8803273b5b80 ffff88005f1dadc0 ffff880067db5cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483388] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483392]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483396]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483399]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483403]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483407]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483410]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483414]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483418]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483422]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483425]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483428] cron
      D 000000000000000f     0 15142   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483432]
ffff88001a93bcb8 0000000000000082 ffff88001a93bfd8 ffff88001a93a000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483436]
0000000000013d00 ffff88005f1d9a98 ffff88001a93bfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483440]
ffff8803273396e0 ffff88005f1d96e0 ffff88001a93bcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483444] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483448]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483452]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483455]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483459]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483463]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483466]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483471]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483474]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483478]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483481]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483484] cron
      D 0000000000000015     0 15143   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483488]
ffff8800a71e9cb8 0000000000000086 ffff8800a71e9fd8 ffff8800a71e8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483492]
0000000000013d00 ffff88005f1d83b8 ffff8800a71e9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483497]
ffff8803273b0000 ffff88005f1d8000 ffff8800a71e9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483501] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483504]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483508]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483512]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483515]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483519]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483523]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483527]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483531]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483534]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483537]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483540] cron
      D 000000000000000e     0 15144   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483544]
ffff8800767b9cb8 0000000000000082 ffff8800767b9fd8 ffff8800767b8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483549]
0000000000013d00 ffff880003303178 ffff8800767b9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483553]
ffff88032730adc0 ffff880003302dc0 ffff8800767b9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483557] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483560]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483564]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483568]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483571]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483575]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483579]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483583]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483587]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483590]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483593]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483596] cron
      D 0000000000000016     0 15145   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483600]
ffff88005399dcb8 0000000000000082 ffff88005399dfd8 ffff88005399c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483605]
0000000000013d00 ffff88009f64c858 ffff88005399dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483609]
ffff8803273b5b80 ffff88009f64c4a0 ffff88005399dcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483613] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483617]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483620]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483624]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483628]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483631]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483635]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483639]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483643]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483646]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483650]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483652] cron
      D 000000000000000f     0 15146   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483656]
ffff880025ab3cb8 0000000000000086 ffff880025ab3fd8 ffff880025ab2000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483661]
0000000000013d00 ffff88009f649a98 ffff880025ab3fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483665]
ffff8803273396e0 ffff88009f6496e0 ffff880025ab3cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483669] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483673]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483676]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483680]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483684]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483687]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483691]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483695]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483699]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483703]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483706]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483709] cron
      D 000000000000000d     0 15161   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483713]
ffff880098757cb8 0000000000000082 ffff880098757fd8 ffff880098756000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483717]
0000000000013d00 ffff8800a18583b8 ffff880098757fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483721]
ffff8803272d44a0 ffff8800a1858000 ffff880098757cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483725] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483729]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483733]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483736]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483740]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483744]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483747]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483752]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483755]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483759]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483762]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483765] cron
      D 000000000000000e     0 15163   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483769]
ffff880512ef9cb8 0000000000000082 ffff880512ef9fd8 ffff880512ef8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483773]
0000000000013d00 ffff8800a1859a98 ffff880512ef9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483777]
ffff88032730adc0 ffff8800a18596e0 ffff880512ef9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483781] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483785]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483789]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483792]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483796]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483800]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483803]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483808]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483811]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483815]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483818]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483821] cron
      D 0000000000000016     0 15164   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483825]
ffff88035a101cb8 0000000000000086 ffff88035a101fd8 ffff88035a100000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483829]
0000000000013d00 ffff880025465f38 ffff88035a101fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483833]
ffff8803273b5b80 ffff880025465b80 ffff88035a101cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483838] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483841]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483845]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483848]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483852]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483856]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483859]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483864]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483867]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483871]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483874]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483877] cron
      D 000000000000000d     0 15165   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483881]
ffff8800abc29cb8 0000000000000086 ffff8800abc29fd8 ffff8800abc28000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483885]
0000000000013d00 ffff88005fc703b8 ffff8800abc29fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483889]
ffff8803272d44a0 ffff88005fc70000 ffff8800abc29cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483894] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483897]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483901]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483904]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483908]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483912]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483915]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483920]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483923]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483927]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483930]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483933] cron
      D 000000000000000f     0 15166   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483937]
ffff8800776c7cb8 0000000000000086 ffff8800776c7fd8 ffff8800776c6000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483941]
0000000000013d00 ffff8800433583b8 ffff8800776c7fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483946]
ffff8803273396e0 ffff880043358000 ffff8800776c7cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483950] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483953]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483957]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483961]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483964]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483968]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483972]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483976]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483980]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483983]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.483986]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000e     0 15168   1276 0x00000000
 ffff8800a5ebbcb8 0000000000000082 ffff8800a5ebbfd8 ffff8800a5eba000
 0000000000013d00 ffff880043359a98 ffff8800a5ebbfd8 0000000000013d00
 ffff88032730adc0 ffff8800433596e0 ffff8800a5ebbcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000d     0 15174   1276 0x00000000
 ffff880098651cb8 0000000000000082 ffff880098651fd8 ffff880098650000
 0000000000013d00 ffff88004335b178 ffff880098651fd8 0000000000013d00
 ffff8803272d44a0 ffff88004335adc0 ffff880098651cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 0000000000000015     0 15175   1276 0x00000000
 ffff8800651f3cb8 0000000000000082 ffff8800651f3fd8 ffff8800651f2000
 0000000000013d00 ffff88004335c858 ffff8800651f3fd8 0000000000013d00
 ffff8803273b0000 ffff88004335c4a0 ffff8800651f3cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000e     0 15176   1276 0x00000000
 ffff88001d367cb8 0000000000000082 ffff88001d367fd8 ffff88001d366000
 0000000000013d00 ffff88004335df38 ffff88001d367fd8 0000000000013d00
 ffff88032730adc0 ffff88004335db80 ffff88001d367cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 0000000000000016     0 15177   1276 0x00000000
 ffff8800aa9f9cb8 0000000000000086 ffff8800aa9f9fd8 ffff8800aa9f8000
 0000000000013d00 ffff880090fdc858 ffff8800aa9f9fd8 0000000000013d00
 ffff8803273b5b80 ffff880090fdc4a0 ffff8800aa9f9cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000d     0 15178   1276 0x00000000
 ffff880089ebdcb8 0000000000000082 ffff880089ebdfd8 ffff880089ebc000
 0000000000013d00 ffff880090fd83b8 ffff880089ebdfd8 0000000000013d00
 ffff8800327d16e0 ffff880090fd8000 ffff880089ebdcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 0000000000000015     0 15179   1276 0x00000000
 ffff880019455cb8 0000000000000086 ffff880019455fd8 ffff880019454000
 0000000000013d00 ffff880090fddf38 ffff880019455fd8 0000000000013d00
 ffff8803273b0000 ffff880090fddb80 ffff880019455cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000e     0 15183   1276 0x00000000
 ffff8800327e7cb8 0000000000000086 ffff8800327e7fd8 ffff8800327e6000
 0000000000013d00 ffff8800327d03b8 ffff8800327e7fd8 0000000000013d00
 ffff88032730adc0 ffff8800327d0000 ffff8800327e7cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000d     0 15184   1276 0x00000000
 ffff880032799cb8 0000000000000086 ffff880032799fd8 ffff880032798000
 0000000000013d00 ffff8800327d1a98 ffff880032799fd8 0000000000013d00
 ffff8803272d44a0 ffff8800327d16e0 ffff880032799cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000f     0 15185   1276 0x00000000
 ffff880032783cb8 0000000000000086 ffff880032783fd8 ffff880032782000
 0000000000013d00 ffff8800327d3178 ffff880032783fd8 0000000000013d00
 ffff880027148000 ffff8800327d2dc0 ffff880032783cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000e     0 15186   1276 0x00000000
 ffff880032729cb8 0000000000000082 ffff880032729fd8 ffff880032728000
 0000000000013d00 ffff8800327d4858 ffff880032729fd8 0000000000013d00
 ffff88032730adc0 ffff8800327d44a0 ffff880032729cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000d     0 15187   1276 0x00000000
 ffff880032709cb8 0000000000000086 ffff880032709fd8 ffff880032708000
 0000000000013d00 ffff8800327d5f38 ffff880032709fd8 0000000000013d00
 ffff8803272d44a0 ffff8800327d5b80 ffff880032709cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000f     0 15188   1276 0x00000000
 ffff8800326abcb8 0000000000000082 ffff8800326abfd8 ffff8800326aa000
 0000000000013d00 ffff8800271483b8 ffff8800326abfd8 0000000000013d00
 ffff8803273396e0 ffff880027148000 ffff8800326abcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000e     0 15205   1276 0x00000000
 ffff880032663cb8 0000000000000086 ffff880032663fd8 ffff880032662000
 0000000000013d00 ffff880027149a98 ffff880032663fd8 0000000000013d00
 ffff88032730adc0 ffff8800271496e0 ffff880032663cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000d     0 15206   1276 0x00000000
 ffff88003264bcb8 0000000000000082 ffff88003264bfd8 ffff88003264a000
 0000000000013d00 ffff88002714b178 ffff88003264bfd8 0000000000013d00
 ffff8803272d44a0 ffff88002714adc0 ffff88003264bcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000f     0 15207   1276 0x00000000
 ffff880032615cb8 0000000000000086 ffff880032615fd8 ffff880032614000
 0000000000013d00 ffff88002714c858 ffff880032615fd8 0000000000013d00
 ffff8803273396e0 ffff88002714c4a0 ffff880032615cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 000000000000000d     0 15216   1276 0x00000000
 ffff88006bfcfcb8 0000000000000082 ffff88006bfcffd8 ffff88006bfce000
 0000000000013d00 ffff88002714df38 ffff88006bfcffd8 0000000000013d00
 ffff8803272d44a0 ffff88002714db80 ffff88006bfcfcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron      D 0000000000000015     0 15217   1276 0x00000000
 ffff88007b2ebcb8 0000000000000086 ffff88007b2ebfd8 ffff88007b2ea000
 0000000000013d00 ffff88005d2c03b8 ffff88007b2ebfd8 0000000000013d00
 ffff8803273b0000 ffff88005d2c0000 ffff88007b2ebcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
mktemp      D 0000000000000007     0 15243      1 0x00000004
 ffff880038b4fdd8 0000000000000082 ffff880038b4ffd8 ffff880038b4e000
 0000000000013d00 ffff88009beeb178 ffff880038b4ffd8 0000000000013d00
 ffff880327255b80 ffff88009beeadc0 ffff880038b4fdb8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff811729b7>] ? do_path_lookup+0x87/0x160
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff8116f6dd>] lookup_create+0x2d/0xd0
 [<ffffffff81174741>] sys_mkdirat+0x61/0x140
 [<ffffffff81174838>] sys_mkdir+0x18/0x20
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
if_eth0      D 0000000000000006     0 15253      1 0x00000004
 ffff88004b43dcb8 0000000000000082 ffff88004b43dfd8 ffff88004b43c000
 0000000000013d00 ffff88000e82b178 ffff88004b43dfd8 0000000000013d00
 ffff880327250000 ffff88000e82adc0 ffff88004b43dcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
 [<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485110] cron
      D 0000000000000015     0 15255   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485114]
ffff88001765fcb8 0000000000000082 ffff88001765ffd8 ffff88001765e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485119]
0000000000013d00 ffff88005d2c1a98 ffff88001765ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485123]
ffff8805dabc2dc0 ffff88005d2c16e0 ffff88001765fcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485127] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485130]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485134]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485138]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485141]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485145]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485149]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485153]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485157]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485160]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.485163]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
if_eth0         D 0000000000000000     0 15256      1 0x00000004
 ffff88000bf7fcb8 0000000000000082 ffff88000bf7ffd8 ffff88000bf7e000
 0000000000013d00 ffff88000e82c858 ffff88000bf7ffd8 0000000000013d00
 ffffffff81a0b020 ffff88000e82c4a0 ffff88000bf7fcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
 [<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
fw_conntrack    D 0000000000000006     0 15311      1 0x00000004
 ffff8800195cdcb8 0000000000000082 ffff8800195cdfd8 ffff8800195cc000
 0000000000013d00 ffff88006c1a83b8 ffff8800195cdfd8 0000000000013d00
 ffff880087f48000 ffff88006c1a8000 ffff8800195cdcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
 [<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
fw_conntrack    D 0000000000000000     0 15314      1 0x00000004
 ffff880098ff3cb8 0000000000000086 ffff880098ff3fd8 ffff880098ff2000
 0000000000013d00 ffff88006c1ab178 ffff880098ff3fd8 0000000000013d00
 ffffffff81a0b020 ffff88006c1aadc0 ffff880098ff3cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
 [<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
ipmi_temp       D 0000000000000007     0 15425      1 0x00000004
 ffff8800070d3cb8 0000000000000082 ffff8800070d3fd8 ffff8800070d2000
 0000000000013d00 ffff88009ca65f38 ffff8800070d3fd8 0000000000013d00
 ffff880327255b80 ffff88009ca65b80 ffff8800070d3cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
 [<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000d     0 15426   1276 0x00000000
 ffff88008050bcb8 0000000000000082 ffff88008050bfd8 ffff88008050a000
 0000000000013d00 ffff88005b5f03b8 ffff88008050bfd8 0000000000013d00
 ffff8803272d44a0 ffff88005b5f0000 ffff88008050bcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
ipmi_temp       D 0000000000000000     0 15427      1 0x00000004
 ffff88001a8c7cb8 0000000000000082 ffff88001a8c7fd8 ffff88001a8c6000
 0000000000013d00 ffff88009ca64858 ffff88001a8c7fd8 0000000000013d00
 ffffffff81a0b020 ffff88009ca644a0 ffff88001a8c7cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
 [<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000d     0 15889   1276 0x00000000
 ffff88006936bcb8 0000000000000086 ffff88006936bfd8 ffff88006936a000
 0000000000013d00 ffff88006d8ddf38 ffff88006936bfd8 0000000000013d00
 ffff8803272d44a0 ffff88006d8ddb80 ffff88006936bcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000e     0 15890   1276 0x00000000
 ffff880048603cb8 0000000000000082 ffff880048603fd8 ffff880048602000
 0000000000013d00 ffff88006d8d9a98 ffff880048603fd8 0000000000013d00
 ffff88032730adc0 ffff88006d8d96e0 ffff880048603cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000d     0 15896   1276 0x00000000
 ffff880055683cb8 0000000000000086 ffff880055683fd8 ffff880055682000
 0000000000013d00 ffff88006d8d83b8 ffff880055683fd8 0000000000013d00
 ffff8803272d44a0 ffff88006d8d8000 ffff880055683cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000e     0 15897   1276 0x00000000
 ffff8800a5f53cb8 0000000000000082 ffff8800a5f53fd8 ffff8800a5f52000
 0000000000013d00 ffff88006d8db178 ffff8800a5f53fd8 0000000000013d00
 ffff88032730adc0 ffff88006d8dadc0 ffff8800a5f53cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 0000000000000015     0 15898   1276 0x00000000
 ffff88001d173cb8 0000000000000086 ffff88001d173fd8 ffff88001d172000
 0000000000013d00 ffff88006d8dc858 ffff88001d173fd8 0000000000013d00
 ffff8803273b0000 ffff88006d8dc4a0 ffff88001d173cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000f     0 15899   1276 0x00000000
 ffff8800ab2dbcb8 0000000000000082 ffff8800ab2dbfd8 ffff8800ab2da000
 0000000000013d00 ffff8800b67303b8 ffff8800ab2dbfd8 0000000000013d00
 ffff8803273396e0 ffff8800b6730000 ffff8800ab2dbcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000d     0 15900   1276 0x00000000
 ffff88004b66bcb8 0000000000000082 ffff88004b66bfd8 ffff88004b66a000
 0000000000013d00 ffff8800b6734858 ffff88004b66bfd8 0000000000013d00
 ffff8803272d44a0 ffff8800b67344a0 ffff88004b66bcf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
cron            D 000000000000000e     0 15901   1276 0x00000000
 ffff8800a5b17cb8 0000000000000086 ffff8800a5b17fd8 ffff8800a5b16000
 0000000000013d00 ffff8800b6733178 ffff8800a5b17fd8 0000000000013d00
 ffff88032730adc0 ffff8800b6732dc0 ffff8800a5b17cf8 ffff88033f7d71b8
Call Trace:
 [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
 [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
 [<ffffffff815d5f23>] mutex_lock+0x23/0x50
 [<ffffffff811739a8>] do_last+0x118/0x410
 [<ffffffff81174032>] do_filp_open+0x392/0x7c0
 [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
 [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
 [<ffffffff8116474a>] do_sys_open+0x6a/0x150
 [<ffffffff81164850>] sys_open+0x20/0x30
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
nscd            S 000000000000000c     0 15910      1 0x00000000
 ffff88000de21da8 0000000000000086 ffff88000de21fd8 ffff88000de20000
 0000000000013d00 ffff880076564858 ffff88000de21fd8 0000000000013d00
 ffff8803272bdb80 ffff8800765644a0 0000000000000000 ffff88000de21ef8
Call Trace:
 [<ffffffff815d685c>] schedule_hrtimeout_range_clock+0x12c/0x170
 [<ffffffff8108b380>] ? hrtimer_wakeup+0x0/0x30
 [<ffffffff8108beb4>] ? hrtimer_start_range_ns+0x14/0x20
 [<ffffffff815d68b3>] schedule_hrtimeout_range+0x13/0x20
 [<ffffffff811a37c9>] ep_poll+0x1d9/0x320
 [<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
 [<ffffffff811a42b5>] sys_epoll_wait+0xc5/0xe0
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
nscd            S 0000000000000002     0 15911      1 0x00000000
 ffff88002585fcf8 0000000000000086 ffff88002585ffd8 ffff88002585e000
 0000000000013d00 ffff8804ec77df38 ffff88002585ffd8 0000000000013d00
 ffff8803270cdb80 ffff8804ec77db80 ffff88002585fe18 ffff88002585fd98
Call Trace:
 [<ffffffff81099e29>] futex_wait_queue_me+0xc9/0x100
 [<ffffffff8109abb7>] futex_wait+0x1d7/0x300
 [<ffffffff8108b380>] ? hrtimer_wakeup+0x0/0x30
 [<ffffffff8108beb4>] ? hrtimer_start_range_ns+0x14/0x20
 [<ffffffff8109c747>] do_futex+0xd7/0x210
 [<ffffffff8109c8fb>] sys_futex+0x7b/0x180
 [<ffffffff815d8aee>] ? do_device_not_available+0xe/0x10
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
nscd            S 0000000000000002     0 15912      1 0x00000000
 ffff880030fafcf8 0000000000000086 ffff880030faffd8 ffff880030fae000
 0000000000013d00 ffff8804ec77b178 ffff880030faffd8 0000000000013d00
 ffff8803270cdb80 ffff8804ec77adc0 ffff880030fafe18 ffff880030fafd98
Call Trace:
 [<ffffffff81099e29>] futex_wait_queue_me+0xc9/0x100
 [<ffffffff8109abb7>] futex_wait+0x1d7/0x300
 [<ffffffff8108b380>] ? hrtimer_wakeup+0x0/0x30
 [<ffffffff8108beb4>] ? hrtimer_start_range_ns+0x14/0x20
 [<ffffffff8109c747>] do_futex+0xd7/0x210
 [<ffffffff8109c8fb>] sys_futex+0x7b/0x180
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
nscd            S 0000000000000001     0 15914      1 0x00000000
 ffff8800071c9cf8 0000000000000086 ffff8800071c9fd8 ffff8800071c8000
 0000000000013d00 ffff8804ec779a98 ffff8800071c9fd8 0000000000013d00
 ffff8803270c8000 ffff8804ec7796e0 ffffffff81099588 ffff8800071c9d98
Call Trace:
 [<ffffffff81099588>] ? get_futex_value_locked+0x28/0x40
 [<ffffffff81099e29>] futex_wait_queue_me+0xc9/0x100
 [<ffffffff8109abb7>] futex_wait+0x1d7/0x300
 [<ffffffff81178f6f>] ? __d_free+0x4f/0x70
 [<ffffffff81178ff4>] ? d_free+0x64/0x70
 [<ffffffff811820c8>] ? vfsmount_lock_global_unlock_online+0x58/0x70
 [<ffffffff8109c747>] do_futex+0xd7/0x210
 [<ffffffff8109c8fb>] sys_futex+0x7b/0x180
 [<ffffffff811667c5>] ? fput+0x25/0x30
 [<ffffffff811630e0>] ? filp_close+0x60/0x90
 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486203] nscd
      S 0000000000000012     0 15918      1 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486207]
ffff880005bf5cf8 0000000000000086 ffff880005bf5fd8 ffff880005bf4000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486212]
0000000000013d00 ffff880088989a98 ffff880005bf5fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486216]
ffff88032736c4a0 ffff8800889896e0 ffffffff81099588 ffff880005bf5d98
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486220] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486223]
[<ffffffff81099588>] ? get_futex_value_locked+0x28/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486227]
[<ffffffff81099e29>] futex_wait_queue_me+0xc9/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486230]
[<ffffffff8109abb7>] futex_wait+0x1d7/0x300
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486235]
[<ffffffff81178f6f>] ? __d_free+0x4f/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486239]
[<ffffffff81178ff4>] ? d_free+0x64/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486243]
[<ffffffff811820c8>] ? vfsmount_lock_global_unlock_online+0x58/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486247]
[<ffffffff8109c747>] do_futex+0xd7/0x210
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486250]
[<ffffffff8109c8fb>] sys_futex+0x7b/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486254]
[<ffffffff811667c5>] ? fput+0x25/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486257]
[<ffffffff811630e0>] ? filp_close+0x60/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486260]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486263] nscd
      S 0000000000000006     0 15919      1 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486267]
ffff880062ec7cf8 0000000000000086 ffff880062ec7fd8 ffff880062ec6000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486272]
0000000000013d00 ffff88009f64df38 ffff880062ec7fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486276]
ffff880327250000 ffff88009f64db80 ffffffff81099588 ffff880062ec7d98
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486280] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486283]
[<ffffffff81099588>] ? get_futex_value_locked+0x28/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486287]
[<ffffffff81099e29>] futex_wait_queue_me+0xc9/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486291]
[<ffffffff8109abb7>] futex_wait+0x1d7/0x300
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486295]
[<ffffffff81178f6f>] ? __d_free+0x4f/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486299]
[<ffffffff81178ff4>] ? d_free+0x64/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486303]
[<ffffffff811820c8>] ? vfsmount_lock_global_unlock_online+0x58/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486307]
[<ffffffff8109c747>] do_futex+0xd7/0x210
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486310]
[<ffffffff8109c8fb>] sys_futex+0x7b/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486314]
[<ffffffff811667c5>] ? fput+0x25/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486317]
[<ffffffff811630e0>] ? filp_close+0x60/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486321]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486324] nscd
      S 0000000000000006     0 15920      1 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486328]
ffff88002e25bcf8 0000000000000086 ffff88002e25bfd8 ffff88002e25a000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486332]
0000000000013d00 ffff88009f6483b8 ffff88002e25bfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486336]
ffff880327250000 ffff88009f648000 ffffffff81099588 ffff88002e25bd98
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486340] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486344]
[<ffffffff81099588>] ? get_futex_value_locked+0x28/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486347]
[<ffffffff81099e29>] futex_wait_queue_me+0xc9/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486351]
[<ffffffff8109abb7>] futex_wait+0x1d7/0x300
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486355]
[<ffffffff81178f6f>] ? __d_free+0x4f/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486359]
[<ffffffff81178ff4>] ? d_free+0x64/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486363]
[<ffffffff811820c8>] ? vfsmount_lock_global_unlock_online+0x58/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486367]
[<ffffffff8109c747>] do_futex+0xd7/0x210
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486370]
[<ffffffff8109c8fb>] sys_futex+0x7b/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486374]
[<ffffffff811667c5>] ? fput+0x25/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486377]
[<ffffffff811630e0>] ? filp_close+0x60/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486381]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486384] cron
      D 000000000000000e     0 15921   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486388]
ffff8800331a1cb8 0000000000000082 ffff8800331a1fd8 ffff8800331a0000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486392]
0000000000013d00 ffff8800b6735f38 ffff8800331a1fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486396]
ffff88032730adc0 ffff8800b6735b80 ffff8800331a1cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486401] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486404]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486408]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486412]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486415]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486419]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486422]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486427]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486431]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486434]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486437]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486440] mktemp
      D 0000000000000001     0 15945      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486445]
ffff8800314f9dd8 0000000000000082 ffff8800314f9fd8 ffff8800314f8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486449]
0000000000013d00 ffff880050bb5f38 ffff8800314f9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486453]
ffff8803270c8000 ffff880050bb5b80 ffff8800314f9db8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486457] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486461]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486465]
[<ffffffff811729b7>] ? do_path_lookup+0x87/0x160
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486468]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486472]
[<ffffffff8116f6dd>] lookup_create+0x2d/0xd0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486475]
[<ffffffff81174741>] sys_mkdirat+0x61/0x140
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486480]
[<ffffffff81174838>] sys_mkdir+0x18/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486483]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486486] if_eth0
      D 0000000000000000     0 15950      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486490]
ffff8800a3305cb8 0000000000000086 ffff8800a3305fd8 ffff8800a3304000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486494]
0000000000013d00 ffff880070563178 ffff8800a3305fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486498]
ffffffff81a0b020 ffff880070562dc0 ffff8800a3305cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486503] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486506]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486510]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486514]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486517]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486521]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486525]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486529]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486533]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486537]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486541]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486544]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486548]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486550] cron
      D 0000000000000015     0 15951   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486555]
ffff880099b0bcb8 0000000000000082 ffff880099b0bfd8 ffff880099b0a000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486559]
0000000000013d00 ffff880025463178 ffff880099b0bfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486563]
ffff8803273b0000 ffff880025462dc0 ffff880099b0bcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486567] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486571]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486575]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486578]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486582]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486586]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486589]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486594]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486597]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486601]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486604]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486607] cron
      D 0000000000000016     0 15952   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486611]
ffff8800140d7cb8 0000000000000086 ffff8800140d7fd8 ffff8800140d6000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486615]
0000000000013d00 ffff880025461a98 ffff8800140d7fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486619]
ffff8806232a16e0 ffff8800254616e0 ffff8800140d7cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486624] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486627]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486631]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486634]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486638]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486642]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486645]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486650]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486653]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486657]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486660]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486663] if_eth0
      D 0000000000000000     0 15953      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486667]
ffff88001c485cb8 0000000000000082 ffff88001c485fd8 ffff88001c484000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486672]
0000000000013d00 ffff880070565f38 ffff88001c485fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486676]
ffff880603f32dc0 ffff880070565b80 ffff88001c485cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486680] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486683]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486687]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486691]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486694]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486698]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486702]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486706]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486710]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486714]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486718]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486721]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486724]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486728]
fw_conntrack    D 0000000000000000     0 16003      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486732]
ffff880036cebcb8 0000000000000086 ffff880036cebfd8 ffff880036cea000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486736]
0000000000013d00 ffff88009bee9a98 ffff880036cebfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486740]
ffffffff81a0b020 ffff88009bee96e0 ffff880036cebcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486745] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486748]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486752]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486756]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486759]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486763]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486766]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486771]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486775]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486779]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486782]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486786]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486789]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486792]
fw_conntrack    D 0000000000000006     0 16004      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486796]
ffff88004b91dcb8 0000000000000086 ffff88004b91dfd8 ffff88004b91c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486801]
0000000000013d00 ffff88009beec858 ffff88004b91dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486805]
ffff880327250000 ffff88009beec4a0 ffff88004b91dcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486809] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486813]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486816]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486820]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486824]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486827]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486831]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486835]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486839]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486843]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486847]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486850]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486854]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486857] ipmi_temp
      D 0000000000000000     0 16113      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486861]
ffff88002fac7cb8 0000000000000082 ffff88002fac7fd8 ffff88002fac6000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486865]
0000000000013d00 ffff8800765603b8 ffff88002fac7fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486869]
ffff8806210044a0 ffff880076560000 ffff88002fac7cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486873] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486877]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486880]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486884]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486888]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486892]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486895]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486899]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486904]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486907]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486911]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486915]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486918]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486921] cron
      D 000000000000000e     0 16114   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486925]
ffff8800b7753cb8 0000000000000082 ffff8800b7753fd8 ffff8800b7752000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486929]
0000000000013d00 ffff88007618df38 ffff8800b7753fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486934]
ffff88032730adc0 ffff88007618db80 ffff8800b7753cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486938] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486941]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486945]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486949]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486952]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486956]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486960]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486964]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486968]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486971]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486975]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486977] ipmi_temp
      D 0000000000000001     0 16115      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486982]
ffff880010c2fcb8 0000000000000082 ffff880010c2ffd8 ffff880010c2e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486986]
0000000000013d00 ffff880076563178 ffff880010c2ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486990]
ffff8803270c8000 ffff880076562dc0 ffff880010c2fcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486994] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.486998]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487001]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487005]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487009]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487012]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487016]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487020]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487024]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487028]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487032]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487035]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487039]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487042]
flush-8:144     S 0000000000000000     0 16585      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487046]
ffff88004b7b3e70 0000000000000046 ffff88004b7b3fd8 ffff88004b7b2000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487050]
0000000000013d00 ffff880063eb9a98 ffff88004b7b3fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487054]
ffff88006d762dc0 ffff880063eb96e0 0000000000000282 ffff880063eb96e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487058] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487063]
[<ffffffff8118c6c9>] bdi_writeback_thread+0x179/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487067]
[<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487070]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487074]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487078]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487081]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487084] cron
      D 000000000000000d     0 16586   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487088]
ffff88006cefdcb8 0000000000000082 ffff88006cefdfd8 ffff88006cefc000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487092]
0000000000013d00 ffff880076189a98 ffff88006cefdfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487097]
ffff8803272d44a0 ffff8800761896e0 ffff88006cefdcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487101] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487104]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487108]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487112]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487115]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487119]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487122]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487127]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487131]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487134]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487137]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487140]
flush-8:16      S 0000000000000009     0 16588      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487144]
ffff88001bf5ddc0 0000000000000046 ffff88001bf5dfd8 ffff88001bf5c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487149]
0000000000013d00 ffff880063ebc858 ffff88001bf5dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487153]
ffff880323eeadc0 ffff880063ebc4a0 ffff88001bf5ddc0 ffff88001bf5dde0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487157] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487161]
[<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487166]
[<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487170]
[<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487174]
[<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487178]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487181]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487185]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487189]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487191] flush-8:160     S 000000000000000c     0 16589      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487195] ffff8800ab3f9dc0 0000000000000046 ffff8800ab3f9fd8 ffff8800ab3f8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487200] 0000000000013d00 ffff880063ebdf38 ffff8800ab3f9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487204] ffff8803272bdb80 ffff880063ebdb80 ffff8800ab3f9dc0 ffff8800ab3f9de0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487208] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487211] [<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487215] [<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487220] [<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487224] [<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487227] [<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487231] [<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487234] [<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487238] [<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487240] flush-8:32      S 0000000000000012     0 16591      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487244] ffff880010d53dc0 0000000000000046 ffff880010d53fd8 ffff880010d52000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487249] 0000000000013d00 ffff88000f979a98 ffff880010d53fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487253] ffff88032736c4a0 ffff88000f9796e0 ffff880010d53dc0 ffff880010d53de0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487257] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487260] [<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487264] [<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487269] [<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487272] [<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487276] [<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487279] [<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487283] [<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487287] [<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487289] flush-8:64      S 0000000000000013     0 16596      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487293] ffff8801ea13fdc0 0000000000000046 ffff8801ea13ffd8 ffff8801ea13e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487298] 0000000000013d00 ffff880035654858 ffff8801ea13ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487302] ffff880323ef8000 ffff8800356544a0 ffff8801ea13fdc0 ffff8801ea13fde0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487306] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487309] [<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487313] [<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487317] [<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487321] [<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487325] [<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487328] [<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487332] [<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487335] [<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487338] flush-8:80      S 0000000000000014     0 16597      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487342] ffff880110d61e70 0000000000000046 ffff880110d61fd8 ffff880110d60000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487346] 0000000000013d00 ffff880035651a98 ffff880110d61fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487351] ffff88032738db80 ffff8800356516e0 ffff8800bf4f3d00 ffff8800356516e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487355] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487358] [<ffffffff8118c6c9>] bdi_writeback_thread+0x179/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487362] [<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487366] [<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487370] [<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487373] [<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487377] [<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487379] flush-251:0     S 0000000000000012     0 16598      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487384] ffff8801e719ddc0 0000000000000046 ffff8801e719dfd8 ffff8801e719c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487388] 0000000000013d00 ffff880035655f38 ffff8801e719dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487392] ffff880323eec4a0 ffff880035655b80 ffff8801e719ddc0 ffff8801e719dde0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487396] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487400] [<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487404] [<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487408] [<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487412] [<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487415] [<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487419] [<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487423] [<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487426] [<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487429] cron            D 000000000000000d     0 16604   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487433] ffff880185255cb8 0000000000000086 ffff880185255fd8 ffff880185254000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487437] 0000000000013d00 ffff88007618b178 ffff880185255fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487442] ffff8803272d44a0 ffff88007618adc0 ffff880185255cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487446] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487449] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487453] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487457] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487460] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487464] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487468] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487472] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487476] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487479] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487483] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487486] cron            D 0000000000000015     0 16605   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487490] ffff8802b0ee9cb8 0000000000000086 ffff8802b0ee9fd8 ffff8802b0ee8000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487494] 0000000000013d00 ffff88007618c858 ffff8802b0ee9fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487498] ffff8803273b0000 ffff88007618c4a0 ffff8802b0ee9cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487502] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487506] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487510] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487513] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487517] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487521] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487524] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487529] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487532] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487536] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487539] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487542] cron            D 000000000000000e     0 16606   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487546] ffff88011b21dcb8 0000000000000086 ffff88011b21dfd8 ffff88011b21c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487551] 0000000000013d00 ffff8800761883b8 ffff88011b21dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487555] ffff88032730adc0 ffff880076188000 ffff88011b21dcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487559] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487562] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487566] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487570] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487573] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487577] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487581] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487585] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487589] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487592] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487596] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487598] cron            D 000000000000000d     0 16607   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487603] ffff88012fed1cb8 0000000000000082 ffff88012fed1fd8 ffff88012fed0000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487607] 0000000000013d00 ffff8800254603b8 ffff88012fed1fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487611] ffff8803272d44a0 ffff880025460000 ffff88012fed1cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487615] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487619] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487623] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487626] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487630] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487634] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487637] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487644] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487649] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487654] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487659] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487663] cron            D 0000000000000016     0 16608   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487667] ffff8802bc651cb8 0000000000000086 ffff8802bc651fd8 ffff8802bc650000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487672] 0000000000013d00 ffff880050bb1a98 ffff8802bc651fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487676] ffff8803273b5b80 ffff880050bb16e0 ffff8802bc651cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487680] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487684] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487688] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487691] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487695] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487699] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487702] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487707] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487710] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487714] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487717] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487720] cron            D 000000000000000e     0 16609   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487724] ffff8802031ddcb8 0000000000000082 ffff8802031ddfd8 ffff8802031dc000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487729] 0000000000013d00 ffff880050bb3178 ffff8802031ddfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487733] ffff88032730adc0 ffff880050bb2dc0 ffff8802031ddcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487737] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487740] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487744] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487748] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487751] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487755] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487759] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487763] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487767] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487770] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487773] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487776] cron            D 000000000000000d     0 16610   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487780] ffff88014caadcb8 0000000000000086 ffff88014caadfd8 ffff88014caac000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487785] 0000000000013d00 ffff880050bb4858 ffff88014caadfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487789] ffff880326cedb80 ffff880050bb44a0 ffff88014caadcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487793] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487797] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487800] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487804] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487808] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487811] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487815] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487819] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487823] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487827] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487830] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487833] cron            D 000000000000000f     0 16611   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487837] ffff88012a41fcb8 0000000000000082 ffff88012a41ffd8 ffff88012a41e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487841] 0000000000013d00 ffff880050bb03b8 ffff88012a41ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487846] ffff8803241badc0 ffff880050bb0000 ffff88012a41fcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487850] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487853] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487857] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487861] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487864] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487868] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487871] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487876] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487879] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487883] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487886] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487889] cron            D 0000000000000015     0 16612   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487893] ffff880227201cb8 0000000000000086 ffff880227201fd8 ffff880227200000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487897] 0000000000013d00 ffff88000c8a3178 ffff880227201fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487902] ffff8803273b0000 ffff88000c8a2dc0 ffff880227201cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487906] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487909] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487913] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487917] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487920] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487924] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487927] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487932] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487936] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487939] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487942] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487945] cron            D 000000000000000d     0 16613   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487949] ffff880228513cb8 0000000000000086 ffff880228513fd8 ffff880228512000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487954] 0000000000013d00 ffff880003301a98 ffff880228513fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487958] ffff8803272d44a0 ffff8800033016e0 ffff880228513cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487962] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487966] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487969] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487973] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487977] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487980] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487984] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487988] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487992] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487996] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.487999] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488002] cron            D 000000000000000e     0 16614   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488006] ffff88011b105cb8 0000000000000082 ffff88011b105fd8 ffff88011b104000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488010] 0000000000013d00 ffff88009c595f38 ffff88011b105fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488014] ffff88032730adc0 ffff88009c595b80 ffff88011b105cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488018] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488022] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488026] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488029] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488033] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488037] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488040] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488045] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488048] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488052] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488055] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488058] cron            D 000000000000000f     0 16615   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488062] ffff8801e0725cb8 0000000000000086 ffff8801e0725fd8 ffff8801e0724000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488066] 0000000000013d00 ffff880051b59a98 ffff8801e0725fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488070] ffff8803273396e0 ffff880051b596e0 ffff8801e0725cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488075] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488078] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488082] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488085] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488089] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488093] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488096] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488101] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488104] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488108] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488111] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488114] cron            D 0000000000000015     0 16616   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488118] ffff8802019a5cb8 0000000000000086 ffff8802019a5fd8 ffff8802019a4000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488122] 0000000000013d00 ffff880051b5c858 ffff8802019a5fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488127] ffff8803273b0000 ffff880051b5c4a0 ffff8802019a5cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488131] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488134] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488138] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488142] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488145] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488149] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488152] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488157] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488161] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488164] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488167] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488170] cron            D 000000000000000e     0 16617   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488174] ffff8802dc519cb8 0000000000000082 ffff8802dc519fd8 ffff8802dc518000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488179] 0000000000013d00 ffff880051b5df38 ffff8802dc519fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488183] ffff88032730adc0 ffff880051b5db80 ffff8802dc519cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488187] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488190] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488194] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488198] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488201] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488205] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488209] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488213] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488217] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488220] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488223] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488226] cron            D 000000000000000f     0 16618   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488230] ffff8802fa9c1cb8 0000000000000086 ffff8802fa9c1fd8 ffff8802fa9c0000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488235] 0000000000013d00 ffff880008bec858 ffff8802fa9c1fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488239] ffff8803273396e0 ffff880008bec4a0 ffff8802fa9c1cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488243] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488247] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488250] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488254] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488258] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488261] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488265] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488269] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488273] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488277] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488280] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488283] cron            D 0000000000000016     0 16619   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488287] ffff88011bd1fcb8 0000000000000082 ffff88011bd1ffd8 ffff88011bd1e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488291] 0000000000013d00 ffff88029bf703b8 ffff88011bd1ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488295] ffff8803273b5b80 ffff88029bf70000 ffff88011bd1fcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488300] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488303] [<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488307] [<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488310] [<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488314] [<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488318] [<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488321] [<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488326] [<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488329] [<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488333] [<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488336] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488339] flush-8:96      S 0000000000000002     0 16621      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488343] ffff88028a73fe70 0000000000000046 ffff88028a73ffd8 ffff88028a73e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488347] 0000000000013d00 ffff880035653178 ffff88028a73ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488352] ffff8803270cdb80 ffff880035652dc0 0000000000000282 ffff880035652dc0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488356] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488360] [<ffffffff8118c6c9>] bdi_writeback_thread+0x179/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488364] [<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488367] [<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488371] [<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488375] [<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488378] [<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488381] flush-8:128     S 000000000000000a     0 16624      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488385] ffff8801bbe61dc0 0000000000000046 ffff8801bbe61fd8 ffff8801bbe60000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488390] 0000000000013d00 ffff88006c1adf38 ffff8801bbe61fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488394] ffff88043d4d0000 ffff88006c1adb80 ffff8801bbe61dc0 ffff8801bbe61de0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488398] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488401] [<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488406] [<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488410] [<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488414] [<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488417]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488421]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488424]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488428]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488430]
flush-8:176     S 0000000000000006     0 16625      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488434]
ffff8802f4a5de70 0000000000000046 ffff8802f4a5dfd8 ffff8802f4a5c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488439]
0000000000013d00 ffff88006c1a9a98 ffff8802f4a5dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488443]
ffff880327250000 ffff88006c1a96e0 ffff88063fd73d00 ffff88006c1a96e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488447] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488451]
[<ffffffff8118c6c9>] bdi_writeback_thread+0x179/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488455]
[<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488458]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488462]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488466]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488469]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488472] cron
      D 000000000000000d     0 16628   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488476]
ffff8802c4f49cb8 0000000000000086 ffff8802c4f49fd8 ffff8802c4f48000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488480]
0000000000013d00 ffff88029bf71a98 ffff8802c4f49fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488484]
ffff8803272d44a0 ffff88029bf716e0 ffff8802c4f49cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488489] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488492]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488496]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488500]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488503]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488507]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488510]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488515]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488519]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488522]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488525]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488528] flush-8:0
      S 000000000000000e     0 16630      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488532]
ffff8802e064ddc0 0000000000000046 ffff8802e064dfd8 ffff8802e064c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488537]
0000000000013d00 ffff88006c1ac858 ffff8802e064dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488541]
ffff88032730adc0 ffff88006c1ac4a0 ffff8802e064ddc0 ffff8802e064dde0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488545] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488548]
[<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488552]
[<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488557]
[<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488561]
[<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488564]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488568]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488571]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488575]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488577]
flush-8:48      S 0000000000000004     0 16631      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488582]
ffff8801e7cb5dc0 0000000000000046 ffff8801e7cb5fd8 ffff8801e7cb4000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488586]
0000000000013d00 ffff8801001b03b8 ffff8801e7cb5fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488590]
ffff880327220000 ffff8801001b0000 ffff8801e7cb5dc0 ffff8801e7cb5de0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488594] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488598]
[<ffffffff815d5ac3>] schedule_timeout+0x173/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488602]
[<ffffffff81074b10>] ? process_timeout+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488606]
[<ffffffff8118c775>] bdi_writeback_thread+0x225/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488610]
[<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488613]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488617]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488621]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488624]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488627]
flush-8:112     S 000000000000000e     0 16632      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488631]
ffff880109453e70 0000000000000046 ffff880109453fd8 ffff880109452000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488635]
0000000000013d00 ffff8801001b1a98 ffff880109453fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488639]
ffff8803241bdb80 ffff8801001b16e0 000000000000000e ffff8801001b16e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488644] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488647]
[<ffffffff8118c6c9>] bdi_writeback_thread+0x179/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488651]
[<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488655]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488658]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488662]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488666]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488668] sshd
      S 0000000000000006     0 16633  29861 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488673]
ffff88006be9bae8 0000000000000086 ffff88006be9bfd8 ffff88006be9a000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488677]
0000000000013d00 ffff8800249a5f38 ffff88006be9bfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488681]
ffff880327250000 ffff8800249a5b80 ffff88006be9bae8 ffff8800357071c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488685] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488689]
[<ffffffff815d5bbd>] schedule_timeout+0x26d/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488694]
[<ffffffff814dc5aa>] ? __scm_destroy+0xda/0x110
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488697]
[<ffffffff81038c79>] ? default_spin_lock_flags+0x9/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488703]
[<ffffffff8156d142>] unix_stream_data_wait+0xa2/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488707]
[<ffffffff81087940>] ? autoremove_wake_function+0x0/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488711]
[<ffffffff8156fe65>] unix_stream_recvmsg+0x3c5/0x650
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488715]
[<ffffffff81168a7d>] ? cdev_get+0x2d/0xb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488719]
[<ffffffff8116f333>] ? generic_permission+0x23/0xc0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488724]
[<ffffffff814cd904>] sock_aio_read+0x164/0x170
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488728]
[<ffffffff81164c82>] do_sync_read+0xd2/0x110
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488733]
[<ffffffff81279083>] ? security_file_permission+0x93/0xb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488737]
[<ffffffff81164fa1>] ? rw_verify_area+0x61/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488741]
[<ffffffff81165507>] vfs_read+0x167/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488744]
[<ffffffff81165571>] sys_read+0x51/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488748]
[<ffffffff815d8aee>] ? do_device_not_available+0xe/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488752]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488754] sshd
      S 0000000000000014     0 16648  16633 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488758]
ffff88000de438f8 0000000000000082 ffff88000de43fd8 ffff88000de42000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488763]
0000000000013d00 ffff88006a50c858 ffff88000de43fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488767]
ffff8803273996e0 ffff88006a50c4a0 ffff88000de43ac8 0000000000000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488771] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488775]
[<ffffffff815d687d>] schedule_hrtimeout_range_clock+0x14d/0x170
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488779]
[<ffffffff8104e4c3>] ? __wake_up+0x53/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488783]
[<ffffffff815d68b3>] schedule_hrtimeout_range+0x13/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488787]
[<ffffffff81177389>] poll_schedule_timeout+0x49/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488791]
[<ffffffff81177e8e>] do_select+0x4ae/0x5f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488796]
[<ffffffff8151b1c7>] ? ip_finish_output+0x157/0x320
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488801]
[<ffffffff81177460>] ? __pollwait+0x0/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488804]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488808]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488812]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488816]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488819]
[<ffffffff815d76b9>] ? _raw_spin_unlock_bh+0x19/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488823]
[<ffffffff814d3aca>] ? release_sock+0xfa/0x120
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488827]
[<ffffffff8152744f>] ? tcp_recvmsg+0x5ff/0xbb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488831]
[<ffffffff8105f554>] ? try_to_wake_up+0x244/0x3e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488834]
[<ffffffff8105f702>] ? default_wake_function+0x12/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488839]
[<ffffffff8104bb39>] ? __wake_up_common+0x59/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488843]
[<ffffffff8117817c>] core_sys_select+0x1ac/0x2f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488847]
[<ffffffff8104e4c3>] ? __wake_up+0x53/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488851]
[<ffffffff8119e10d>] ? fsnotify+0x1cd/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488858]
[<ffffffff813893fb>] ? put_ldisc+0x5b/0xc0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488862]
[<ffffffff8117848f>] sys_select+0x3f/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488866]
[<ffffffff81165601>] ? sys_write+0x51/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488869]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488872] bash
      S 0000000000000014     0 16649  16648 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488876]
ffff880067683e68 0000000000000086 ffff880067683fd8 ffff880067682000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488881]
0000000000013d00 ffff880022013178 ffff880067683fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488885]
ffff8803273996e0 ffff880022012dc0 ffff880067683e68 0000000000000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488889] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488893]
[<ffffffff81069705>] do_wait+0x1d5/0x270
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488897]
[<ffffffff81064e9a>] ? do_fork+0xca/0x330
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488901]
[<ffffffff8106aac3>] sys_wait4+0xa3/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488905]
[<ffffffff81067f30>] ? child_wait_callback+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488909]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488912] cron
      D 0000000000000016     0 16700   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488916]
ffff8802ccc1fcb8 0000000000000082 ffff8802ccc1ffd8 ffff8802ccc1e000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488920]
0000000000013d00 ffff88029bf75f38 ffff8802ccc1ffd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488925]
ffff8803273b5b80 ffff88029bf75b80 ffff8802ccc1fcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488929] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488932]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488936]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488940]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488943]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488947]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488951]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488955]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488959]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488962]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488966]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488969]
flush-251:3     S 0000000000000002     0 16702      2 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488973]
ffff88010ce87e70 0000000000000046 ffff88010ce87fd8 ffff88010ce86000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488977]
0000000000013d00 ffff8801001b3178 ffff88010ce87fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488981]
ffff8800191c96e0 ffff8801001b2dc0 000000000000000f ffff8801001b2dc0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488985] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488989]
[<ffffffff8118c6c9>] bdi_writeback_thread+0x179/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488993]
[<ffffffff8118c550>] ? bdi_writeback_thread+0x0/0x260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.488997]
[<ffffffff810871f6>] kthread+0x96/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489000]
[<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489004]
[<ffffffff81087160>] ? kthread+0x0/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489008]
[<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489010] mktemp
      D 0000000000000003     0 16725      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489014]
ffff8801ec269dd8 0000000000000086 ffff8801ec269fd8 ffff8801ec268000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489019]
0000000000013d00 ffff880023f94858 ffff8801ec269fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489023]
ffff880603f316e0 ffff880023f944a0 ffff8801ec269db8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489027] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489030]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489034]
[<ffffffff811729b7>] ? do_path_lookup+0x87/0x160
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489038]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489041]
[<ffffffff8116f6dd>] lookup_create+0x2d/0xd0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489045]
[<ffffffff81174741>] sys_mkdirat+0x61/0x140
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489049]
[<ffffffff81174838>] sys_mkdir+0x18/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489052]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489055] if_eth0
      D 0000000000000006     0 16732      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489060]
ffff8802aec51cb8 0000000000000082 ffff8802aec51fd8 ffff8802aec50000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489064]
0000000000013d00 ffff8800705603b8 ffff8802aec51fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489068]
ffff880327250000 ffff880070560000 ffff8802aec51cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489072] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489076]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489079]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489083]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489087]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489090]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489094]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489098]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489103]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489106]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489110]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489114]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489117]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489120] if_eth0
      D 0000000000000006     0 16733      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489124]
ffff880224d9bcb8 0000000000000086 ffff880224d9bfd8 ffff880224d9a000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489128]
0000000000013d00 ffff880070564858 ffff880224d9bfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489133]
ffff880327250000 ffff8800705644a0 ffff880224d9bcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489137] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489140]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489144]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489148]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489151]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489155]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489159]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489163]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489167]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489171]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489175]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489178]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489181]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489184] sshd
      S 0000000000000006     0 16734  29861 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489189]
ffff880136d57ae8 0000000000000082 ffff880136d57fd8 ffff880136d56000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489193]
0000000000013d00 ffff88008094df38 ffff880136d57fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489197]
ffff880327250000 ffff88008094db80 ffff880136d57ae8 ffff8803238c4780
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489201] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489205]
[<ffffffff815d5bbd>] schedule_timeout+0x26d/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489208]
[<ffffffff814dc5aa>] ? __scm_destroy+0xda/0x110
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489212]
[<ffffffff81038c79>] ? default_spin_lock_flags+0x9/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489216]
[<ffffffff8156d142>] unix_stream_data_wait+0xa2/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489220]
[<ffffffff81087940>] ? autoremove_wake_function+0x0/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489225]
[<ffffffff8156fe65>] unix_stream_recvmsg+0x3c5/0x650
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489228]
[<ffffffff81168a7d>] ? cdev_get+0x2d/0xb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489233]
[<ffffffff8116f333>] ? generic_permission+0x23/0xc0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489236]
[<ffffffff814cd904>] sock_aio_read+0x164/0x170
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489241]
[<ffffffff81164c82>] do_sync_read+0xd2/0x110
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489245]
[<ffffffff81279083>] ? security_file_permission+0x93/0xb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489249]
[<ffffffff81164fa1>] ? rw_verify_area+0x61/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489253]
[<ffffffff81165507>] vfs_read+0x167/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489256]
[<ffffffff81165571>] sys_read+0x51/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489260]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489263] sshd
      S 0000000000000013     0 16749  16734 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489267]
ffff8802b0e598f8 0000000000000086 ffff8802b0e59fd8 ffff8802b0e58000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489271]
0000000000013d00 ffff88002335b178 ffff8802b0e59fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489275]
ffff88032738adc0 ffff88002335adc0 ffff8802b0e59ac8 0000000000000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489279] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489283]
[<ffffffff815d687d>] schedule_hrtimeout_range_clock+0x14d/0x170
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489287]
[<ffffffff8104e4c3>] ? __wake_up+0x53/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489291]
[<ffffffff815d68b3>] schedule_hrtimeout_range+0x13/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489295]
[<ffffffff81177389>] poll_schedule_timeout+0x49/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489299]
[<ffffffff81177e8e>] do_select+0x4ae/0x5f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489303]
[<ffffffff8151b1c7>] ? ip_finish_output+0x157/0x320
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489307]
[<ffffffff81177460>] ? __pollwait+0x0/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489311]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489315]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489318]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489322]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489326]
[<ffffffff815d76b9>] ? _raw_spin_unlock_bh+0x19/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489330]
[<ffffffff814d3aca>] ? release_sock+0xfa/0x120
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489333]
[<ffffffff815259cc>] ? tcp_sendmsg+0x7fc/0xc50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489338]
[<ffffffff8154a904>] ? inet_sendmsg+0x64/0xb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489345]
[<ffffffff812accd7>] ? apparmor_socket_sendmsg+0x17/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489348]
[<ffffffff814cd78e>] ? sock_aio_write+0x14e/0x160
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489353]
[<ffffffff8117817c>] core_sys_select+0x1ac/0x2f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489357]
[<ffffffff81164b72>] ? do_sync_write+0xd2/0x110
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489361]
[<ffffffff8119e10d>] ? fsnotify+0x1cd/0x2e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489365]
[<ffffffff8117848f>] sys_select+0x3f/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489369]
[<ffffffff81165601>] ? sys_write+0x51/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489372]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489375] bash
      S 0000000000000002     0 16750  16749 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489379]
ffff8802301d1e68 0000000000000086 ffff8802301d1fd8 ffff8802301d0000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489384]
0000000000013d00 ffff880022011a98 ffff8802301d1fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489388]
ffff8803270cdb80 ffff8800220116e0 ffff8802301d1e68 0000000000000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489392] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489396]
[<ffffffff81069705>] do_wait+0x1d5/0x270
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489400]
[<ffffffff81064e9a>] ? do_fork+0xca/0x330
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489404]
[<ffffffff8106aac3>] sys_wait4+0xa3/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489408]
[<ffffffff81067f30>] ? child_wait_callback+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489411]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489414]
fw_conntrack    D 0000000000000012     0 16849      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489418]
ffff880221125cb8 0000000000000086 ffff880221125fd8 ffff880221124000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489423]
0000000000013d00 ffff880066501a98 ffff880221125fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489427]
ffff88032736c4a0 ffff8800665016e0 ffff880221125cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489431] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489435]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489438]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489442]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489446]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489449]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489453]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489457]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489461]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489465]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489469]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489473]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489476]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489479]
fw_conntrack    D 0000000000000007     0 16850      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489483]
ffff8801efd29cb8 0000000000000086 ffff8801efd29fd8 ffff8801efd28000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489488]
0000000000013d00 ffff880066503178 ffff8801efd29fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489492]
ffff880327255b80 ffff880066502dc0 ffff8801efd29cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489496] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489499]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489503]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489507]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489510]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489514]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489518]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489522]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489526]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489530]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489534]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489537]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489540]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489544] cron      D 0000000000000015     0 16851   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489548]
ffff88042558bcb8 0000000000000082 ffff88042558bfd8 ffff88042558a000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489552]
0000000000013d00 ffff88029bf74858 ffff88042558bfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489556]
ffff8803273b0000 ffff88029bf744a0 ffff88042558bcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489560] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489564]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489568]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489571]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489575]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489579]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489582]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489587]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489590]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489594]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489597]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489600] cron      D 000000000000000e     0 16852   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489604]
ffff880567523cb8 0000000000000086 ffff880567523fd8 ffff880567522000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489608]
0000000000013d00 ffff88029bf73178 ffff880567523fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489613]
ffff88032730adc0 ffff88029bf72dc0 ffff880567523cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489617] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489620]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489624]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489627]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489631]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489635]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489638]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489643]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489646]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489650]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489653]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489656] sudo      S 0000000000000008     0 16858  16750 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489660]
ffff88011f2238f8 0000000000000086 ffff88011f223fd8 ffff88011f222000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489665]
0000000000013d00 ffff880023f95f38 ffff88011f223fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489669]
ffff88032727c4a0 ffff880023f95b80 ffff88063fc56f20 0000000000000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489673] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489677]
[<ffffffff815d687d>] schedule_hrtimeout_range_clock+0x14d/0x170
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489681]
[<ffffffff81087d3e>] ? add_wait_queue+0x4e/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489685]
[<ffffffff811774d5>] ? __pollwait+0x75/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489688]
[<ffffffff815d68b3>] schedule_hrtimeout_range+0x13/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489692]
[<ffffffff81177389>] poll_schedule_timeout+0x49/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489696]
[<ffffffff81177e8e>] do_select+0x4ae/0x5f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489701]
[<ffffffff81177460>] ? __pollwait+0x0/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489705]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489709]
[<ffffffff81113958>] ? __alloc_pages_nodemask+0x118/0x830
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489715]
[<ffffffff812da0ed>] ? cpumask_any_but+0x2d/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489718]
[<ffffffff81045a18>] ? flush_tlb_page+0x48/0xb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489722]
[<ffffffff81117ffd>] ? lru_cache_add_lru+0x2d/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489727]
[<ffffffff81139c5d>] ? page_add_new_anon_rmap+0x8d/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489730]
[<ffffffff8112dd58>] ? do_wp_page+0x408/0x770
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489735]
[<ffffffff8117817c>] core_sys_select+0x1ac/0x2f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489739]
[<ffffffff8104dea6>] ? enqueue_task+0x66/0x80
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489743]
[<ffffffff815db5b8>] ? do_page_fault+0x258/0x540
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489747]
[<ffffffff81064e9a>] ? do_fork+0xca/0x330
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489751]
[<ffffffff8117848f>] sys_select+0x3f/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489754]
[<ffffffff815d7f95>] ? page_fault+0x25/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489758]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489761] tail      S 0000000000000000     0 16859  16858 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489765]
ffff880593473e58 0000000000000082 ffff880593473fd8 ffff880593472000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489769]
0000000000013d00 ffff88007b465f38 ffff880593473fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489774]
ffffffff81a0b020 ffff88007b465b80 ffff880323aa4310 ffff880060365800
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489778] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489782]
[<ffffffff811a0e37>] inotify_read+0xc7/0x1e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489786]
[<ffffffff81087940>] ? autoremove_wake_function+0x0/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489790]
[<ffffffff81165463>] vfs_read+0xc3/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489793]
[<ffffffff81165571>] sys_read+0x51/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489796]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489799] ipmi_temp      D 0000000000000000     0 16964      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489804]
ffff8802bea3dcb8 0000000000000082 ffff8802bea3dfd8 ffff8802bea3c000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489808]
0000000000013d00 ffff88002335df38 ffff8802bea3dfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489812]
ffffffff81a0b020 ffff88002335db80 ffff8802bea3dcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489816] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489820]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489824]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489828]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489831]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489835]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489838]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489843]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489847]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489851]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489854]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489858]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489861]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489864] ipmi_temp      D 0000000000000007     0 16966      1 0x00000004
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489868]
ffff8802a0abbcb8 0000000000000086 ffff8802a0abbfd8 ffff8802a0aba000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489873]
0000000000013d00 ffff8800233583b8 ffff8802a0abbfd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489877]
ffff880327255b80 ffff880023358000 ffff8802a0abbcf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489881] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489884]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489888]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489892]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489895]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489899]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489903]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489907]
[<ffffffff8118183e>] ? vfsmount_lock_local_unlock+0x1e/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489911]
[<ffffffff81183436>] ? mntput_no_expire+0x36/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489915]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489919]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489922]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489926]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489928] cron      D 000000000000000d     0 17481   1276 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489932]
ffff88055a633cb8 0000000000000082 ffff88055a633fd8 ffff88055a632000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489937]
0000000000013d00 ffff88007b464858 ffff88055a633fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489941]
ffff8803272d44a0 ffff88007b4644a0 ffff88055a633cf8 ffff88033f7d71b8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489945] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489949]
[<ffffffff815d6537>] __mutex_lock_slowpath+0xf7/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489952]
[<ffffffff812797d0>] ? security_inode_exec_permission+0x30/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489956]
[<ffffffff815d5f23>] mutex_lock+0x23/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489960]
[<ffffffff811739a8>] do_last+0x118/0x410
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489963]
[<ffffffff81174032>] do_filp_open+0x392/0x7c0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489967]
[<ffffffff8113135d>] ? handle_mm_fault+0x16d/0x250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489971]
[<ffffffff811810f7>] ? alloc_fd+0xf7/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489975]
[<ffffffff8116474a>] do_sys_open+0x6a/0x150
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489979]
[<ffffffff81164850>] sys_open+0x20/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489982]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489985] sudo      S 0000000000000007     0 17486  16649 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489989]
ffff880552dd18f8 0000000000000086 ffff880552dd1fd8 ffff880552dd0000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489993]
0000000000013d00 ffff88001ec49a98 ffff880552dd1fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.489997]
ffff880327255b80 ffff88001ec496e0 ffff88063fc36f20 0000000000000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490001] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490005]
[<ffffffff815d687d>] schedule_hrtimeout_range_clock+0x14d/0x170
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490009]
[<ffffffff81087d3e>] ? add_wait_queue+0x4e/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490013]
[<ffffffff811774d5>] ? __pollwait+0x75/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490017]
[<ffffffff815d68b3>] schedule_hrtimeout_range+0x13/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490021]
[<ffffffff81177389>] poll_schedule_timeout+0x49/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490025]
[<ffffffff81177e8e>] do_select+0x4ae/0x5f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490029]
[<ffffffff81177460>] ? __pollwait+0x0/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490033]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490037]
[<ffffffff81113958>] ? __alloc_pages_nodemask+0x118/0x830
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490042]
[<ffffffff812da0ed>] ? cpumask_any_but+0x2d/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490045]
[<ffffffff81045a18>] ? flush_tlb_page+0x48/0xb0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490049]
[<ffffffff81117ffd>] ? lru_cache_add_lru+0x2d/0x50
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490053]
[<ffffffff81139c5d>] ? page_add_new_anon_rmap+0x8d/0xa0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490056]
[<ffffffff8112dd58>] ? do_wp_page+0x408/0x770
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490061]
[<ffffffff8117817c>] core_sys_select+0x1ac/0x2f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490064]
[<ffffffff8104dea6>] ? enqueue_task+0x66/0x80
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490068]
[<ffffffff815db5b8>] ? do_page_fault+0x258/0x540
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490072]
[<ffffffff81064e9a>] ? do_fork+0xca/0x330
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490076]
[<ffffffff8117848f>] sys_select+0x3f/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490080]
[<ffffffff815d7f95>] ? page_fault+0x25/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490083]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490086] bash      R  running task        0 17487  17486 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490090]
ffff88059e7efcd8 ffff88059e7efca9 353637393832365b 5d3039303039342e
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490095]
0000000000000020 0000000000000021 0000000000000001 ffff88059e7efd08
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490099]
0000000000000034 00000000d5bd0400 ffff88059e7efd78 ffff88009f798000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490103] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490106]
[<ffffffff8100ec30>] ? dump_trace+0x1f0/0x3a0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490111]
[<ffffffff8100ffc5>] show_trace_log_lvl+0x55/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490115]
[<ffffffff8100ee7a>] show_stack_log_lvl+0x9a/0x160
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490118]
[<ffffffff8101001a>] show_stack+0x1a/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490122]
[<ffffffff810615a8>] sched_show_task+0x98/0x100
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490125]
[<ffffffff8106168e>] show_state_filter+0x7e/0xc0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490130]
[<ffffffff8138d020>] sysrq_handle_showstate+0x10/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490134]
[<ffffffff8138d509>] __handle_sysrq+0x129/0x190
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490138]
[<ffffffff8138d570>] ? write_sysrq_trigger+0x0/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490141]
[<ffffffff8138d5ad>] write_sysrq_trigger+0x3d/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490146]
[<ffffffff811bfcbf>] proc_reg_write+0x7f/0xc0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490150]
[<ffffffff811652e6>] vfs_write+0xc6/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490153]
[<ffffffff81165601>] sys_write+0x51/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490157]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490160] swift-object-up S 0000000000000009     0 17544  11858 0x00000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490163]
ffff8801e44c59f8 0000000000000082 ffff8801e44c5fd8 ffff8801e44c4000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490168]
0000000000013d00 ffff8800377f9a98 ffff8801e44c5fd8 0000000000013d00
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490172]
ffff88032728adc0 ffff8800377f96e0 0000000000000000 ffff8801e44c5b38
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490176] Call Trace:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490180]
[<ffffffff815d685c>] schedule_hrtimeout_range_clock+0x12c/0x170
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490184]
[<ffffffff8108b380>] ? hrtimer_wakeup+0x0/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490188]
[<ffffffff8108beb4>] ? hrtimer_start_range_ns+0x14/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490192]
[<ffffffff815d68b3>] schedule_hrtimeout_range+0x13/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490196]
[<ffffffff81177389>] poll_schedule_timeout+0x49/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490201]
[<ffffffff8117788a>] do_poll.clone.2+0x1ca/0x290
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490205]
[<ffffffff81178789>] do_sys_poll+0x1b9/0x230
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490209]
[<ffffffff81177460>] ? __pollwait+0x0/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490213]
[<ffffffff81177550>] ? pollwake+0x0/0x60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490216]
[<ffffffff8151b469>] ? ip_local_out+0x29/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490220]
[<ffffffff8151b5e9>] ? ip_queue_xmit+0x179/0x3f0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490224]
[<ffffffff8151fe21>] ? __inet_hash_nolisten+0x131/0x180
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490230]
[<ffffffff81536780>] ? tcp_v4_md5_lookup+0x10/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490233]
[<ffffffff81038c79>] ? default_spin_lock_flags+0x9/0x10
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490237]
[<ffffffff815d794f>] ? _raw_spin_lock_irqsave+0x2f/0x40
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490241]
[<ffffffff81074ceb>] ? lock_timer_base.clone.20+0x3b/0x70
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490246]
[<ffffffff81076624>] ? mod_timer+0x144/0x2b0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490250]
[<ffffffff814d250c>] ? sk_reset_timer+0x1c/0x30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490254]
[<ffffffff81533e6f>] ? tcp_connect+0x1bf/0x200
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490258]
[<ffffffff81538e61>] ? tcp_v4_connect+0x451/0x560
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490263]
[<ffffffff815d76b9>] ? _raw_spin_unlock_bh+0x19/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490266]
[<ffffffff814d3aca>] ? release_sock+0xfa/0x120
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490270]
[<ffffffff8154a4ca>] ? inet_stream_connect+0x7a/0x1e0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490275]
[<ffffffff810137e9>] ? read_tsc+0x9/0x20
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490280]
[<ffffffff810927e1>] ? ktime_get_ts+0xb1/0xf0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490284]
[<ffffffff811779d2>] ? poll_select_set_timeout+0x82/0x90
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490288]
[<ffffffff811788e6>] sys_poll+0x76/0x110
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490292]
[<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490296] Sched Debug Version: v0.10, 2.6.38-8-server #42-Ubuntu
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490299] ktime                               : 6306067582.666011
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490301] sched_clk                               : 6289765490.294713
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490304] cpu_clk                               : 6289765490.294802
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490306] jiffies                               : 4925544054
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490308]
sched_clock_stable                      : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490310]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490311] sysctl_sched
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490313]
.sysctl_sched_latency                    : 24.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490315]
.sysctl_sched_min_granularity            : 3.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490317]
.sysctl_sched_wakeup_granularity         : 4.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490320]
.sysctl_sched_child_runs_first           : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490322]
.sysctl_sched_features                   : 7279
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490324]
.sysctl_sched_tunable_scaling            : 1 (logaritmic)
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490328]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490328] cpu#0, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490330]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490332]   .load                        : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490334]
.nr_switches                   : 13269177245
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490336]
.nr_load_updates               : 630550527
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490338]
.nr_uninterruptible            : 123
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490341]
.next_balance                  : 4925.544055
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490343]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490345]   .clock                        : 6289765487.703822
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490347]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490349]
.cpu_load[1]                   : 6
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490351]
.cpu_load[2]                   : 48
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490352]
.cpu_load[3]                   : 76
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490354]
.cpu_load[4]                   : 74
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490356]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490358]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490360]
.sched_count                   : 653616048
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490362]
.sched_goidle                  : -82094772
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490364]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490366]
.ttwu_count                    : 1414739666
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490368]
.ttwu_local                    : -1969906404
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490370]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490374]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490374]
cfs_rq[0]:/autogroup-414266
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490376]
.exec_clock                    : 50383101.638139
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490379]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490381]
.min_vruntime                  : 43883530.906718
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490384]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490386]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490388]
.spread0                       : -5231875277.660282
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490390]
.nr_spread_over                : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490392]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490394]   .load                        : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490396]
.load_avg                      : 12.118707
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490399]
.load_period                   : 9.069581
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490401]
.load_contrib                  : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490403]
.load_tg                       : 5
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490405]
.se->exec_start                : 6289765436.243793
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490408]
.se->vruntime                  : 5275758784.716051
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490410]
.se->sum_exec_runtime          : 50367570.709334
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490413]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490415]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490417]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490419]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490421]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490424]
.se->statistics.exec_max       : 32.017135
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490426]
.se->statistics.slice_max      : 11.398645
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490428]
.se->statistics.wait_max       : 34.226799
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490430]
.se->statistics.wait_sum       : 1617848.631238
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490432]
.se->statistics.wait_count     : 104229844
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490435]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490437]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490438]
cfs_rq[0]:/autogroup-414268
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490440]
.exec_clock                    : 775955772.742699
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490443]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490445]
.min_vruntime                  : 448560870.484090
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490447]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490449]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490451]
.spread0                       : -4827197938.082910
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490454]
.nr_spread_over                : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490456]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490460]   .load                        : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490460]
.load_avg                      : 384.499191
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490462]
.load_period                   : 9.106361
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490464]
.load_contrib                  : 42
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490466]
.load_tg                       : 4879
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490469]
.se->exec_start                : 6289765486.357642
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490471]
.se->vruntime                  : 5275758797.951477
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490473]
.se->sum_exec_runtime          : 775925136.170860
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490476]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490478]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490480]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490482]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490484]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490486]
.se->statistics.exec_max       : 45.560735
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490489]
.se->statistics.slice_max      : 13.265967
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490491]
.se->statistics.wait_max       : 49.549283
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490493]
.se->statistics.wait_sum       : 33100642.218701
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490495]
.se->statistics.wait_count     : 2153050423
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490498]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490500]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490501]
cfs_rq[0]:/autogroup-414269
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490503]
.exec_clock                    : 271299383.906867
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490505]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490507]
.min_vruntime                  : 87857192.784002
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490510]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490512]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490514]
.spread0                       : -5187901615.782998
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490516]
.nr_spread_over                : 15
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490518]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490520]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490522]
.load_avg                      : 3327.502438
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490525]
.load_period                   : 9.150364
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490527]
.load_contrib                  : 363
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490529]
.load_tg                       : 5662
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490531]
.se->exec_start                : 6289765484.601400
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490534]
.se->vruntime                  : 5275758808.567000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490536]
.se->sum_exec_runtime          : 271274806.232030
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490538]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490540]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490543]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490545]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490547]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490549]
.se->statistics.exec_max       : 89.063097
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490552]
.se->statistics.slice_max      : 9.634280
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490554]
.se->statistics.wait_max       : 62.823183
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490556]
.se->statistics.wait_sum       : 17149411.365687
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490558]
.se->statistics.wait_count     : 1956847769
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490560]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490563]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490563]
cfs_rq[0]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490565]
.exec_clock                    : 2654329998.061406
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490568]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490570]
.min_vruntime                  : 5275758808.567000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490572]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490574]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490576]
.spread0                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490579]
.nr_spread_over                : 6279
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490580]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490582]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490584]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490587]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490588]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490590]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490593]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490593] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490594]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.490596]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491051]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491051] cpu#1, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491053]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491055]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491057]
.nr_switches                   : 8273569831
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491060]
.nr_load_updates               : 626375158
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491062]
.nr_uninterruptible            : 73
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491064]
.next_balance                  : 4925.544055
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491066]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491068]   .clock
                       : 6289765487.728713
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491070]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491072]
.cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491074]
.cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491075]
.cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491077]
.cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491079]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491081]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491083]
.sched_count                   : 45212876
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491085]
.sched_goidle                  : -979747614
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491087]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491089]
.ttwu_count                    : 395251045
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491091]
.ttwu_local                    : 613608772
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491093]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491096]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491096]
cfs_rq[1]:/autogroup-414269
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491099]
.exec_clock                    : 137793543.715387
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491101]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491103]
.min_vruntime                  : 56229285.572333
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491105]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491108]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491110]
.spread0                       : -5219529523.874906
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491112]
.nr_spread_over                : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491114]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491116]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491118]
.load_avg                      : 720.890368
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491120]
.load_period                   : 8.895212
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491122]
.load_contrib                  : 81
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491124]
.load_tg                       : 5589
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491127]
.se->exec_start                : 6289765463.038064
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491129]
.se->vruntime                  : 3790335530.548519
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491132]
.se->sum_exec_runtime          : 137786656.089908
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491134]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491136]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491138]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491140]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491142]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491145]
.se->statistics.exec_max       : 13.720668
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491147]
.se->statistics.slice_max      : 6.589942
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491149]
.se->statistics.wait_max       : 18.785882
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491152]
.se->statistics.wait_sum       : 2443247.106339
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491154]
.se->statistics.wait_count     : 1151412388
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491156]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491158]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491159]
cfs_rq[1]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491161]
.exec_clock                    : 1303085478.810990
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491163]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491166]
.min_vruntime                  : 3790335530.548519
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491168]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491170]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491172]
.spread0                       : -1485423278.898720
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491174]
.nr_spread_over                : 7869
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491176]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491178]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491180]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491182]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491184]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491186]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491188]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491189] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491190]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491191]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491504]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491505] cpu#2, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491507]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491509]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491511]
.nr_switches                   : 6889445027
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491513]
.nr_load_updates               : 611379029
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491515]
.nr_uninterruptible            : 14
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491517]
.next_balance                  : 4925.544055
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491519]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491521]   .clock
                       : 6289765487.729349
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491523]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491525]
.cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491527]
.cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491529]
.cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491531]
.cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491533]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491534]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491536]
.sched_count                   : -1273998515
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491538]
.sched_goidle                  : -1411803272
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491541]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491543]
.ttwu_count                    : -572524616
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491545]
.ttwu_local                    : 591573297
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491547]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491549]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491550]
cfs_rq[2]:/autogroup-1343927
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491552]
.exec_clock                    : 63555.175046
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491555]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491557]
.min_vruntime                  : 63654.032536
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491559]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491561]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491563]
.spread0                       : -5275695155.414703
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491566]
.nr_spread_over                : 8881
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491568]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491569]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491572]
.load_avg                      : 59.119616
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491574]
.load_period                   : 2.316679
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491576]
.load_contrib                  : 25
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491578]
.load_tg                       : 25
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491580]
.se->exec_start                : 6289765485.470404
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491583]
.se->vruntime                  : 3082370740.515635
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491585]
.se->sum_exec_runtime          : 63527.270556
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491587]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491590]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491592]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491594]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491596]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491598]
.se->statistics.exec_max       : 9.976826
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491601]
.se->statistics.slice_max      : 18.380990
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491603]
.se->statistics.wait_max       : 9.590159
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491605]
.se->statistics.wait_sum       : 1626.960757
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491607]
.se->statistics.wait_count     : 377142
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491609]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491612]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491612]
cfs_rq[2]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491615]
.exec_clock                    : 983717920.859277
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491617]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491619]
.min_vruntime                  : 3082370752.457901
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491621]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491624]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491626]
.spread0                       : -2193388056.989338
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491628]
.nr_spread_over                : 4336
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491630]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491632]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491634]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491636]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491638]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491640]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491642]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491643] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491644]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491645]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491938]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491938] cpu#3, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491940]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491942]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491944]
.nr_switches                   : 6403859186
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491946]
.nr_load_updates               : 568113726
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491948]
.nr_uninterruptible            : 5
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491950]
.next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491952]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491954]   .clock
                       : 6289765467.818555
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491957]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491958]
.cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491960]
.cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491962]
.cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491964]
.cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491966]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491968]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491970]
.sched_count                   : -1743444661
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491972]
.sched_goidle                  : -1599054336
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491974]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491976]
.ttwu_count                    : -815691714
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491978]
.ttwu_local                    : 552071322
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491980]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491982]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491983]
cfs_rq[3]:/autogroup-414261
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491985]
.exec_clock                    : 66010196.680942
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491988]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491990]
.min_vruntime                  : 65185812.473969
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491992]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491994]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491997]
.spread0                       : -5210572999.838724
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.491999]
.nr_spread_over                : 281868
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492001]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492003]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492005]
.load_avg                      : 72.545280
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492007]
.load_period                   : 0.072483
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492009]
.load_contrib                  : 1000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492011]
.load_tg                       : 1000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492014]
.se->exec_start                : 6289765467.816917
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492016]
.se->vruntime                  : 1311664958.678780
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492018]
.se->sum_exec_runtime          : 66010159.924905
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492021]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492023]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492025]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492027]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492029]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492032]
.se->statistics.exec_max       : 19.027862
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492034]
.se->statistics.slice_max      : 19.941796
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492036]
.se->statistics.wait_max       : 20.023231
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492038]
.se->statistics.wait_sum       : 240560.263176
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492041]
.se->statistics.wait_count     : 6984964
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492043]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492045]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492046]
cfs_rq[3]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492048]
.exec_clock                    : 464974138.541177
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492050]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492052]
.min_vruntime                  : 1311664969.134367
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492055]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492057]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492059]
.spread0                       : -3964093843.178326
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492061]
.nr_spread_over                : 2429
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492063]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492065]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492067]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492069]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492071]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492073]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492075]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492076] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492076]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492078]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492362]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492362] cpu#4, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492364]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492366]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492368]
.nr_switches                   : 4835528789
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492370]
.nr_load_updates               : 491423226
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492372]
.nr_uninterruptible            : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492374]
.next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492377]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492379]   .clock
                       : 6289765008.957932
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492381]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492383]
.cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492384]
.cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492386]
.cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492388]
.cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492390]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492392]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492394]
.sched_count                   : 954966696
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492396]
.sched_goidle                  : 2116137063
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492398]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492400]
.ttwu_count                    : -1790023253
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492402]
.ttwu_local                    : 480380309
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.492404]
  .bkl_count                     : 0

cfs_rq[4]:/autogroup-0
  .exec_clock                    : 327123880.841111
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 835940500.943709
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -4439818311.368984
  .nr_spread_over                : 3241
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 0.000000
  .load_period                   : 0.000000
  .load_contrib                  : 0
  .load_tg                       : 0

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------

cpu#5, 2666.669 MHz
  .nr_running                    : 0
  .load                          : 0
  .nr_switches                   : 3294281116
  .nr_load_updates               : 378987536
  .nr_uninterruptible            : 0
  .next_balance                  : 4925.544040
  .curr->pid                     : 0
  .clock                         : 6289765338.357122
  .cpu_load[0]                   : 1024
  .cpu_load[1]                   : 512
  .cpu_load[2]                   : 256
  .cpu_load[3]                   : 128
  .cpu_load[4]                   : 64
  .yld_count                     : 0
  .sched_switch                  : 0
  .sched_count                   : -695239207
  .sched_goidle                  : 1495821089
  .avg_idle                      : 1000000
  .ttwu_count                    : 1637517949
  .ttwu_local                    : 357881361
  .bkl_count                     : 0

cfs_rq[5]:/autogroup-1344172
  .exec_clock                    : 1080.391262
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 1079.342686
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -5275757732.970007
  .nr_spread_over                : 0
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 135.440505
  .load_period                   : 9.688498
  .load_contrib                  : 13
  .load_tg                       : 15
  .se->exec_start                : 6289765328.077289
  .se->vruntime                  : 712495920.041799
  .se->sum_exec_runtime          : 1076.136845
  .se->statistics.wait_start     : 0.000000
  .se->statistics.sleep_start    : 0.000000
  .se->statistics.block_start    : 0.000000
  .se->statistics.sleep_max      : 0.000000
  .se->statistics.block_max      : 0.000000
  .se->statistics.exec_max       : 2.928119
  .se->statistics.slice_max      : 0.989146
  .se->statistics.wait_max       : 3.533611
  .se->statistics.wait_sum       : 14.705974
  .se->statistics.wait_count     : 63411
  .se->load.weight               : 2

cfs_rq[5]:/autogroup-0
  .exec_clock                    : 320306614.362258
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 712495920.041799
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -4563262892.270894
  .nr_spread_over                : 4796
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 0.000000
  .load_period                   : 0.000000
  .load_contrib                  : 0
  .load_tg                       : 0

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------

cpu#6, 2666.669 MHz
  .nr_running                    : 0
  .load                          : 0
  .nr_switches                   : 11537883767
  .nr_load_updates               : 630391301
  .nr_uninterruptible            : 92
  .next_balance                  : 4925.544054
  .curr->pid                     : 0
  .clock                         : 6289765487.670789
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 0
  .cpu_load[2]                   : 0
  .cpu_load[3]                   : 0
  .cpu_load[4]                   : 0
  .yld_count                     : 0
  .sched_switch                  : 0
  .sched_count                   : -996222699
  .sched_goidle                  : -40688227
  .avg_idle                      : 1000000
  .ttwu_count                    : -2138146965
  .ttwu_local                    : 1505718348
  .bkl_count                     : 0

cfs_rq[6]:/autogroup-0
  .exec_clock                    : 1421711390.657324
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 2916845843.938185
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -2358912968.374508
  .nr_spread_over                : 5348
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 0.000000
  .load_period                   : 0.000000
  .load_contrib                  : 0
  .load_tg                       : 0

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------

cpu#7, 2666.669 MHz
  .nr_running                    : 0
  .load                          : 0
  .nr_switches                   : 6695645899
  .nr_load_updates               : 612738698
  .nr_uninterruptible            : 62
  .next_balance                  : 4925.544054
  .curr->pid                     : 0
  .clock                         : 6289765487.670362
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 0
  .cpu_load[2]                   : 0
  .cpu_load[3]                   : 0
  .cpu_load[4]                   : 0
  .yld_count                     : 0
  .sched_switch                  : 0
  .sched_count                   : -1525818244
  .sched_goidle                  : -1432775573
  .avg_idle                      : 1000000
  .ttwu_count                    : -797764508
  .ttwu_local                    : 585220017
  .bkl_count                     : 0

cfs_rq[7]:/autogroup-414268
  .exec_clock                    : 137935906.157835
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 93463977.406881
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -5182294834.905812
  .nr_spread_over                : 1
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 78.385152
  .load_period                   : 8.153831
  .load_contrib                  : 9
  .load_tg                       : 4864
  .se->exec_start                : 6289765484.052588
  .se->vruntime                  : 3478818192.764154
  .se->sum_exec_runtime          : 137927608.900390
  .se->statistics.wait_start     : 0.000000
  .se->statistics.sleep_start    : 0.000000
  .se->statistics.block_start    : 0.000000
  .se->statistics.sleep_max      : 0.000000
  .se->statistics.block_max      : 0.000000
  .se->statistics.exec_max       : 19.297779
  .se->statistics.slice_max      : 15.079650
  .se->statistics.wait_max       : 32.862905
  .se->statistics.wait_sum       : 1018357.880256
  .se->statistics.wait_count     : 396458522
  .se->load.weight               : 2

cfs_rq[7]:/autogroup-414269
  .exec_clock                    : 91523168.906140
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 37610542.680748
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -5238148269.631945
  .nr_spread_over                : 0
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 101.347657
  .load_period                   : 6.499393
  .load_contrib                  : 15
  .load_tg                       : 5617
  .se->exec_start                : 6289765473.695056
  .se->vruntime                  : 3478818161.842403
  .se->sum_exec_runtime          : 91518291.507576
  .se->statistics.wait_start     : 0.000000
  .se->statistics.sleep_start    : 0.000000
  .se->statistics.block_start    : 0.000000
  .se->statistics.sleep_max      : 0.000000
  .se->statistics.block_max      : 0.000000
  .se->statistics.exec_max       : 20.074031
  .se->statistics.slice_max      : 9.955814
  .se->statistics.wait_max       : 32.797921
  .se->statistics.wait_sum       : 1338794.767813
  .se->statistics.wait_count     : 785736373
  .se->load.weight               : 2

cfs_rq[7]:/autogroup-380665
  .exec_clock                    : 9307263.927915
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 9207981.799574
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -5266550830.513119
  .nr_spread_over                : 0
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 326.181094
  .load_period                   : 6.561932
  .load_contrib                  : 49
  .load_tg                       : 95
  .se->exec_start                : 6289765484.036593
  .se->vruntime                  : 3478818204.694603
  .se->sum_exec_runtime          : 9294825.282405
  .se->statistics.wait_start     : 0.000000
  .se->statistics.sleep_start    : 0.000000
  .se->statistics.block_start    : 0.000000
  .se->statistics.sleep_max      : 0.000000
  .se->statistics.block_max      : 0.000000
  .se->statistics.exec_max       : 9.790313
  .se->statistics.slice_max      : 1.652869
  .se->statistics.wait_max       : 15.692025
  .se->statistics.wait_sum       : 363105.130211
  .se->statistics.wait_count     : 177718052
  .se->load.weight               : 2

cfs_rq[7]:/autogroup-0
  .exec_clock                    : 1107910908.880324
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 3478818204.694603
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -1796940607.618090
  .nr_spread_over                : 5958
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 0.000000
  .load_period                   : 0.000000
  .load_contrib                  : 0
  .load_tg                       : 0

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------

cpu#8, 2666.669 MHz
  .nr_running                    : 1
  .load                          : 1024
  .nr_switches                   : 5416915121
  .nr_load_updates               : 579675220
  .nr_uninterruptible            : 12
  .next_balance                  : 4925.544035
  .curr->pid                     : 17487
  .clock                         : 6289765358.070164
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 2
  .cpu_load[2]                   : 13
  .cpu_load[3]                   : 16
  .cpu_load[4]                   : 12
  .yld_count                     : 0
  .sched_switch                  : 0
  .sched_count                   : 1588920241
  .sched_goidle                  : -1957322198
  .avg_idle                      : 1000000
  .ttwu_count                    : -1513781286
  .ttwu_local                    : 549479165
  .bkl_count                     : 0

cfs_rq[8]:/autogroup-1408130
  .exec_clock                    : 13.405400
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 11.603859
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -5275758800.708834
  .nr_spread_over                : 0
  .nr_running                    : 1
  .load                          : 1024
  .load_avg                      : 0.000000
  .load_period                   : 5.026042
  .load_contrib                  : 0
  .load_tg                       : 0
  .se->exec_start                : 6289765358.070164
  .se->vruntime                  : 2776129442.665810
  .se->sum_exec_runtime          : 13.405400
  .se->statistics.wait_start     : 0.000000
  .se->statistics.sleep_start    : 0.000000
  .se->statistics.block_start    : 0.000000
  .se->statistics.sleep_max      : 0.000000
  .se->statistics.block_max      : 0.000000
  .se->statistics.exec_max       : 3.415769
  .se->statistics.slice_max      : 0.000000
  .se->statistics.wait_max       : 0.002060
  .se->statistics.wait_sum       : 0.003596
  .se->statistics.wait_count     : 38
  .se->load.weight               : 1024

cfs_rq[8]:/autogroup-0
  .exec_clock                    : 841892747.542919
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 2776129454.665810
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -2499629357.646883
  .nr_spread_over                : 3209
  .nr_running                    : 1
  .load                          : 1024
  .load_avg                      : 0.000000
  .load_period                   : 0.000000
  .load_contrib                  : 0
  .load_tg                       : 0

runnable tasks:
            task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------
R           bash 17487        11.603859        59   120        11.603859       169.359973      5319.278309 /autogroup-1408130

cpu#9, 2666.669 MHz
  .nr_running                    : 0
  .load                          : 0
  .nr_switches                   : 4721596264
  .nr_load_updates               : 498681368
  .nr_uninterruptible            : 7
  .next_balance                  : 4925.544055
  .curr->pid                     : 0
  .clock                         : 6289765487.712458
  .cpu_load[0]                   : 0
  .cpu_load[1]                   : 0
  .cpu_load[2]                   : 0
  .cpu_load[3]                   : 0
  .cpu_load[4]                   : 0
  .yld_count                     : 0
  .sched_switch                  : 0
  .sched_count                   : 839097613
  .sched_goidle                  : 2048386623
  .avg_idle                      : 1000000
  .ttwu_count                    : -1870865388
  .ttwu_local                    : 474982552
  .bkl_count                     : 0

cfs_rq[9]:/autogroup-414265
.exec_clock                    : 689840.876722
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494485]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494487]
.min_vruntime                  : 689556.834122
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494489]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494491]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494494]
.spread0                       : -5275069255.478571
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494496]
.nr_spread_over                : 2729
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494498]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494500]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494502]
.load_avg                      : 166.652416
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494504]
.load_period                   : 9.619815
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494506]
.load_contrib                  : 17
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494508]
.load_tg                       : 17
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494511]
.se->exec_start                : 6289765457.231051
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494513]
.se->vruntime                  : 1135181035.439371
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494515]
.se->sum_exec_runtime          : 689806.619188
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494518]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494520]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494522]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494524]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494526]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494528]
.se->statistics.exec_max       : 16.246839
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494531]
.se->statistics.slice_max      : 19.573915
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494533]
.se->statistics.wait_max       : 10.035338
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494535]
.se->statistics.wait_sum       : 2598.461167
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494537]
.se->statistics.wait_count     : 340653
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494539]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494541]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494542]
cfs_rq[9]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494544]
.exec_clock                    : 393469668.059550
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494547]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494549]
.min_vruntime                  : 1135181043.479470
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494551]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494553]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494555]
.spread0                       : -4140577768.833223
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494558]
.nr_spread_over                : 2234
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494560]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494561]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494563]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494566]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494568]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494570]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494572]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494572] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494573]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494575]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494844]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494845] cpu#10,
2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494847]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494849]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494850]
.nr_switches                   : 3281330776
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494853]
.nr_load_updates               : 403094355
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494855]
.nr_uninterruptible            : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494857]
.next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494859]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494861]   .clock
                       : 6289765467.799841
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494863]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494865]
.cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494867]
.cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494868]
.cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494870]
.cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494872]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494874]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494876]
.sched_count                   : -707466222
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494878]
.sched_goidle                  : 1446069866
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494880]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494882]
.ttwu_count                    : 1629261485
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494884]
.ttwu_local                    : 382180163
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494886]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494888]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494889]
cfs_rq[10]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494891]
.exec_clock                    : 333160694.802754
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494893]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494896]
.min_vruntime                  : 807462066.912896
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494898]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494900]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494902]
.spread0                       : -4468296745.399797
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494904]
.nr_spread_over                : 3771
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494906]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494908]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494910]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494912]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494914]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494916]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494918]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494919] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494920]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.494921]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495193]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495193] cpu#11,
2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495195]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495197]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495199]
.nr_switches                   : 2099517291
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495201]
.nr_load_updates               : 297786759
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495203]
.nr_uninterruptible            : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495205]
.next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495207]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495210]   .clock
                       : 6289765009.367969
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495212]
.cpu_load[0]                   : 1246
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495214]
.cpu_load[1]                   : 623
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495216]
.cpu_load[2]                   : 312
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495217]
.cpu_load[3]                   : 156
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495219]
.cpu_load[4]                   : 78
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495221]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495223]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495225]
.sched_count                   : -2005818197
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495227]
.sched_goidle                  : 941510736
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495229]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495231]
.ttwu_count                    : 1020560516
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495233]
.ttwu_local                    : 265240453
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495235]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495237]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495238]
cfs_rq[11]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495240]
.exec_clock                    : 353560715.715641
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495243]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495245]
.min_vruntime                  : 781077937.648733
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495247]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495249]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495251]
.spread0                       : -4494680874.663960
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495254]
.nr_spread_over                : 5553
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495255]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495257]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495259]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495261]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495263]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495265]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495267]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495268] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495269]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495270]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495566]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495566] cpu#12,
2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495568]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495570]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495572]
.nr_switches                   : 8519047582
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495574]
.nr_load_updates               : 630156685
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495576]
.nr_uninterruptible            : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495579]
.next_balance                  : 4925.544055
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495581]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495583]   .clock
                       : 6289765487.699010
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495585]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495587]
.cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495588]
.cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495590]
.cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495592]
.cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495594]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495596]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495598]
.sched_count                   : 175035727
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495600]
.sched_goidle                  : -1478549509
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495602]
.avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495604]
.ttwu_count                    : 1427272944
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495606]
.ttwu_local                    : -419453012
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495608]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495610]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495611]
cfs_rq[12]:/autogroup-414269
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495613]
.exec_clock                    : 184328681.282203
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495616]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495618]
.min_vruntime                  : 73634526.631563
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495620]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495622]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495625]
.spread0                       : -5202124285.681130
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495627]
.nr_spread_over                : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495629]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495630]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495633]
.load_avg                      : 24378.268672
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495635]
.load_period                   : 5.037714
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495637]
.load_contrib                  : 4839
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495639]
.load_tg                       : 19536
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495641]
.se->exec_start                : 6289765484.448479
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495644]
.se->vruntime                  : 3952581873.738911
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495646]
.se->sum_exec_runtime          : 184316825.917328
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495649]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495651]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495653]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495655]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495657]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495660]
.se->statistics.exec_max       : 42.239391
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495662]
.se->statistics.slice_max      : 7.615074
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495664]
.se->statistics.wait_max       : 103.603720
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495667]
.se->statistics.wait_sum       : 9012855.733623
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495669]
.se->statistics.wait_count     : 1208901194
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495671]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495673]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495674]
cfs_rq[12]:/autogroup-414268
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495676]
.exec_clock                    : 566521319.262718
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495678]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495681]
.min_vruntime                  : 345842837.495971
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495683]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495685]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495687]
.spread0                       : -4929915974.816722
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495690]
.nr_spread_over                : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495692]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495693]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495696]
.load_avg                      : 328.033792
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495698]
.load_period                   : 5.408881
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495700]
.load_contrib                  : 60
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495702]
.load_tg                       : 4745
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495704]
.se->exec_start                : 6289765463.877819
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495707]
.se->vruntime                  : 3952581883.252696
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495709]
.se->sum_exec_runtime          : 566505865.251723
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495711]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495714]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495716]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495718]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495720]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495722]
.se->statistics.exec_max       : 35.901087
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495725]
.se->statistics.slice_max      : 14.592761
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495727]
.se->statistics.wait_max       : 103.632083
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495729]
.se->statistics.wait_sum       : 16743407.627504
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495731]
.se->statistics.wait_count     : 1350461873
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495733]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495736]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495736]
cfs_rq[12]:/autogroup-414266
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495739]
.exec_clock                    : 30445649.407088
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495741]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495743]
.min_vruntime                  : 26511357.108827
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495745]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495747]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495750]
.spread0                       : -5249247455.203866
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495752]
.nr_spread_over                : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495754]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495756]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495758]
.load_avg                      : 24.308360
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495760]
.load_period                   : 5.136436
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495762]
.load_contrib                  : 4
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495764]
.load_tg                       : 4
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495767]
.se->exec_start                : 6289765458.722339
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495769]
.se->vruntime                  : 3952581862.767688
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495771]
.se->sum_exec_runtime          : 30441084.954135
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495773]
.se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495776]
.se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495778]
.se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495780]
.se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495782]
.se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495784]
.se->statistics.exec_max       : 16.560905
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495787]
.se->statistics.slice_max      : 12.834884
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495789]
.se->statistics.wait_max       : 18.165292
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495791]
.se->statistics.wait_sum       : 756171.449715
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495793]
.se->statistics.wait_count     : 56915362
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495795]
.se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495798]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495798]
cfs_rq[12]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495801]
.exec_clock                    : 1784805334.393486
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495803]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495805]
.min_vruntime                  : 3952581883.252696
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495807]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495809]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495812]
.spread0                       : -1323176929.059997
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495814]
.nr_spread_over                : 4236
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495816]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495818]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495820]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495822]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495824]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495826]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495828]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495829] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495829]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.495831]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496132]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496133] cpu#13,
2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496135]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496137]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496139]
.nr_switches                   : 4184433988
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496141]
.nr_load_updates               : 630212765
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496143]
.nr_uninterruptible            : 300
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496145]
.next_balance                  : 4925.544055
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496147]
.curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496149]   .clock
                       : 6289765487.729069
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496151]
.cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496153]
.cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496155]
.cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496157]
.cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496158]
.cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496160]
.yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496162]
.sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496164]
.sched_count                   : -16799295
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496166]
.sched_goidle                  : 1182579038
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496168]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496170]   .ttwu_count                    : -1414612398
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496172]   .ttwu_local                    : -1920524038
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496174]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496177]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496178] cfs_rq[13]:/autogroup-414268
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496180]   .exec_clock                    : 223468277.375189
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496182]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496184]   .min_vruntime                  : 127219422.061603
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496187]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496189]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496191]   .spread0                       : -5148539390.251090
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496193]   .nr_spread_over                : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496195]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496197]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496199]   .load_avg                      : 39215.367168
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496201]   .load_period                   : 8.671612
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496204]   .load_contrib                  : 4522
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496205]   .load_tg                       : 4739
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496208]   .se->exec_start                : 6289765483.936444
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496210]   .se->vruntime                  : 13613423591.181610
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496213]   .se->sum_exec_runtime          : 223458340.855636
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496215]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496217]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496219]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496221]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496223]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496226]   .se->statistics.exec_max       : 14.564073
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496228]   .se->statistics.slice_max      : 13.917641
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496231]   .se->statistics.wait_max       : 24.606594
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496233]   .se->statistics.wait_sum       : 6717302.850682
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496235]   .se->statistics.wait_count     : 634746830
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496237]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496239]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496240] cfs_rq[13]:/autogroup-414269
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496242]   .exec_clock                    : 73980356.257668
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496245]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496247]   .min_vruntime                  : 24552765.323576
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496249]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496251]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496254]   .spread0                       : -5251206046.989117
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496256]   .nr_spread_over                : 6
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496258]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496260]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496262]   .load_avg                      : 773.157896
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496264]   .load_period                   : 5.579626
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496266]   .load_contrib                  : 138
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496268]   .load_tg                       : 20277
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496271]   .se->exec_start                : 6289765463.564514
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496273]   .se->vruntime                  : 13613423595.274174
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496275]   .se->sum_exec_runtime          : 73972294.848174
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496278]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496280]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496282]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496284]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496286]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496289]   .se->statistics.exec_max       : 9.980478
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496291]   .se->statistics.slice_max      : 8.921369
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496293]   .se->statistics.wait_max       : 15.957078
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496295]   .se->statistics.wait_sum       : 3381020.091853
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496297]   .se->statistics.wait_count     : 532967632
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496300]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496302]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496303] cfs_rq[13]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496305]   .exec_clock                    : 3774089192.117876
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496307]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496309]   .min_vruntime                  : 13613423595.274174
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496312]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496314]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496316]   .spread0                       : 8337664782.961481
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496318]   .nr_spread_over                : 1861
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496320]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496322]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496324]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496326]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496328]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496330]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496332]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496333] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496333]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496335] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496621]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496621] cpu#14, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496623]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496625]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496627]   .nr_switches                   : 3892117971
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496629]   .nr_load_updates               : 630088542
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496631]   .nr_uninterruptible            : 223
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496634]   .next_balance                  : 4925.544055
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496636]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496638]   .clock                         : 6289765496.321709
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496640]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496642]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496643]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496645]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496647]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496649]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496651]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496653]   .sched_count                   : -312842858
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496655]   .sched_goidle                  : 1059622121
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496657]   .avg_idle                      : 962065
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496659]   .ttwu_count                    : -1607558487
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496661]   .ttwu_local                    : -2010395759
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496663]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496666]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496666] cfs_rq[14]:/autogroup-414269
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496669]   .exec_clock                    : 66484034.534015
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496671]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496673]   .min_vruntime                  : 21335708.745506
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496675]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496678]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496680]   .spread0                       : -5254423103.567187
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496682]   .nr_spread_over                : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496684]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496686]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496688]   .load_avg                      : 19769.290752
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496690]   .load_period                   : 2.250877
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496692]   .load_contrib                  : 8782
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496694]   .load_tg                       : 15050
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496697]   .se->exec_start                : 6289765496.319632
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496699]   .se->vruntime                  : 14056822292.007774
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496702]   .se->sum_exec_runtime          : 66475774.854974
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496704]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496706]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496708]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496710]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496713]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496715]   .se->statistics.exec_max       : 11.878564
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496717]   .se->statistics.slice_max      : 9.104942
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496719]   .se->statistics.wait_max       : 27.274247
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496722]   .se->statistics.wait_sum       : 2956791.364711
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496724]   .se->statistics.wait_count     : 488823698
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496726]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496728]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496729] cfs_rq[14]:/autogroup-414268
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496731]   .exec_clock                    : 204354942.893983
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496734]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496736]   .min_vruntime                  : 113839275.970511
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496738]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496740]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496742]   .spread0                       : -5161919536.342182
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496745]   .nr_spread_over                : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496746]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496748]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496750]   .load_avg                      : 588.242944
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496753]   .load_period                   : 9.138982
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496755]   .load_contrib                  : 64
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496757]   .load_tg                       : 4733
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496759]   .se->exec_start                : 6289765466.857242
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496762]   .se->vruntime                  : 14056822300.077635
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496764]   .se->sum_exec_runtime          : 204344836.797040
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496766]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496769]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496771]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496773]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496775]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496777]   .se->statistics.exec_max       : 27.068387
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496779]   .se->statistics.slice_max      : 12.783886
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496782]   .se->statistics.wait_max       : 36.970361
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496784]   .se->statistics.wait_sum       : 5448784.630161
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496786]   .se->statistics.wait_count     : 595551250
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496788]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496791]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496791] cfs_rq[14]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496793]   .exec_clock                    : 3846732330.563615
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496796]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496798]   .min_vruntime                  : 14056822300.077635
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496800]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496803]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496805]   .spread0                       : 8781063487.764942
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496807]   .nr_spread_over                : 1820
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496809]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496811]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496813]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496815]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496817]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496819]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496821]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496822] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496823]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.496824] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497099]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497100] cpu#15, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497102]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497104]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497106]   .nr_switches                   : 1467542746
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497108]   .nr_load_updates               : 464694592
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497110]   .nr_uninterruptible            : 104
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497112]   .next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497114]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497116]   .clock                         : 6289763992.407670
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497118]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497120]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497122]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497124]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497125]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497127]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497129]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497131]   .sched_count                   : 1556567199
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497133]   .sched_goidle                  : 458060158
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497135]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497137]   .ttwu_count                    : 683663534
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497139]   .ttwu_local                    : 426308738
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497141]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497144]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497144] cfs_rq[15]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497146]   .exec_clock                    : 2653889018.895324
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497149]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497151]   .min_vruntime                  : 10732446381.128018
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497153]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497155]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497158]   .spread0                       : 5456687568.815325
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497160]   .nr_spread_over                : 750
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497162]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497163]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497165]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497168]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497170]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497171]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497174]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497174] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497175]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497177] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497453]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497453] cpu#16, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497455]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497457]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497459]   .nr_switches                   : 1155838223
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497461]   .nr_load_updates               : 317997352
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497463]   .nr_uninterruptible            : 30
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497465]   .next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497467]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497469]   .clock                         : 6289764181.122562
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497472]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497473]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497475]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497477]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497479]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497481]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497483]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497485]   .sched_count                   : 1250894599
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497487]   .sched_goidle                  : 440132106
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497489]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497491]   .ttwu_count                    : 544070964
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497493]   .ttwu_local                    : 256880520
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497495]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497497]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497498] cfs_rq[16]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497500]   .exec_clock                    : 1177959715.168901
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497502]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497505]   .min_vruntime                  : 4720610373.307036
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497507]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497509]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497511]   .spread0                       : -555148439.005657
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497513]   .nr_spread_over                : 850
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497515]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497517]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497519]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497521]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497523]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497525]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497527]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497528] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497529]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497530] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497812]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497813] cpu#17, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497815]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497817]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497819]   .nr_switches                   : 913555408
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497821]   .nr_load_updates               : 240875821
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497823]   .nr_uninterruptible            : 3
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497825]   .next_balance                  : 4925.544057
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497827]   .curr->pid                     : 121
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497829]   .clock                         : 6289765497.681411
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497831]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497833]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497835]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497837]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497839]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497840]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497842]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497844]   .sched_count                   : 994956076
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497846]   .sched_goidle                  : 401551103
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497848]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497850]   .ttwu_count                    : 436793404
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497852]   .ttwu_local                    : 147593319
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497854]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497857]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497858] cfs_rq[17]:/autogroup-1344172
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497860]   .exec_clock                    : 464.785804
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497862]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497864]   .min_vruntime                  : 1214.362053
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497867]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497869]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497871]   .spread0                       : -5275757597.950640
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497873]   .nr_spread_over                : 57
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497875]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497877]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497879]   .load_avg                      : 10.422461
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497881]   .load_period                   : 8.443211
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497883]   .load_contrib                  : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497885]   .load_tg                       : 14
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497888]   .se->exec_start                : 6289765489.312148
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497890]   .se->vruntime                  : 1302304750.857467
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497893]   .se->sum_exec_runtime          : 461.013420
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497895]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497897]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497899]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497901]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497903]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497906]   .se->statistics.exec_max       : 0.678118
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497908]   .se->statistics.slice_max      : 0.101983
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497910]   .se->statistics.wait_max       : 2.871402
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497913]   .se->statistics.wait_sum       : 9.541349
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497915]   .se->statistics.wait_count     : 28221
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497917]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497919]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497920] cfs_rq[17]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497922]   .exec_clock                    : 344700585.164511
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497925]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497927]   .min_vruntime                  : 1302304750.857467
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497929]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497931]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497933]   .spread0                       : -3973454061.455226
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497936]   .nr_spread_over                : 741
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497937]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497939]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497941]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497943]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497945]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497947]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497949]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497950] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497951]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.497953] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498262]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498263] cpu#18, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498265]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498267]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498269]   .nr_switches                   : 7455211219
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498271]   .nr_load_updates               : 629643232
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498273]   .nr_uninterruptible            : 5
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498275]   .next_balance                  : 4925.544056
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498277]   .curr->pid                     : 128
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498279]   .clock                         : 6289765497.662428
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498281]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498283]   .cpu_load[1]                   : 3
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498285]   .cpu_load[2]                   : 31
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498287]   .cpu_load[3]                   : 45
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498289]   .cpu_load[4]                   : 38
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498291]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498292]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498294]   .sched_count                   : -926556508
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498296]   .sched_goidle                  : -1762212123
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498299]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498301]   .ttwu_count                    : 914760691
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498303]   .ttwu_local                    : -462604583
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498305]
.bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498307]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498308]
cfs_rq[18]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498310]
.exec_clock                    : 1571207744.312799
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498312]
.MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498314]
.min_vruntime                  : 3243800040.632366
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498317]
.max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498319]   .spread
                       : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498321]
.spread0                       : -2031958771.680327
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498323]
.nr_spread_over                : 3557
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498325]
.nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498327]   .load
                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498329]
.load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498331]
.load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498333]
.load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498335]
.load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498337]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498337] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498338]
  task   PID         tree-key  switches  prio     exec-runtime
sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498340]
----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498650]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498650] cpu#19, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498652]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498654]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498656]   .nr_switches                   : 3599158570
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498658]   .nr_load_updates               : 630310079
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498660]   .nr_uninterruptible            : 4
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498663]   .next_balance                  : 4925.544056
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498665]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498667]   .clock                         : 6289765497.663226
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498669]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498671]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498672]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498674]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498676]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498678]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498680]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498682]   .sched_count                   : -626732989
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498684]   .sched_goidle                  : 925306809
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498686]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498688]   .ttwu_count                    : -1881628315
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498690]   .ttwu_local                    : 2056592174
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498692]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498694]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498695] cfs_rq[19]:/autogroup-414268
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498697]   .exec_clock                    : 187276180.861557
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498699]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498702]   .min_vruntime                  : 107070865.507918
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498704]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498706]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498708]   .spread0                       : -5168687946.804775
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498710]   .nr_spread_over                : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498712]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498714]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498716]   .load_avg                      : 266.655968
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498718]   .load_period                   : 5.487920
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498720]   .load_contrib                  : 48
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498722]   .load_tg                       : 4108
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498725]   .se->exec_start                : 6289765484.041362
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498727]   .se->vruntime                  : 15661065321.278192
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498730]   .se->sum_exec_runtime          : 187262750.605687
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498732]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498734]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498736]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498738]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498740]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498743]   .se->statistics.exec_max       : 36.909326
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498745]   .se->statistics.slice_max      : 13.179646
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498747]   .se->statistics.wait_max       : 41.208130
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498749]   .se->statistics.wait_sum       : 5204021.250094
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498751]   .se->statistics.wait_count     : 527137673
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498754]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498756]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498757] cfs_rq[19]:/autogroup-414269
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498759]   .exec_clock                    : 59843445.225623
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498761]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498763]   .min_vruntime                  : 20073189.589512
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498766]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498768]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498770]   .spread0                       : -5255685622.723181
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498772]   .nr_spread_over                : 3
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498774]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498776]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498778]   .load_avg                      : 3820.784938
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498781]   .load_period                   : 5.487139
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498783]   .load_contrib                  : 696
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498785]   .load_tg                       : 7668
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498787]   .se->exec_start                : 6289765495.957959
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498789]   .se->vruntime                  : 15661065384.435444
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498792]   .se->sum_exec_runtime          : 59832687.678389
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498794]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498796]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498798]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498800]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498802]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498805]   .se->statistics.exec_max       : 29.020704
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498807]   .se->statistics.slice_max      : 9.931935
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498809]   .se->statistics.wait_max       : 25.102393
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498811]   .se->statistics.wait_sum       : 2624183.942429
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498814]   .se->statistics.wait_count     : 454604568
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498816]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498818]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498819] cfs_rq[19]:/autogroup-380665
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498821]   .exec_clock                    : 426289.002659
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498823]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498825]   .min_vruntime                  : 420765.051585
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498828]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498830]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498832]   .spread0                       : -5275338047.261108
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498834]   .nr_spread_over                : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498836]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498838]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498840]   .load_avg                      : 463.236238
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498842]   .load_period                   : 5.508837
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498844]   .load_contrib                  : 84
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498846]   .load_tg                       : 117
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498849]   .se->exec_start                : 6289765495.990524
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498851]   .se->vruntime                  : 15661065393.313986
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498853]   .se->sum_exec_runtime          : 424876.084046
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498856]   .se->statistics.wait_start     : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498858]   .se->statistics.sleep_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498860]   .se->statistics.block_start    : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498862]   .se->statistics.sleep_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498864]   .se->statistics.block_max      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498866]   .se->statistics.exec_max       : 15.468820
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498869]   .se->statistics.slice_max      : 0.475659
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498871]   .se->statistics.wait_max       : 10.716355
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498873]   .se->statistics.wait_sum       : 71022.014160
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498875]   .se->statistics.wait_count     : 8343865
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498877]   .se->load.weight               : 2
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498879]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498880] cfs_rq[19]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498882]   .exec_clock                    : 4215924969.755784
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498885]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498887]   .min_vruntime                  : 15661065393.313986
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498889]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498891]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498894]   .spread0                       : 10385306581.001293
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498896]   .nr_spread_over                : 1829
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498898]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498900]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498902]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498904]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498906]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498908]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498910]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498910] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498911]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.498913] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499222]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499223] cpu#20, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499225]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499227]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499229]   .nr_switches                   : 3749833550
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499231]   .nr_load_updates               : 630207833
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499233]   .nr_uninterruptible            : 1
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499235]   .next_balance                  : 4925.544056
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499237]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499239]   .clock                         : 6289765497.638808
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499241]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499243]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499245]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499246]   .cpu_load[3]                   : 4
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499248]   .cpu_load[4]                   : 13
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499250]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499252]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499254]   .sched_count                   : -459811351
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499256]   .sched_goidle                  : 958418073
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499258]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499260]   .ttwu_count                    : -1774778034
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499262]   .ttwu_local                    : -2118748955
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499264]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499266]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499267] cfs_rq[20]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499269]   .exec_clock                    : 3996571456.743069
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499272]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499274]   .min_vruntime                  : 14750607378.337856
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499276]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499278]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499280]   .spread0                       : 9474848566.025163
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499283]   .nr_spread_over                : 2171
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499284]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499286]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499288]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499290]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499292]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499294]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499296]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499297] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499298]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499299] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499581]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499581] cpu#21, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499583]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499585]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499587]   .nr_switches                   : 1308651072
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499589]   .nr_load_updates               : 475751684
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499591]   .nr_uninterruptible            : 244
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499593]   .next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499595]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499597]   .clock                         : 6289765159.382731
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499600]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499601]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499603]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499605]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499607]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499609]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499611]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499612]   .sched_count                   : 1380080260
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499615]   .sched_goidle                  : 356371413
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499617]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499619]   .ttwu_count                    : 599844058
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499621]   .ttwu_local                    : 445633764
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499623]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499625]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499626] cfs_rq[21]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499628]   .exec_clock                    : 2876469290.611337
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499630]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499632]   .min_vruntime                  : 11669871489.209297
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499635]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499637]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499639]   .spread0                       : 6394112676.896604
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499641]   .nr_spread_over                : 721
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499643]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499645]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499647]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499649]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499651]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499653]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499655]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499655] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499656]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499658] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499934]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499934] cpu#22, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499936]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499938]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499940]   .nr_switches                   : 921036293
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499942]   .nr_load_updates               : 307484206
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499944]   .nr_uninterruptible            : 87
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499946]   .next_balance                  : 4925.544025
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499948]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499950]   .clock                         : 6289760061.775448
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499952]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499954]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499956]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499958]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499960]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499962]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499963]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499965]   .sched_count                   : 996726076
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499967]   .sched_goidle                  : 309899535
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499969]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499971]   .ttwu_count                    : 426376939
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499973]   .ttwu_local                    : 260067611
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499975]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499978]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499978] cfs_rq[22]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499980]   .exec_clock                    : 1328986846.313153
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499983]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499985]   .min_vruntime                  : 5369334204.419083
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499987]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499989]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499991]   .spread0                       : 93575392.106390
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499994]   .nr_spread_over                : 679
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499995]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499997]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.499999]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500001]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500003]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500005]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500007]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500008] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500009]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500010] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500287]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500288] cpu#23, 2666.669 MHz
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500290]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500292]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500294]   .nr_switches                   : 667040928
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500296]   .nr_load_updates               : 307495912
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500298]   .nr_uninterruptible            : 8
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500300]   .next_balance                  : 4925.544051
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500302]   .curr->pid                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500304]   .clock                         : 6289765447.848449
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500306]   .cpu_load[0]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500308]   .cpu_load[1]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500310]   .cpu_load[2]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500312]   .cpu_load[3]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500313]   .cpu_load[4]                   : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500315]   .yld_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500317]   .sched_switch                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500319]   .sched_count                   : 726298719
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500321]   .sched_goidle                  : 281320969
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500323]   .avg_idle                      : 1000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500325]   .ttwu_count                    : 314935927
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500327]   .ttwu_local                    : 141561027
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500329]   .bkl_count                     : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500331]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500332] cfs_rq[23]:/autogroup-0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500334]   .exec_clock                    : 342971209.878743
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500336]   .MIN_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500339]   .min_vruntime                  : 1302398037.550836
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500341]   .max_vruntime                  : 0.000001
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500343]   .spread                        : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500345]   .spread0                       : -3973360774.761857
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500347]   .nr_spread_over                : 652
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500349]   .nr_running                    : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500351]   .load                          : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500353]   .load_avg                      : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500355]   .load_period                   : 0.000000
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500357]   .load_contrib                  : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500359]   .load_tg                       : 0
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500361]
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500362] runnable tasks:
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500362]   task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500364] ----------------------------------------------------------------------------------------------------------
May 23 16:34:51 sw-aw2az1-object062 kernel: [6289765.500636]



> Mark and I have been looking at the dump.  There are a few interesting items to point out.
>
> 1) xfs_sync_worker is blocked trying to get log reservation:
>
> PID: 25374  TASK: ffff88013481c6c0  CPU: 3   COMMAND: "kworker/3:83"
>  #0 [ffff88013481fb50] __schedule at ffffffff813aacac
>  #1 [ffff88013481fc98] schedule at ffffffff813ab0c4
>  #2 [ffff88013481fca8] xlog_grant_head_wait at ffffffffa0347b78 [xfs]
>  #3 [ffff88013481fcf8] xlog_grant_head_check at ffffffffa03483e6 [xfs]
>  #4 [ffff88013481fd38] xfs_log_reserve at ffffffffa034852c [xfs]
>  #5 [ffff88013481fd88] xfs_trans_reserve at ffffffffa0344e64 [xfs]
>  #6 [ffff88013481fdd8] xfs_fs_log_dummy at ffffffffa02ec138 [xfs]
>  #7 [ffff88013481fdf8] xfs_sync_worker at ffffffffa02f7be4 [xfs]
>  #8 [ffff88013481fe18] process_one_work at ffffffff8104c53b
>  #9 [ffff88013481fe68] worker_thread at ffffffff8104f0e3
> #10 [ffff88013481fee8] kthread at ffffffff8105395e
> #11 [ffff88013481ff48] kernel_thread_helper at ffffffff813b3ae4
>
> This means that it is not in a position to push the AIL.  It is clear that the
> AIL has plenty of entries which can be pushed.
>
> crash> xfs_ail 0xffff88022112b7c0,
> struct xfs_ail {
> ...
>  xa_ail = {
>    next = 0xffff880144d1c318,
>    prev = 0xffff880170a02078
>  },
>  xa_target = 0x1f00003063,
>
> Here's the first item on the AIL:
>
> ffff880144d1c318
> struct xfs_log_item_t {
>  li_ail = {
>    next = 0xffff880196ea0858,
>    prev = 0xffff88022112b7d0
>  },
>  li_lsn = 0x1f00001c63,                <--- less than xa_target
>  li_desc = 0x0,
>  li_mountp = 0xffff88016adee000,
>  li_ailp = 0xffff88022112b7c0,
>  li_type = 0x123b,
>  li_flags = 0x1,
>  li_bio_list = 0xffff88016afa5cb8,
>  li_cb = 0xffffffffa034de00 <xfs_istale_done>,
>  li_ops = 0xffffffffa035f620,
>  li_cil = {
>    next = 0xffff880144d1c368,
>    prev = 0xffff880144d1c368
>  },
>  li_lv = 0x0,
>  li_seq = 0x3b
> }
>
> So if xfs_sync_worker were not blocked on log reservation it would push these
> items.
>
> 2) The CIL is waiting around too:
>
> crash> xfs_cil_ctx 0xffff880144d1a9c0,
> struct xfs_cil_ctx {
> ...
>  space_used = 0x135f68,
>
> struct log {
> ...
>  l_logsize = 0xa00000,
>
> A00000/8
> 140000                                          <--- XLOG_CIL_SPACE_LIMIT
>
> 140000 - 135F68
> A098
>
> Looks like xlog_cil_push_background will not push the CIL while space used is
> less than XLOG_CIL_SPACE_LIMIT, so that's not going anywhere either.
>
> 3) It may be unrelated to this bug, but we do have an unresolved race in the
> log reservation code: between when log_space_left samples the grant heads
> and when the space is actually granted a bit later, we may grant more space
> than intended.
>
> If you can provide output of 'echo t > /proc/sysrq-trigger' it may be enough
> information to determine if you're seeing the same problem we hit on Saturday.
>
> Thanks,
>
> Ben & Mark

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-24  5:45                                         ` Juerg Haefliger
@ 2012-05-24 14:23                                           ` Ben Myers
  0 siblings, 0 replies; 58+ messages in thread
From: Ben Myers @ 2012-05-24 14:23 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: xfs

On Thu, May 24, 2012 at 07:45:05AM +0200, Juerg Haefliger wrote:
> > Hit this on a filesystem with a regular sized log over the weekend.  If you see
> > this again in production could you gather up task states?
> >
> > echo t > /proc/sysrq-trigger
> 
> Here is the log from a production hang:
> 
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111805] INFO:
> task xfssyncd/dm-4:971 blocked for more than 120 seconds.
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111864] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111951]
> xfssyncd/dm-4   D 000000000000000f     0   971      2 0x00000000
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111957]
> ffff880325e09d00 0000000000000046 ffff880325e09fd8 ffff880325e08000
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111962]
> 0000000000013d00 ffff880326774858 ffff880325e09fd8 0000000000013d00
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111966]
> ffff8803241badc0 ffff8803267744a0 0000000000000282 ffff8806265d7e00
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.111971] Call Trace:
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112016]
> [<ffffffffa00f42d8>] xlog_grant_log_space+0x4a8/0x500 [xfs]
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112023]
> [<ffffffff8105f6f0>] ? default_wake_function+0x0/0x20
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112046]
> [<ffffffffa00f61ff>] xfs_log_reserve+0xff/0x140 [xfs]
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112070]
> [<ffffffffa01021fc>] xfs_trans_reserve+0x9c/0x200 [xfs]
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112092]
> [<ffffffffa00e6383>] xfs_fs_log_dummy+0x43/0x90 [xfs]
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112117]
> [<ffffffffa01193c1>] xfs_sync_worker+0x81/0x90 [xfs]
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112141]
> [<ffffffffa01180f3>] xfssyncd+0x183/0x230 [xfs]
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112164]
> [<ffffffffa0117f70>] ? xfssyncd+0x0/0x230 [xfs]
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112170]
> [<ffffffff810871f6>] kthread+0x96/0xa0
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112176]
> [<ffffffff8100cde4>] kernel_thread_helper+0x4/0x10
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112180]
> [<ffffffff81087160>] ? kthread+0x0/0xa0
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112183]
> [<ffffffff8100cde0>] ? kernel_thread_helper+0x0/0x10
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112241] INFO:
> task ruby1.8:2734 blocked for more than 120 seconds.
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112295] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> May 23 11:44:33 sw-aw2az1-object062 kernel: [6272393.112378] ruby1.8

Great, that's the same one we saw last Saturday.  The xfs_sync_worker is blocked
getting log reservation, so it doesn't push the target on the AIL... which
would possibly release enough space for the log reservation it's blocked on.

I was thinking of experimenting with something like this:

Index: xfs/fs/xfs/xfs_sync.c
===================================================================
--- xfs.orig/fs/xfs/xfs_sync.c
+++ xfs/fs/xfs/xfs_sync.c
@@ -392,15 +392,15 @@ xfs_sync_worker(
        if (down_read_trylock(&mp->m_super->s_umount)) {
                if (!(mp->m_flags & XFS_MOUNT_RDONLY)) {
                        /* dgc: errors ignored here */
-                       if (mp->m_super->s_frozen == SB_UNFROZEN &&
-                           xfs_log_need_covered(mp))
-                               error = xfs_fs_log_dummy(mp);
-                       else
-                               xfs_log_force(mp, 0);
+                       xfs_log_force(mp, 0);

                        /* start pushing all the metadata that is currently
                         * dirty */
                        xfs_ail_push_all(mp->m_ail);
+
+                       if (mp->m_super->s_frozen == SB_UNFROZEN &&
+                           xfs_log_need_covered(mp))
+                               error = xfs_fs_log_dummy(mp);
                }
                up_read(&mp->m_super->s_umount);
        }

By forcing the log and pushing all of the AIL before trying to cover the log,
we don't deadlock that way... but it doesn't fix the greater problem with the
AIL hang.  (It looks like that is being worked on in a separate thread.)  And
there may be some other considerations with respect to covering the log later,
after pushing the AIL.  Something for discussion.

-Ben

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
@ 2012-05-24 20:18 ` Peter Watkins
  2012-05-25  6:28   ` Juerg Haefliger
  2012-06-05 15:21   ` Chris J Arges
  0 siblings, 2 replies; 58+ messages in thread
From: Peter Watkins @ 2012-05-24 20:18 UTC (permalink / raw)
  To: juergh; +Cc: bpm, xfs

Does your kernel have the effect of

0bf6a5bd4b55b466964ead6fa566d8f346a828ee xfs: convert the xfsaild
thread to a workqueue
c7eead1e118fb7e34ee8f5063c3c090c054c3820 xfs: revert to using a
kthread for AIL pushing

In particular, is this code in xfs_trans_ail_push:

       smp_wmb();
       xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
       smp_wmb();

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-24 20:18 ` Peter Watkins
@ 2012-05-25  6:28   ` Juerg Haefliger
  2012-05-25 17:03     ` Peter Watkins
  2012-06-05 15:21   ` Chris J Arges
  1 sibling, 1 reply; 58+ messages in thread
From: Juerg Haefliger @ 2012-05-25  6:28 UTC (permalink / raw)
  To: Peter Watkins; +Cc: bpm, xfs

> Does your kernel have the effect of
>
> 0bf6a5bd4b55b466964ead6fa566d8f346a828ee xfs: convert the xfsaild
> thread to a workqueue

No.


> c7eead1e118fb7e34ee8f5063c3c090c054c3820 xfs: revert to using a
> kthread for AIL pushing

No.


> In particular, is this code in xfs_trans_ail_push:
>
>       smp_wmb();
>       xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
>       smp_wmb();

No. xfs_trans_ail_push looks like this:

void
xfs_trans_ail_push(
        struct xfs_ail  *ailp,
        xfs_lsn_t       threshold_lsn)
{
        xfs_log_item_t  *lip;

        lip = xfs_ail_min(ailp);
        if (lip && !XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
                if (XFS_LSN_CMP(threshold_lsn, ailp->xa_target) > 0)
                        xfsaild_wakeup(ailp, threshold_lsn);
        }
}


FWIW, the XFS driver in my kernel is identical to the vanilla 2.6.38
driver. I'm still trying to get an XFS trace from a production hang. I
do have a crash dump from a production machine with /tmp hanging.
Would it be helpful to share that dump?

...Juerg

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-25  6:28   ` Juerg Haefliger
@ 2012-05-25 17:03     ` Peter Watkins
  2012-06-05 23:54       ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Peter Watkins @ 2012-05-25 17:03 UTC (permalink / raw)
  To: Juerg Haefliger; +Cc: bpm, xfs

On Fri, May 25, 2012 at 2:28 AM, Juerg Haefliger <juergh@gmail.com> wrote:
>> Does your kernel have the effect of
>>
>> 0bf6a5bd4b55b466964ead6fa566d8f346a828ee xfs: convert the xfsaild
>> thread to a workqueue
>
> No.
>
>
>> c7eead1e118fb7e34ee8f5063c3c090c054c3820 xfs: revert to using a
>> kthread for AIL pushing
>
> No.
>
>
>> In particular, is this code in xfs_trans_ail_push:
>>
>>       smp_wmb();
>>       xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
>>       smp_wmb();
>
> No. xfs_trans_ail_push looks like this:
>
> void
> xfs_trans_ail_push(
>        struct xfs_ail  *ailp,
>        xfs_lsn_t       threshold_lsn)
> {
>        xfs_log_item_t  *lip;
>
>        lip = xfs_ail_min(ailp);
>        if (lip && !XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
>                if (XFS_LSN_CMP(threshold_lsn, ailp->xa_target) > 0)
>                        xfsaild_wakeup(ailp, threshold_lsn);
>        }
> }
>
>
> FWIW, the XFS driver in my kernel is identical to the vanilla 2.6.38
> driver. I'm still trying to get a XFS trace from a production hang. I
> do have a crash dump from a production machine with /tmp hanging.
> Would it be helpful to share that dump?
>
> ...Juerg

It looks like the combined effect of those patches, perhaps the write
barriers, fixes one log space hang. That problem exists in 2.6.38.

Reading bug #922 I see your test case reproduces in recent kernels, so
there must be a newer problem also.

I find the reproducer the most useful, so no need to upload the dump.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-24 20:18 ` Peter Watkins
  2012-05-25  6:28   ` Juerg Haefliger
@ 2012-06-05 15:21   ` Chris J Arges
  1 sibling, 0 replies; 58+ messages in thread
From: Chris J Arges @ 2012-06-05 15:21 UTC (permalink / raw)
  To: xfs



Peter Watkins-3 wrote:
> 
> Does your kernel have the effect of
> 
> 0bf6a5bd4b55b466964ead6fa566d8f346a828ee xfs: convert the xfsaild
> thread to a workqueue
> 

This patch is present in the Ubuntu Precise 3.2 kernel, and I have been able
to reproduce the failure with this patch applied.


Peter Watkins-3 wrote:
> 
> c7eead1e118fb7e34ee8f5063c3c090c054c3820 xfs: revert to using a
> kthread for AIL pushing
> 

I see this patch is 0030807c66f058230bcb20d2573bcaf28852e804 upstream, and
this patch is also present in the 3.2 Precise kernel that was tested and
exhibited the failure.


Peter Watkins-3 wrote:
> 
> In particular, is this code in xfs_trans_ail_push:
> 
>        smp_wmb();
>        xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
>        smp_wmb();
> 

In fs/xfs/xfs_trans_ail.c:xfs_ail_push():

        /*
         * Ensure that the new target is noticed in push code before it
         * clears the XFS_AIL_PUSHING_BIT.
         */
        smp_wmb();
        xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
        smp_wmb();



-- 
View this message in context: http://old.nabble.com/Still-seeing-hangs-in-xlog_grant_log_space-tp33732886p33964752.html
Sent from the Xfs - General mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-25 17:03     ` Peter Watkins
@ 2012-06-05 23:54       ` Dave Chinner
  2012-06-06 13:40         ` Brian Foster
  2012-06-11 20:59         ` Mark Tinguely
  0 siblings, 2 replies; 58+ messages in thread
From: Dave Chinner @ 2012-06-05 23:54 UTC (permalink / raw)
  To: Peter Watkins; +Cc: Juerg Haefliger, bpm, xfs

On Fri, May 25, 2012 at 01:03:04PM -0400, Peter Watkins wrote:
> On Fri, May 25, 2012 at 2:28 AM, Juerg Haefliger <juergh@gmail.com> wrote:
> >> Does your kernel have the effect of
> >>
> >> 0bf6a5bd4b55b466964ead6fa566d8f346a828ee xfs: convert the xfsaild
> >> thread to a workqueue
> >
> > No.
> >
> >
> >> c7eead1e118fb7e34ee8f5063c3c090c054c3820 xfs: revert to using a
> >> kthread for AIL pushing
> >
> > No.
> >
> >
> >> In particular, is this code in xfs_trans_ail_push:
> >>
> >>       smp_wmb();
> >>       xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
> >>       smp_wmb();
> >
> > No. xfs_trans_ail_push looks like this:
> >
> > void
> > xfs_trans_ail_push(
> >        struct xfs_ail  *ailp,
> >        xfs_lsn_t       threshold_lsn)
> > {
> >        xfs_log_item_t  *lip;
> >
> >        lip = xfs_ail_min(ailp);
> >        if (lip && !XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
> >                if (XFS_LSN_CMP(threshold_lsn, ailp->xa_target) > 0)
> >                        xfsaild_wakeup(ailp, threshold_lsn);
> >        }
> > }
> >
> >
> > FWIW, the XFS driver in my kernel is identical to the vanilla 2.6.38
> > driver. I'm still trying to get a XFS trace from a production hang. I
> > do have a crash dump from a production machine with /tmp hanging.
> > Would it be helpful to share that dump?
> >
> > ...Juerg
> 
> It looks like the combined effect of those patches, perhaps the write
> barriers, fix one log space hang. That problem exists in 2.6.38.

There have been a huge number of fixes for these problems since
2.6.38. It doesn't help us at all to keep testing on 2.6.38,
especially as that kernel is not supported, and I'd suggest that you
migrate production off it sooner rather than later.

> Reading bug #922 I see your test case reproduces in recent kernels, so
> there must be a newer problem also.

Right, that's what we need to find - it appears to be a CIL
stall/accounting leak, completely unrelated to all the other AIL/log
space stalls that have been occurring. The last thing is that I was
waiting for more information on the stall that Mark T @ SGI was able
to reproduce. I haven't heard anything from him since I asked for
more information on May 23....

> I find the reproducer the most useful, so no need to upload the dump.

At this point, running on a 3.5-rc1 kernel is what we need to get
working reliably. Once we have the problems solved there, we can
work out what set of patches need to be backported to 3.0-stable and
other kernels to fix the problems in those supported kernels...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-05 23:54       ` Dave Chinner
@ 2012-06-06 13:40         ` Brian Foster
  2012-06-06 17:41           ` Mark Tinguely
                             ` (2 more replies)
  2012-06-11 20:59         ` Mark Tinguely
  1 sibling, 3 replies; 58+ messages in thread
From: Brian Foster @ 2012-06-06 13:40 UTC (permalink / raw)
  To: xfs

On 06/05/2012 07:54 PM, Dave Chinner wrote:
> On Fri, May 25, 2012 at 01:03:04PM -0400, Peter Watkins wrote:
>> On Fri, May 25, 2012 at 2:28 AM, Juerg Haefliger <juergh@gmail.com> wrote:

snip

> At this point, running on a 3.5-rc1 kernel is what we need to get
> working reliably. Once we have the problems solved there, we can
> work out what set of patches need to be backported to 3.0-stable and
> other kernels to fix the problems in those supported kernels...
> 

Hi guys,

I've been reproducing a similar stall in my testing of the 're-enable
xfsaild idle mode' patch/thread that only occurs for me in the xfs tree.
I was able to do a bisect from rc2 down to commit 43ff2122, though the
history of this issue makes me wonder if this commit just makes the
problem more reproducible as opposed to introducing it. Anyway, the
characteristics I observe so far:

- Task blocked for more than 120s message in xlog_grant_head_wait(). I
see xfs_sync_worker() in my current bt, but I'm pretty sure I've seen
the same issue without it involved.
- The AIL is not empty/idle. It spins with a relatively small and
constant number of entries (I've seen ~8-40). These items are all always
marked as "flushing."
- Via crash, all the inodes in the ail appear to be marked as stale
(i.e. li_cb == xfs_istale_done). The inode flags are
XFS_ISTALE|XFS_IRECLAIMABLE|XFS_IFLOCK.
- The iflock in particular is why the ail marks these items 'flushing'
and why nothing seems to proceed any further (xfsaild just waits for
these to complete). I can kick the fs back into action with a 'sync.'

It looks like we only mark an inode stale when an inode cluster is
freed, so I repeated this test with 'ikeep' and cannot reproduce. I'm
not sure if anybody is testing for this in recent kernels (Mark?), but
if so I'd be curious if ikeep has any effect on your test (BTW, this is
still the looping 273 xfstest).

It seems like there could be some kind of race here with inodes being
marked stale, but also appears that either completion (xfs_istale_done()
or xfs_iflush_done()) should release the flush lock. I'll see if I can
trace it further and get anything useful...

Brian

> Cheers,
> 
> Dave.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-05-18 10:10                                           ` Dave Chinner
  2012-05-18 14:42                                             ` Mark Tinguely
@ 2012-06-06 15:00                                             ` Chris J Arges
  2012-06-07  0:49                                               ` Dave Chinner
  1 sibling, 1 reply; 58+ messages in thread
From: Chris J Arges @ 2012-06-06 15:00 UTC (permalink / raw)
  To: xfs



Dave Chinner wrote:
> 
> It seems unlikely, but if you turn on kmemleak it might find a
> memory leak or overwrite that is causing this.
> 

Running kmemleak on a 3.0.0 kernel results in the following:

dmesg:
[ 3855.751393] XFS (sda2): xlog_verify_grant_tail: space > BBTOB(tail_blocks)
[22987.932317] kmemleak: 1 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

kmemleak:
unreferenced object 0xffff88015612a340 (size 208):
 comm "copy-files", pid 1483, jiffies 4310560285 (age 16571.656s)
 hex dump (first 32 bytes):
    00 f3 1b 57 01 88 ff ff 00 08 00 58 01 88 ff ff  ...W.......X....
    a0 a7 32 a0 ff ff ff ff 04 00 00 00 00 00 00 00  ..2.............
 backtrace:
    [<ffffffff815c6196>] kmemleak_alloc+0x26/0x50
    [<ffffffff811542e3>] kmem_cache_alloc+0x123/0x190
    [<ffffffffa0313967>] kmem_zone_alloc+0x67/0xe0 [xfs]
    [<ffffffffa03139fd>] kmem_zone_zalloc+0x1d/0x50 [xfs]
    [<ffffffffa02b033f>] xfs_allocbt_init_cursor+0xdf/0x130 [xfs]
    [<ffffffffa02adb4c>] xfs_alloc_ag_vextent_near+0x6c/0xd80 [xfs]
    [<ffffffffa02aea88>] xfs_alloc_ag_vextent+0x228/0x290 [xfs]
    [<ffffffffa02af7d9>] xfs_alloc_vextent+0x649/0x8c0 [xfs]
    [<ffffffffa02bcfc6>] xfs_bmap_btalloc+0x286/0x7c0 [xfs]
    [<ffffffffa02bd521>] xfs_bmap_alloc+0x21/0x40 [xfs]
    [<ffffffffa02c6ba3>] xfs_bmapi+0xdc3/0x1950 [xfs]
    [<ffffffffa02f5059>] xfs_iomap_write_allocate+0x179/0x340 [xfs]
    [<ffffffffa03147d5>] xfs_map_blocks+0x215/0x380 [xfs]
    [<ffffffffa0315792>] xfs_vm_writepage+0x1b2/0x510 [xfs]
    [<ffffffff811142e7>] __writepage+0x17/0x40
    [<ffffffff8111485d>] write_cache_pages+0x20d/0x460

analysis:
    I turned on kmemleak, function tracing, and xfs debugging in the build
in which I ran this. So far I’ve been able to run the copy-files script for
about 24 hrs without failure. I’m not sure if this is because all these
features are turned on and it has slowed something down (so it takes longer
to reproduce), or if the debugging code is changing the behavior. I’m not
sure if this backtrace is valid, so I’m attaching an annotated objdump of my
xfs module.

--chris j arges

http://old.nabble.com/file/p33970511/xfs_objdump_3.0.txt.tar.bz2
xfs_objdump_3.0.txt.tar.bz2 

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-06 13:40         ` Brian Foster
@ 2012-06-06 17:41           ` Mark Tinguely
  2012-06-11 20:42             ` Chris J Arges
  2012-06-06 22:03           ` Mark Tinguely
  2012-06-07  1:35           ` Dave Chinner
  2 siblings, 1 reply; 58+ messages in thread
From: Mark Tinguely @ 2012-06-06 17:41 UTC (permalink / raw)
  To: Brian Foster; +Cc: xfs

On 06/06/12 08:40, Brian Foster wrote:

>
> Hi guys,
>
> I've been reproducing a similar stall in my testing of the 're-enable
> xfsaild idle mode' patch/thread that only occurs for me in the xfs tree.
> I was able to do a bisect from rc2 down to commit 43ff2122, though the
> history of this issue makes me wonder if this commit just makes the
> problem more reproducible as opposed to introducing it. Anyways, the
> characteristics I observe so far:
>
> - Task blocked for more than 120s message in xlog_grant_head_wait(). I
> see xfs_sync_worker() in my current bt, but I'm pretty sure I've seen
> the same issue without it involved.
> - The AIL is not empty/idle. It spins with a relatively small and
> constant number of entries (I've seen ~8-40). These items are all always
> marked as "flushing."
> - Via crash, all the inodes in the ail appear to be marked as stale
> (i.e. li_cb == xfs_istale_done). The inode flags are
> XFS_ISTALE|XFS_IRECLAIMABLE|XFS_IFLOCK.
> - The iflock in particular is why the ail marks these items 'flushing'
> and why nothing seems to proceed any further (xfsaild just waits for
> these to complete). I can kick the fs back into action with a 'sync.'
>
> It looks like we only mark in inode stale when an inode cluster is
> freed, so I repeated this test with 'ikeep' and cannot reproduce. I'm
> not sure if anybody is testing for this in recent kernels (Mark?), but
> if so I'd be curious if ikeep has any effect on your test (BTW, this is
> still the looping 273 xfstest).
>
> It seems like there could be some kind of race here with inodes being
> marked stale, but also appears that either completion (xfs_istale_done()
> or xfs_iflush_done()) should release the flush lock. I'll see if I can
> trace it further and get anything useful...
>
> Brian
>

I am looking at several instances of the log hang on Linux 3.4rc2.

The problem was originally reported on Linux 2.6.38-8.

The perl script to recreate this problem is very similar to xfstest 273.
I use that because it avoids all the filesystem mount/unmount that
happen between the test 273 loops. You can build the log size that you
want to test, create the directories and let it run until it hangs.

I will look at the AIL entries in my current hangs. The problem is that the
filesystem can be made to hang with a completely empty AIL.

Sometimes the flusher is hung trying to write out pages. I will go and
see if this just happened to fire after a hang, or if the pages are
important.

--Mark.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-06 13:40         ` Brian Foster
  2012-06-06 17:41           ` Mark Tinguely
@ 2012-06-06 22:03           ` Mark Tinguely
  2012-06-06 23:04             ` Brian Foster
  2012-06-07  1:35           ` Dave Chinner
  2 siblings, 1 reply; 58+ messages in thread
From: Mark Tinguely @ 2012-06-06 22:03 UTC (permalink / raw)
  To: Brian Foster; +Cc: xfs-oss

PS: I hung an IA64 machine (good for testing the hanging theory; I am not
so good at IA64 crash debugging) running the Perl test program with the
"ikeep" option enabled.

--Mark.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-06 22:03           ` Mark Tinguely
@ 2012-06-06 23:04             ` Brian Foster
  0 siblings, 0 replies; 58+ messages in thread
From: Brian Foster @ 2012-06-06 23:04 UTC (permalink / raw)
  To: Mark Tinguely; +Cc: xfs-oss

On 06/06/2012 06:03 PM, Mark Tinguely wrote:
> PS I hung a IA64 machine (good for testing the hanging theory, I am not
> so good at IA64 crash debugs) running the PERL test program with "ikeep"
> option enabled.
> 
> --Mark.

Thanks. It's sounding more like this is something different.
Unfortunately, shortly after sending my last email I somehow toasted my
test VM. I've recreated it and I'm currently trying to see if I can
continue to reproduce the stale inode flushing stall. I'll give the perl
script a whirl as well...

Brian

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-06 15:00                                             ` Chris J Arges
@ 2012-06-07  0:49                                               ` Dave Chinner
  0 siblings, 0 replies; 58+ messages in thread
From: Dave Chinner @ 2012-06-07  0:49 UTC (permalink / raw)
  To: Chris J Arges; +Cc: xfs

On Wed, Jun 06, 2012 at 08:00:00AM -0700, Chris J Arges wrote:
> 
> 
> Dave Chinner wrote:
> > 
> > It seems unlikely, but if you turn on kmemleak it might find a
> > memory leak or overwrite that is causing this.
> > 
> 
> Running kmemleak on a 3.0.0 kernel results in the following:
> 
> dmesg:
> [ 3855.751393] XFS (sda2): xlog_verify_grant_tail: space > BBTOB(tail_blocks)
> [22987.932317] kmemleak: 1 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
> 
> kmemleak:
> unreferenced object 0xffff88015612a340 (size 208):
>  comm "copy-files", pid 1483, jiffies 4310560285 (age 16571.656s)
>  hex dump (first 32 bytes):
>     00 f3 1b 57 01 88 ff ff 00 08 00 58 01 88 ff ff  ...W.......X....
>     a0 a7 32 a0 ff ff ff ff 04 00 00 00 00 00 00 00  ..2.............
>  backtrace:
>     [<ffffffff815c6196>] kmemleak_alloc+0x26/0x50
>     [<ffffffff811542e3>] kmem_cache_alloc+0x123/0x190
>     [<ffffffffa0313967>] kmem_zone_alloc+0x67/0xe0 [xfs]
>     [<ffffffffa03139fd>] kmem_zone_zalloc+0x1d/0x50 [xfs]
>     [<ffffffffa02b033f>] xfs_allocbt_init_cursor+0xdf/0x130 [xfs]
>     [<ffffffffa02adb4c>] xfs_alloc_ag_vextent_near+0x6c/0xd80 [xfs]
>     [<ffffffffa02aea88>] xfs_alloc_ag_vextent+0x228/0x290 [xfs]
>     [<ffffffffa02af7d9>] xfs_alloc_vextent+0x649/0x8c0 [xfs]
>     [<ffffffffa02bcfc6>] xfs_bmap_btalloc+0x286/0x7c0 [xfs]
>     [<ffffffffa02bd521>] xfs_bmap_alloc+0x21/0x40 [xfs]
>     [<ffffffffa02c6ba3>] xfs_bmapi+0xdc3/0x1950 [xfs]
>     [<ffffffffa02f5059>] xfs_iomap_write_allocate+0x179/0x340 [xfs]
>     [<ffffffffa03147d5>] xfs_map_blocks+0x215/0x380 [xfs]
>     [<ffffffffa0315792>] xfs_vm_writepage+0x1b2/0x510 [xfs]
>     [<ffffffff811142e7>] __writepage+0x17/0x40
>     [<ffffffff8111485d>] write_cache_pages+0x20d/0x460
> 
> analysis:
>     I turned on kmemleak, function tracing, and xfs debugging in the build
> in which I ran this. So far I’ve been able to run the copy-files script for
> about 24 hrs  without failure. I’m not sure if this is because all these
> features are turned on and it has slowed something down (so it takes longer
> to reproduce), or if the debugging code is changing the behavior. I’m not
> sure if this backtrace is valid, so I’m attaching an annotated objdump of my
> xfs module.

OK, this is not what we are looking for. Yes, there's a cursor leak
that I can find just by looking at the code, but that's definitely
not related to the AIL issue....

I'll send a patch to fix this leak soon.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-06 13:40         ` Brian Foster
  2012-06-06 17:41           ` Mark Tinguely
  2012-06-06 22:03           ` Mark Tinguely
@ 2012-06-07  1:35           ` Dave Chinner
  2012-06-07 14:16             ` Brian Foster
  2 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-06-07  1:35 UTC (permalink / raw)
  To: Brian Foster; +Cc: xfs

On Wed, Jun 06, 2012 at 09:40:09AM -0400, Brian Foster wrote:
> On 06/05/2012 07:54 PM, Dave Chinner wrote:
> > On Fri, May 25, 2012 at 01:03:04PM -0400, Peter Watkins wrote:
> >> On Fri, May 25, 2012 at 2:28 AM, Juerg Haefliger <juergh@gmail.com> wrote:
> 
> snip
> 
> > At this point, running on a 3.5-rc1 kernel is what we need to get
> > working reliably. Once we have the problems solved there, we can
> > work out what set of patches need to be backported to 3.0-stable and
> > other kernels to fix the problems in those supported kernels...
> > 
> 
> Hi guys,
> 
> I've been reproducing a similar stall in my testing of the 're-enable
> xfsaild idle mode' patch/thread that only occurs for me in the xfs tree.
> I was able to do a bisect from rc2 down to commit 43ff2122, though the
> history of this issue makes me wonder if this commit just makes the
> problem more reproducible as opposed to introducing it. Anyways, the
> characteristics I observe so far:

More reproducible. See below.

> - Task blocked for more than 120s message in xlog_grant_head_wait(). I
> see xfs_sync_worker() in my current bt, but I'm pretty sure I've seen
> the same issue without it involved.
> - The AIL is not empty/idle. It spins with a relatively small and
> constant number of entries (I've seen ~8-40). These items are all always
> marked as "flushing."
> - Via crash, all the inodes in the ail appear to be marked as stale
> (i.e. li_cb == xfs_istale_done). The inode flags are
> XFS_ISTALE|XFS_IRECLAIMABLE|XFS_IFLOCK.
> - The iflock in particular is why the ail marks these items 'flushing'
> and why nothing seems to proceed any further (xfsaild just waits for
> these to complete). I can kick the fs back into action with a 'sync.'

Right, I've seen this as well. What I analysed in the case I saw was
that the underlying buffer is also stale - correctly - and it is
pinned in memory so cannot be flushed. HEnce all the inodes are
inteh same state. The reason they are pinned in memory is that they
items were still active in the CIL, and a log force was need to
checkpoint the CIL and cause the checkpoint to be committed. Once
the CIL checkpoint is committed, the stale items are freed from the
AIL, and everything goes onward. The problem is that with the
xfs_sync_worker stalled, nothing triggers a log force because the
inode is returning "flushing" to the AIL pushes.

However, your analysis has allowed me to find what I think is the
bug causing your problem - what I missed when I last saw this was
the significance of the order of checks in xfs_inode_item_push().
That is, we check for whether the inode is flush locked before we
check if it is stale.

By definition, a dirty stale inode must be attached to the
underlying stale buffer and that requires it to be flush locked, as
can be seen in xfs_ifree_cluster:

>>>>>>                  xfs_iflock(ip);
>>>>>>                  xfs_iflags_set(ip, XFS_ISTALE);

                        /*
                         * we don't need to attach clean inodes or those only
                         * with unlogged changes (which we throw away, anyway).
                         */
                        iip = ip->i_itemp;
                        if (!iip || xfs_inode_clean(ip)) {
                                ASSERT(ip != free_ip);
                                xfs_ifunlock(ip);
                                xfs_iunlock(ip, XFS_ILOCK_EXCL);
                                continue;
                        }

                        iip->ili_last_fields = iip->ili_fields;
                        iip->ili_fields = 0;
                        iip->ili_logged = 1;
                        xfs_trans_ail_copy_lsn(mp->m_ail, &iip->ili_flush_lsn,
                                                &iip->ili_item.li_lsn);

>>>>>>                  xfs_buf_attach_iodone(bp, xfs_istale_done,
>>>>>>                                            &iip->ili_item);


So basically, the problem is that we should be checking for stale
before flushing in xfs_inode_item_push(). I'll send out a patch that
fixes this in a few minutes.

Good analysis work, Brian!

BTW, I think the underlying cause might be a different manifestation
of the race described in the comment above
xfs_inode_item_committed(), only this time with inodes that are
already in the AIL....

And FWIW, it doesn't explain the CIL stalls that seem to be the other
cause of the problem when the AIL is empty...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-07  1:35           ` Dave Chinner
@ 2012-06-07 14:16             ` Brian Foster
  2012-06-08  0:28               ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Brian Foster @ 2012-06-07 14:16 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On 06/06/2012 09:35 PM, Dave Chinner wrote:
> On Wed, Jun 06, 2012 at 09:40:09AM -0400, Brian Foster wrote:
>> On 06/05/2012 07:54 PM, Dave Chinner wrote:
>>> On Fri, May 25, 2012 at 01:03:04PM -0400, Peter Watkins wrote:
>>>> On Fri, May 25, 2012 at 2:28 AM, Juerg Haefliger <juergh@gmail.com> wrote:
>>
>> snip
>>
>>> At this point, running on a 3.5-rc1 kernel is what we need to get
>>> working reliably. Once we have the problems solved there, we can
>>> work out what set of patches need to be backported to 3.0-stable and
>>> other kernels to fix the problems in those supported kernels...
>>>
>>
>> Hi guys,
>>
>> I've been reproducing a similar stall in my testing of the 're-enable
>> xfsaild idle mode' patch/thread that only occurs for me in the xfs tree.
>> I was able to do a bisect from rc2 down to commit 43ff2122, though the
>> history of this issue makes me wonder if this commit just makes the
>> problem more reproducible as opposed to introducing it. Anyways, the
>> characteristics I observe so far:
> 
> More reproducible. See below.
> 
>> - Task blocked for more than 120s message in xlog_grant_head_wait(). I
>> see xfs_sync_worker() in my current bt, but I'm pretty sure I've seen
>> the same issue without it involved.
>> - The AIL is not empty/idle. It spins with a relatively small and
>> constant number of entries (I've seen ~8-40). These items are all always
>> marked as "flushing."
>> - Via crash, all the inodes in the ail appear to be marked as stale
>> (i.e. li_cb == xfs_istale_done). The inode flags are
>> XFS_ISTALE|XFS_IRECLAIMABLE|XFS_IFLOCK.
>> - The iflock in particular is why the ail marks these items 'flushing'
>> and why nothing seems to proceed any further (xfsaild just waits for
>> these to complete). I can kick the fs back into action with a 'sync.'
> 
> Right, I've seen this as well. What I analysed in the case I saw was
> that the underlying buffer is also stale - correctly - and it is
> pinned in memory so cannot be flushed. Hence all the inodes are
> in the same state. The reason they are pinned in memory is that the
> items were still active in the CIL, and a log force was needed to
> checkpoint the CIL and cause the checkpoint to be committed. Once
> the CIL checkpoint is committed, the stale items are freed from the
> AIL, and everything goes onward. The problem is that with the
> xfs_sync_worker stalled, nothing triggers a log force because the
> inode is returning "flushing" to the AIL pushes.
> 

Makes sense, thanks.

> However, your analysis has allowed me to find what I think is the
> bug causing your problem - what I missed when I last saw this was
> the significance of the order of checks in xfs_inode_item_push().
> That is, we check for whether the inode is flush locked before we
> check if it is stale.
> 

Ok, I noticed that up in inode_item_push() simply because it looked
pretty clear that it could get things moving again, but hadn't
established enough context for myself to understand whether that was
correct.

> By definition, a dirty stale inode must be attached to the
> underlying stale buffer and that requires it to be flush locked, as
> can be seen in xfs_ifree_cluster:
> 
>>>>>>>                  xfs_iflock(ip);
>>>>>>>                  xfs_iflags_set(ip, XFS_ISTALE);
> 
>                         /*
>                          * we don't need to attach clean inodes or those only
>                          * with unlogged changes (which we throw away, anyway).
>                          */
>                         iip = ip->i_itemp;
>                         if (!iip || xfs_inode_clean(ip)) {
>                                 ASSERT(ip != free_ip);
>                                 xfs_ifunlock(ip);
>                                 xfs_iunlock(ip, XFS_ILOCK_EXCL);
>                                 continue;
>                         }
> 
>                         iip->ili_last_fields = iip->ili_fields;
>                         iip->ili_fields = 0;
>                         iip->ili_logged = 1;
>                         xfs_trans_ail_copy_lsn(mp->m_ail, &iip->ili_flush_lsn,
>                                                 &iip->ili_item.li_lsn);
> 
>>>>>>>                  xfs_buf_attach_iodone(bp, xfs_istale_done,
>>>>>>>                                            &iip->ili_item);
> 
> 
> So basically, the problem is that we should be checking for stale
> before flushing in xfs_inode_item_push(). I'll send out a patch that
> fixes this in a few minutes.
> 

Ah! I had focused on the code a bit earlier in xfs_ifree_cluster() where
it iterates the inodes attached to the buffer and marks them stale. The
comment there indicates the buffers are iflocked, but I didn't quite
understand how/why. I now see that as part of the
xfs_buf_attach_iodone(). That and your description here help clear that up.

So is it reasonable to expect a situation where an inode is sitting in
the AIL, flush locked and marked stale? E.g., an inode is
created/modified, logged, committed, checkpointed and added to the AIL
pending writeout. Subsequently that and other inodes in the same buf are
deleted and the cluster removed, resulting in everything marked stale..?

The part I'm still a bit hazy on is from the AIL perspective, it looks
like if the inode was flush locked, it should result in a buffer I/O
submission up in xfsaild in the same thread, which means I would expect
either the iflush_done() or istale_done() completion to fire. Is there
another code path where the inode flush lock is async from the buffer
I/O submission (it looks like reclaim leads into xfs_iflush()), or am I
off the rails somewhere..? :P

> Good analysis work, Brian!
> 

Thanks!

> BTW, I think the underlying cause might be a different manifestation
> of the race described in the comment above
> xfs_inode_item_committed(), only this time with inodes that are
> already in the AIL....
> 
> And FWIW, it doesn't explain the CIL stalls that seem to be the other
> cause of the problem when the AIL is empty...
> 

Yeah, it seems like different issues. I'm still trying to repeat this
original problem. So far I've reproduced hung task messages, but they
don't actually correspond to complete stalls. Hopefully I can get the
error back, but either way I'll plan to eventually test a change in
inode_item_push() and see what happens. Thanks again.

Brian

> Cheers,
> 
> Dave.


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-07 14:16             ` Brian Foster
@ 2012-06-08  0:28               ` Dave Chinner
  2012-06-08 17:09                 ` Ben Myers
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-06-08  0:28 UTC (permalink / raw)
  To: Brian Foster; +Cc: xfs

On Thu, Jun 07, 2012 at 10:16:22AM -0400, Brian Foster wrote:
> On 06/06/2012 09:35 PM, Dave Chinner wrote:
> > On Wed, Jun 06, 2012 at 09:40:09AM -0400, Brian Foster wrote:
> >> On 06/05/2012 07:54 PM, Dave Chinner wrote:
> >>> On Fri, May 25, 2012 at 01:03:04PM -0400, Peter Watkins wrote:
> >>>> On Fri, May 25, 2012 at 2:28 AM, Juerg Haefliger <juergh@gmail.com> wrote:
> >>
> >> snip
> >>
> >>> At this point, running on a 3.5-rc1 kernel is what we need to get
> >>> working reliably. Once we have the problems solved there, we can
> >>> work out what set of patches need to be backported to 3.0-stable and
> >>> other kernels to fix the problems in those supported kernels...
> >>>
> >>
> >> Hi guys,
> >>
> >> I've been reproducing a similar stall in my testing of the 're-enable
> >> xfsaild idle mode' patch/thread that only occurs for me in the xfs tree.
> >> I was able to do a bisect from rc2 down to commit 43ff2122, though the
> >> history of this issue makes me wonder if this commit just makes the
> >> problem more reproducible as opposed to introducing it. Anyways, the
> >> characteristics I observe so far:
> > 
> > More reproducible. See below.
> > 
> >> - Task blocked for more than 120s message in xlog_grant_head_wait(). I
> >> see xfs_sync_worker() in my current bt, but I'm pretty sure I've seen
> >> the same issue without it involved.
> >> - The AIL is not empty/idle. It spins with a relatively small and
> >> constant number of entries (I've seen ~8-40). These items are all always
> >> marked as "flushing."
> >> - Via crash, all the inodes in the ail appear to be marked as stale
> >> (i.e. li_cb == xfs_istale_done). The inode flags are
> >> XFS_ISTALE|XFS_IRECLAIMABLE|XFS_IFLOCK.
> >> - The iflock in particular is why the ail marks these items 'flushing'
> >> and why nothing seems to proceed any further (xfsaild just waits for
> >> these to complete). I can kick the fs back into action with a 'sync.'
> > 
> > Right, I've seen this as well. What I analysed in the case I saw was
> > that the underlying buffer is also stale - correctly - and it is
> > pinned in memory so cannot be flushed. Hence all the inodes are
> > in the same state. The reason they are pinned in memory is that the
> > items were still active in the CIL, and a log force was needed to
> > checkpoint the CIL and cause the checkpoint to be committed. Once
> > the CIL checkpoint is committed, the stale items are freed from the
> > AIL, and everything goes onward. The problem is that with the
> > xfs_sync_worker stalled, nothing triggers a log force because the
> > inode is returning "flushing" to the AIL pushes.
> > 
> 
> Makes sense, thanks.
> 
> > However, your analysis has allowed me to find what I think is the
> > bug causing your problem - what I missed when I last saw this was
> > the significance of the order of checks in xfs_inode_item_push().
> > That is, we check for whether the inode is flush locked before we
> > check if it is stale.
> > 
> 
> Ok, I noticed that up in inode_item_push() simply because it looked
> pretty clear that it could get things moving again, but hadn't
> established enough context for myself to understand whether that was
> correct.
> 
> > By definition, a dirty stale inode must be attached to the
> > underlying stale buffer and that requires it to be flush locked, as
> > can be seen in xfs_ifree_cluster:
> > 
> >>>>>>>                  xfs_iflock(ip);
> >>>>>>>                  xfs_iflags_set(ip, XFS_ISTALE);
> > 
> >                         /*
> >                          * we don't need to attach clean inodes or those only
> >                          * with unlogged changes (which we throw away, anyway).
> >                          */
> >                         iip = ip->i_itemp;
> >                         if (!iip || xfs_inode_clean(ip)) {
> >                                 ASSERT(ip != free_ip);
> >                                 xfs_ifunlock(ip);
> >                                 xfs_iunlock(ip, XFS_ILOCK_EXCL);
> >                                 continue;
> >                         }
> > 
> >                         iip->ili_last_fields = iip->ili_fields;
> >                         iip->ili_fields = 0;
> >                         iip->ili_logged = 1;
> >                         xfs_trans_ail_copy_lsn(mp->m_ail, &iip->ili_flush_lsn,
> >                                                 &iip->ili_item.li_lsn);
> > 
> >>>>>>>                  xfs_buf_attach_iodone(bp, xfs_istale_done,
> >>>>>>>                                            &iip->ili_item);
> > 
> > 
> > So basically, the problem is that we should be checking for stale
> > before flushing in xfs_inode_item_push(). I'll send out a patch that
> > fixes this in a few minutes.
> > 
> 
> Ah! I had focused on the code a bit earlier in xfs_ifree_cluster() where
> it iterates the inodes attached to the buffer and marks them stale. The
> comment there indicates the buffers are iflocked, but I didn't quite
> understand how/why. I now see that as part of the
> xfs_buf_attach_iodone(). That and your description here help clear that up.
> 
> So is it reasonable to expect a situation where an inode is sitting in
> the AIL, flush locked and marked stale? E.g., an inode is
> created/modified, logged, committed, checkpointed and added to the AIL
> pending writeout. Subsequently that and other inodes in the same buf are
> deleted and the cluster removed, resulting in everything marked stale..?

Right - if the inodes are already in the AIL, and then there's a
inode cluster free transaction, you can end up with stale inodes in
the AIL that are flush locked.

> The part I'm still a bit hazy on is from the AIL perspective, it looks
> like if the inode was flush locked, it should result in a buffer I/O
> submission up in xfsaild in the same thread, which means I would expect
> either the iflush_done() or istale_done() completion to fire. Is there
> another code path where the inode flush lock is async from the buffer
> I/O submission (it looks like reclaim leads into xfs_iflush()), or am I
> off the rails somewhere..? :P

Good question. This is why I think the problem is related to the
race condition described above xfs_inode_item_committed(). The
thing that I couldn't initially work out was why the inodes weren't
pinned if they were stale.

The reason for that is that if we add the inodes to the
buffer in the manner I pointed out, they are not added to the final
unlink transaction and hence are never pinned by the transaction.
Hence we can have the situation where we have the buffer not in the
AIL (only in the CIL and pinned in memory because it had just been
written prior to the unlink of the final inode in the chunk) with
the in-memory inodes only stale and flush locked in the AIL. No
progress can be made until a log force occurs to checkpoint and
commit and unpin the buffer in the CIL and run the buffer iodone
completions which then removes the inodes from the AIL.

And because the inodes pin the tail of the AIL, there's not enough
space in the log for the xfs_sync_worker to trigger a log force via
the dummy transaction, and hence we deadlock.

FWIW, there's an argument that can be made here for an unconditional
log force in xfs_sync_worker() to provide a "get out gaol free" card
here. The thing is, I would prefer that the filesystems hang so that
we find out about these issues and have to understand them and fix
them. IMO, there is nothing harder to detect and debug than short
duration, temporary stalls of the filesystem...
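
For concreteness, the band-aid being weighed here is tiny. A pseudocode
sketch only, not the actual patch (per Mark's later message it amounts to
an xfs_log_force() ahead of the existing xfs_fs_log_dummy() call in the
periodic sync worker):

```c
/* Pseudocode sketch: inside xfs_sync_worker(), before the existing
 * dummy transaction. The unconditional force checkpoints the CIL,
 * unpinning the stale buffer so its iodone callbacks can remove the
 * flush-locked inodes from the AIL. */
	xfs_log_force(mp, XFS_LOG_SYNC);	/* "get out of gaol free" */
	xfs_fs_log_dummy(mp);			/* existing periodic call */
```

As Dave argues, this hides the underlying bug behind short, hard-to-debug
stalls rather than fixing it.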

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-08  0:28               ` Dave Chinner
@ 2012-06-08 17:09                 ` Ben Myers
  0 siblings, 0 replies; 58+ messages in thread
From: Ben Myers @ 2012-06-08 17:09 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Brian Foster, xfs

On Fri, Jun 08, 2012 at 10:28:26AM +1000, Dave Chinner wrote:
...
> And because the inodes pin the tail of the AIL, there's not enough
> space in the log for the xfs_sync_worker to trigger a log force via
> the dummy transaction, and hence we deadlock.
>
> FWIW, there's an argument that can be made here for an unconditional
> log force in xfs_sync_worker() to provide a "get out gaol free" card
> here. 

No kidding!
http://oss.sgi.com/archives/xfs/2012-05/msg00312.html

> The thing is, I would prefer that the filesystems hang so that
> we find out about these issues and have to understand them and fix
> them. IMO, there is nothing harder to detect and debug than short
> duration, temporary stalls of the filesystem...

I agree.. such a patch is not for general consumption.  We want to fix the
actual problem, not work around it with a prod on a timer.  ;)

Regards,
	Ben


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-06 17:41           ` Mark Tinguely
@ 2012-06-11 20:42             ` Chris J Arges
  2012-06-11 23:53               ` Dave Chinner
  0 siblings, 1 reply; 58+ messages in thread
From: Chris J Arges @ 2012-06-11 20:42 UTC (permalink / raw)
  To: xfs




Mark Tinguely-3 wrote:
> 
> The perl script to recreate this problem is very similar to xfstest 273.
> I use that because it avoids all the filesystem mount/unmount that
> happen between the test 273 loops. You can build the log size that you
> want to test, create the directories and let it run until it hangs.
> 

I tested xfstest 273, and it looks like it exhibits a different issue from
the one the OP reported in this thread.
For example, if I run this test in a while [1] loop, I get the following
backtrace:

[16413.073946] XFS (sda5): Invalid block length (0xfffff48b) for buffer
[16413.073963] BUG: unable to handle kernel NULL pointer dereference at
0000000000000130
[16413.074274] IP: [<ffffffffa02bb870>] uuid_is_nil+0x10/0x50 [xfs]
[16413.074602] PGD 156f3b067 PUD 10bebb067 PMD 0 
[16413.074942] Oops: 0000 [#1] SMP 
[16413.075379] CPU 0 
[16413.075385] Modules linked in: xfs ppdev serio_raw snd_hda_codec_realtek
nouveau ttm drm_kms_helper drm i2c_algo_bit mxm_wmi wmi snd_hda_intel video
snd_hda_codec parport_pc snd_hwdep snd_pcm snd_timer snd soundcore
snd_page_alloc mac_hid lp parport usbhid floppy hid r8169 pata_jmicron
[16413.076830] 
[16413.077334] Pid: 22295, comm: mount Not tainted 3.2.0-23-generic
#36-Ubuntu Gigabyte Technology Co., Ltd. EP45-DS3L/EP45-DS3L
[16413.077881] RIP: 0010:[<ffffffffa02bb870>]  [<ffffffffa02bb870>]
uuid_is_nil+0x10/0x50 [xfs]
[16413.077924] RSP: 0018:ffff88010c96bab8  EFLAGS: 00010206
[16413.077924] RAX: 0000000000000000 RBX: ffff88010e734800 RCX:
0000000000000000
[16413.077924] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
0000000000000130
[16413.077924] RBP: ffff88010c96bab8 R08: 0000000000000000 R09:
00000000000422a2
[16413.077924] R10: 0000000000000002 R11: 0000000000000000 R12:
0000000000000130
[16413.077924] R13: 0000000000000000 R14: ffff880113f23900 R15:
ffff88010e758200
[16413.077924] FS:  00007f698c270800(0000) GS:ffff88015fc00000(0000)
knlGS:0000000000000000
[16413.077924] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[16413.077924] CR2: 0000000000000130 CR3: 0000000156ee9000 CR4:
00000000000006f0
[16413.077924] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[16413.077924] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[16413.077924] Process mount (pid: 22295, threadinfo ffff88010c96a000, task
ffff88015686dbc0)
[16413.077924] Stack:
[16413.077924]  ffff88010c96bad8 ffffffffa02ee027 fffffffffffff48a
0000000000000001
[16413.077924]  ffff88010c96bb48 ffffffffa02eec7f ffff88010e758200
0000000000000000
[16413.077924]  000000000c96bb70 ffff88010c96bb80 ffff88010c96bb48
0000000000000000
[16413.077924] Call Trace:
[16413.077924]  [<ffffffffa02ee027>] xlog_header_check_mount+0x27/0xb0 [xfs]
[16413.077924]  [<ffffffffa02eec7f>] xlog_find_verify_log_record+0x10f/0x200
[xfs]
[16413.077924]  [<ffffffffa02f0d44>] xlog_find_head+0x2f4/0x360 [xfs]
[16413.077924]  [<ffffffffa02f0de8>] xlog_find_tail+0x38/0x400 [xfs]
[16413.077924]  [<ffffffffa02f2a5e>] xlog_recover+0x1e/0x90 [xfs]
[16413.077924]  [<ffffffffa02fad79>] xfs_log_mount+0xa9/0x180 [xfs]
[16413.077924]  [<ffffffffa02f56d2>] xfs_mountfs+0x362/0x690 [xfs]
[16413.077924]  [<ffffffffa02b32d2>] ? xfs_mru_cache_create+0x162/0x190
[xfs]
[16413.077924]  [<ffffffffa02a96e0>] ? _xfs_filestream_pick_ag+0x1e0/0x1e0
[xfs]
[16413.077924]  [<ffffffffa02b54ee>] xfs_fs_fill_super+0x1de/0x290 [xfs]
[16413.077924]  [<ffffffff8117aa46>] mount_bdev+0x1c6/0x210
[16413.077924]  [<ffffffffa02b5310>] ? xfs_parseargs+0xbc0/0xbc0 [xfs]
[16413.077924]  [<ffffffffa02b3615>] xfs_fs_mount+0x15/0x20 [xfs]
[16413.077924]  [<ffffffff8117b5d3>] mount_fs+0x43/0x1b0
[16413.077924]  [<ffffffff81195e1a>] vfs_kern_mount+0x6a/0xc0
[16413.077924]  [<ffffffff81197324>] do_kern_mount+0x54/0x110
[16413.077924]  [<ffffffff81198e74>] do_mount+0x1a4/0x260
[16413.077924]  [<ffffffff81199350>] sys_mount+0x90/0xe0
[16413.077924]  [<ffffffff81664a82>] system_call_fastpath+0x16/0x1b
[16413.077924] Code: 08 66 c1 c2 08 c1 e0 10 0f b7 d2 09 d0 89 06 8b 07 0f
c8 89 46 04 c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 31 c0 48 85 ff 74 25
<80> 3f 00 75 20 48 8d 47 01 48 83 c7 10 0f 1f 00 0f b6 10 48 83 
[16413.077924] RIP  [<ffffffffa02bb870>] uuid_is_nil+0x10/0x50 [xfs]
[16413.077924]  RSP <ffff88010c96bab8>
[16413.077924] CR2: 0000000000000130
[16413.103203] ---[ end trace 6914a6803053df67 ]---

Did you get similar backtraces when looking at this test?
Thanks,
--chris j arges
-- 
View this message in context: http://old.nabble.com/Still-seeing-hangs-in-xlog_grant_log_space-tp33732886p33996217.html
Sent from the Xfs - General mailing list archive at Nabble.com.


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-05 23:54       ` Dave Chinner
  2012-06-06 13:40         ` Brian Foster
@ 2012-06-11 20:59         ` Mark Tinguely
  1 sibling, 0 replies; 58+ messages in thread
From: Mark Tinguely @ 2012-06-11 20:59 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Juerg Haefliger, bpm, Peter Watkins, xfs

On 06/05/12 18:54, Dave Chinner wrote:


>> Reading bug #922 I see your test case reproduces in recent kernels, so
>> there must be a newer problem also.
>
> Right, that's what we need to find - it appears to be a CIL
> stall/accounting leak, completely unrelated to all the other AIL/log
> space stalls that have been occurring. Last thing is that I was
> waiting for more information on the stall that mark T @ sgi was able
> to reproduce. I haven't heard anything from him since I asked for
> more information on May 23....
>
...

>
> Cheers,
>
> Dave.

I am using the test instructions/programs in the above bug report

  1) Linux 3.5rc1
  2) temporary band-aid of performing an xfs_log_force() before the
     xfs_fs_log_dummy() in the xfs_sync_worker().
   a) Even with a xfs_log_force(), it is still possible to hang the sync
      worker.
   b) or replacing the band-aid with Brian Foster's "xfs: check for stale
      inode before acquiring iflock on push" patch also resulted in a
      quick hard hang.
      i) side note: the printk routines in Linux 3.5rc1 have a "struct log"
        item that crash wants to use instead of XFS's "struct log".
  3) small log (576K)
   a) size of the log is important. The smaller the log, the easier it
      is to hang. 2+MB logs are much harder to hang.
  4) perl program that has multiple workers doing cp/rm.

Sorry Dave, I did not realize you were waiting for more information from 
me. I thought that fixing the sync worker was more important.
I was also hoping the empty-AIL hang was a result of the band-aid
xfs_log_force() and not a second problem.

I will use the above to try to recreate, and capture a core of, the hang
on Linux 3.5rc1 where the AIL is empty.



Thanks.

--Mark Tinguely.


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-11 20:42             ` Chris J Arges
@ 2012-06-11 23:53               ` Dave Chinner
  2012-06-12 13:28                 ` Chris J Arges
  0 siblings, 1 reply; 58+ messages in thread
From: Dave Chinner @ 2012-06-11 23:53 UTC (permalink / raw)
  To: Chris J Arges; +Cc: xfs

On Mon, Jun 11, 2012 at 01:42:31PM -0700, Chris J Arges wrote:
> Mark Tinguely-3 wrote:
> > 
> > The perl script to recreate this problem is very similar to xfstest 273.
> > I use that because it avoids all the filesystem mount/unmount that
> > happen between the test 273 loops. You can build the log size that you
> > want to test, create the directories and let it run until it hangs.
> > 
> 
>> I tested xfstest 273, and it looks like it exhibits a different issue from
>> the one the OP reported in this thread.
> For example, if I run this test in a while [1] loop, I get the following
> backtrace:
> 
> [16413.073946] XFS (sda5): Invalid block length (0xfffff48b) for buffer

That looks bad. How big is the log on this filesystem?

> [16413.073963] BUG: unable to handle kernel NULL pointer dereference at
> 0000000000000130
> [16413.074274] IP: [<ffffffffa02bb870>] uuid_is_nil+0x10/0x50 [xfs]

I can't really see how this function can get a null pointer
dereference. It checks the pointer passed in for being null before
doing anything, and otherwise it just increments and dereferences
the char pointer 16 times. I can't see how that results in a NULL
being dereferenced - I might just be blind though.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: Still seeing hangs in xlog_grant_log_space
  2012-06-11 23:53               ` Dave Chinner
@ 2012-06-12 13:28                 ` Chris J Arges
  0 siblings, 0 replies; 58+ messages in thread
From: Chris J Arges @ 2012-06-12 13:28 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On 06/11/2012 06:53 PM, Dave Chinner wrote:
> On Mon, Jun 11, 2012 at 01:42:31PM -0700, Chris J Arges wrote:
>> Mark Tinguely-3 wrote:
>>>
>>> The perl script to recreate this problem is very similar to xfstest 273.
>>> I use that because it avoids all the filesystem mount/unmount that
>>> happen between the test 273 loops. You can build the log size that you
>>> want to test, create the directories and let it run until it hangs.
>>>
>>
>> I tested xfstest 273, and it looks like it exhibits a different issue than
>> the one the OP reported in this thread.
>> For example, if I run this test in a while [1] loop, I get the following
>> backtrace:
>>
>> [16413.073946] XFS (sda5): Invalid block length (0xfffff48b) for buffer
> 
> That looks bad. How big is the log on this filesystem?
> 
The test and scratch partitions were created with the following commands:
mkfs.xfs -b size=1024 -l size=576b /dev/sda5
mkfs.xfs -b size=1024 -l size=576b /dev/sda6
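(For reference, the arithmetic those options imply — 576 log blocks of 1 KiB each, i.e. a very small log:)

```shell
# -b size=1024 and -l size=576b from the mkfs.xfs commands above:
bsize=1024
logblocks=576
echo $((bsize * logblocks))   # log size in bytes -> 589824 (576 KiB)
```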

>> [16413.073963] BUG: unable to handle kernel NULL pointer dereference at
>> 0000000000000130
>> [16413.074274] IP: [<ffffffffa02bb870>] uuid_is_nil+0x10/0x50 [xfs]
> 
> I can't really see how this function can get a null pointer
> dereference. It checks the pointer passed in for being null before
> doing anything, and otherwise it just increments and dereferences
> the char pointer 16 times. I can't see how that results in a NULL
> being dereferenced - I might just be blind though.
> 
Yeah, this is odd. What I'm really trying to determine is whether xfstest
273 produces a hang similar to Juerg's original bug. If you've already run
this test and it produced a hang, can you post the backtrace or let me know
whether it is similar?

Thanks,
--chris


> Cheers,
> 
> Dave.



end of thread

Thread overview: 58+ messages
2012-04-23 12:09 Still seeing hangs in xlog_grant_log_space Juerg Haefliger
2012-04-23 14:38 ` Dave Chinner
2012-04-23 15:33   ` Juerg Haefliger
2012-04-23 23:58     ` Dave Chinner
2012-04-24  8:55       ` Juerg Haefliger
2012-04-24 12:07         ` Dave Chinner
2012-04-24 18:26           ` Juerg Haefliger
2012-04-25 22:38             ` Dave Chinner
2012-04-26 12:37               ` Juerg Haefliger
2012-04-26 22:44                 ` Dave Chinner
2012-04-26 23:00                   ` Juerg Haefliger
2012-04-26 23:07                     ` Dave Chinner
2012-04-27  9:04                       ` Juerg Haefliger
2012-04-27 11:09                         ` Dave Chinner
2012-04-27 13:07                           ` Juerg Haefliger
2012-05-05  7:44                             ` Juerg Haefliger
2012-05-07 17:19                               ` Ben Myers
2012-05-09  7:54                                 ` Juerg Haefliger
2012-05-10 16:11                                   ` Chris J Arges
2012-05-10 21:53                                     ` Mark Tinguely
2012-05-16 18:42                                     ` Ben Myers
2012-05-16 19:03                                       ` Chris J Arges
2012-05-16 21:29                                         ` Mark Tinguely
2012-05-18 10:10                                           ` Dave Chinner
2012-05-18 14:42                                             ` Mark Tinguely
2012-05-22 22:59                                               ` Dave Chinner
2012-06-06 15:00                                             ` Chris J Arges
2012-06-07  0:49                                               ` Dave Chinner
2012-05-17 20:55                                       ` Chris J Arges
2012-05-18 16:53                                         ` Chris J Arges
2012-05-18 17:19                                   ` Ben Myers
2012-05-19  7:28                                     ` Juerg Haefliger
2012-05-21 17:11                                       ` Ben Myers
2012-05-24  5:45                                         ` Juerg Haefliger
2012-05-24 14:23                                           ` Ben Myers
2012-05-07 22:59                               ` Dave Chinner
2012-05-09  7:35                                 ` Dave Chinner
2012-05-09 21:07                                   ` Mark Tinguely
2012-05-10  2:10                                     ` Mark Tinguely
2012-05-18  9:37                                       ` Dave Chinner
2012-05-18  9:31                                     ` Dave Chinner
2012-05-24 20:18 ` Peter Watkins
2012-05-25  6:28   ` Juerg Haefliger
2012-05-25 17:03     ` Peter Watkins
2012-06-05 23:54       ` Dave Chinner
2012-06-06 13:40         ` Brian Foster
2012-06-06 17:41           ` Mark Tinguely
2012-06-11 20:42             ` Chris J Arges
2012-06-11 23:53               ` Dave Chinner
2012-06-12 13:28                 ` Chris J Arges
2012-06-06 22:03           ` Mark Tinguely
2012-06-06 23:04             ` Brian Foster
2012-06-07  1:35           ` Dave Chinner
2012-06-07 14:16             ` Brian Foster
2012-06-08  0:28               ` Dave Chinner
2012-06-08 17:09                 ` Ben Myers
2012-06-11 20:59         ` Mark Tinguely
2012-06-05 15:21   ` Chris J Arges
