* High CPU usage of RT threads...
@ 2018-03-15  7:23 Shyam Prasad N
  2018-03-15  7:29 ` Nikolay Borisov
  0 siblings, 1 reply; 8+ messages in thread
From: Shyam Prasad N @ 2018-03-15  7:23 UTC (permalink / raw)
  To: Btrfs BTRFS

Hi,

Our servers run daemons that are scheduled to run many real-time
threads. These threads serve the client nodes by performing I/O on a
set of disks, configured as DRBD pairs with disks on peer servers for
high availability of data. Btrfs is the filesystem configured on top
of DRBD.

While testing high availability under fairly high load, we have
noticed the following behaviour a couple of times: when the server
that was killed comes back up and the DRBD disks start syncing data
between the peers, a performance hit is generally expected on the
peer node that has taken over the service. However, the real-time
threads (mentioned above) on the active node end up hogging the CPUs.
As part of debugging the issue, we tried to force a core dump of
these threads using SIGABRT, but the threads were not responding to
any signals. Only after enabling real-time throttling (to limit
real-time CPU usage to 50%) and waiting for a few minutes were we
able to force a core dump. However, the core file generated didn't
have much useful info (I think it was a partial/corrupted dump).
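
A check along these lines, with <pid> as a placeholder for one of the
hogging threads, should show whether the signals are at least being
queued for the thread:

  grep -E 'State|SigPnd|ShdPnd|SigBlk|SigCgt' /proc/<pid>/status

A thread that never returns to user space will typically show the
signal as pending but not delivered.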

Based on the above behaviour (signals not being picked up), it looks
to me like all these threads were stuck inside some system call. And
since the majority of the system calls made by these threads are VFS
calls on btrfs, I suspect these threads may have been stuck in some
I/O; specifically, based on the CPU usage, in some spinlock (I'm open
to suggestions of other possibilities). This is the reason I'm
posting on this mailing list.
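
To narrow that down, something along these lines can show whether a
given thread is actually inside a system call (<tid> is a placeholder
for one of the hogging thread IDs):

  cat /proc/<tid>/syscall
  cat /proc/<tid>/wchan; echo

If /proc/<tid>/syscall reports "running" and wchan is 0, the thread
is most likely spinning rather than sleeping in a system call.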

Is there a known bug that might have caused this? The kernel version
we're using is 4.4.0.
If we go for a kernel upgrade, what are the chances of not seeing
this behaviour again?

Or is my analysis of the problem entirely wrong? My feeling is that
this may be some issue with Btrfs when it doesn't get a response from
DRBD quickly enough, because we have been using ext4 on top of DRBD
for a long time and have never seen such issues during HA tests
there.

-- 
-Shyam


* Re: High CPU usage of RT threads...
  2018-03-15  7:23 High CPU usage of RT threads Shyam Prasad N
@ 2018-03-15  7:29 ` Nikolay Borisov
  2018-03-19  7:13   ` Shyam Prasad N
  0 siblings, 1 reply; 8+ messages in thread
From: Nikolay Borisov @ 2018-03-15  7:29 UTC (permalink / raw)
  To: Shyam Prasad N, Btrfs BTRFS



On 15.03.2018 09:23, Shyam Prasad N wrote:
> Hi,
> 
> Our servers run some daemons that are scheduled to run many real time
> threads. These threads serve the client nodes by performing I/O on top
> of some set of disks, configured as DRBD pairs with disks on other
> peer servers for high availability of data. Btrfs is the filesystem
> that is configured on top of DRBD.
> 
> While testing high availability with fairly high load, we have noticed
> the following behaviour a couple of times: When the server which was
> killed comes back up and gets ready and DRBD disks start syncing the
> data between the disks, a performance hit is generally expected at the
> peer node which has taken over the service now. However, the real time
> threads (mentioned above) on the active node are hogging the CPUs. As
> a part of debugging the issue, we tried to force a core dump on these
> threads by using a SIGABRT. However, these threads were not responding
> to any signals. Only after using real-time throttling (to reduce real
> time CPU usage to 50%), and waiting around for a few minutes, we were
> able to force a core dump. However, the corefile generated didn't have
> much useful info (I think it was a partial/corrupted core dump).
> 
> Based on the above behaviour, (signals not being picked up), it looks
> to me like all these threads were likely stuck inside some system
> call. And since majority of the system calls by these threads are VFS
> calls on btrfs, I feel that these threads may have been stuck in some
> I/O. Specifically, based on the CPU usage, in some spinlock (I'm open
> to suggestions of other possibilities). And this is the reason I'm
> posting on this mailing list.

When you have a bunch of those threads, get a dump of the stacks of
all sleeping tasks with "echo w > /proc/sysrq-trigger".
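
A minimal capture sequence (run as root; the exact dmesg options may
differ on your distro) would be something like:

  echo 1 > /proc/sys/kernel/sysrq   # make sure sysrq is fully enabled
  echo w > /proc/sysrq-trigger
  dmesg -T | tail -n 500 > blocked-tasks.txt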

> 
> Is there a known bug which might have caused this? Kernel version
> we're using is 4.4.0.

This is a rather old kernel; you should at least be using the latest
4.4.y stable kernel. Btrfs is a moving target and a lot of
improvements are made every release, so I'd suggest trying 4.14 on at
least one offending machine.

> If we go for a kernel upgrade, what are the chances of not seeing this
> behaviour again?
> 
> Or is my analysis of the problem entirely wrong? My feeling is that
> this maybe some issue with using Btrfs when it doesn't get a response
> from DRBD quickly enough.

Feelings don't count for anything. Next time this happens, extract a
stack trace from the offending threads, i.e. by sampling /proc/[pid
of hogging thread]/stack. Furthermore, if we assume that btrfs is
indeed not getting responses fast enough, this means most clients
should really be stuck in I/O sleep and not doing any processing.
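
A quick way to sample all threads of a process at once (<pid> is a
placeholder) is something along the lines of:

  for s in /proc/<pid>/task/*/stack; do echo "== $s"; cat "$s"; done

Run it a few times; if the same frames keep showing up, that is a
strong hint about where the threads are stuck.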


> Because we have been using ext4 on top of DRBD for a long time, and
> have never seen such issues during HA tests there.
> 


* Re: High CPU usage of RT threads...
  2018-03-15  7:29 ` Nikolay Borisov
@ 2018-03-19  7:13   ` Shyam Prasad N
  2018-03-19  7:15     ` Nikolay Borisov
  0 siblings, 1 reply; 8+ messages in thread
From: Shyam Prasad N @ 2018-03-19  7:13 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Btrfs BTRFS

Hi Nikolay,

Thanks for your reply on this.

Checked the stack trace for many of the stuck threads. Looks like all
of them are stuck in this loop...
[<ffffffff810031f2>] exit_to_usermode_loop+0x72/0xd0
[<ffffffff81003c16>] prepare_exit_to_usermode+0x26/0x30
[<ffffffff818390e5>] retint_user+0x8/0x10
[<ffffffffffffffff>] 0xffffffffffffffff

Seems like it is stuck in a tight loop in exit_to_usermode_loop.
FWIW, we started seeing this issue with the nodatacow btrfs mount
option; previously we were running with the default datacow option.
However, this also coincides with the fairly heavy unlink load we've
been putting the system under.
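
In case it matters, the mount options that are actually in effect
can be confirmed with something like:

  findmnt -t btrfs -o TARGET,OPTIONS

or by grepping for btrfs in /proc/mounts.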

Please let me know if there is anything else you can think of, based
on the above data.

Regards,
Shyam


On Thu, Mar 15, 2018 at 12:59 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>
>
> On 15.03.2018 09:23, Shyam Prasad N wrote:
>> Hi,
>>
>> Our servers run some daemons that are scheduled to run many real time
>> threads. These threads serve the client nodes by performing I/O on top
>> of some set of disks, configured as DRBD pairs with disks on other
>> peer servers for high availability of data. Btrfs is the filesystem
>> that is configured on top of DRBD.
>>
>> While testing high availability with fairly high load, we have noticed
>> the following behaviour a couple of times: When the server which was
>> killed comes back up and gets ready and DRBD disks start syncing the
>> data between the disks, a performance hit is generally expected at the
>> peer node which has taken over the service now. However, the real time
>> threads (mentioned above) on the active node are hogging the CPUs. As
>> a part of debugging the issue, we tried to force a core dump on these
>> threads by using a SIGABRT. However, these threads were not responding
>> to any signals. Only after using real-time throttling (to reduce real
>> time CPU usage to 50%), and waiting around for a few minutes, we were
>> able to force a core dump. However, the corefile generated didn't have
>> much useful info (I think it was a partial/corrupted core dump).
>>
>> Based on the above behaviour, (signals not being picked up), it looks
>> to me like all these threads were likely stuck inside some system
>> call. And since majority of the system calls by these threads are VFS
>> calls on btrfs, I feel that these threads may have been stuck in some
>> I/O. Specifically, based on the CPU usage, in some spinlock (I'm open
>> to suggestions of other possibilities). And this is the reason I'm
>> posting on this mailing list.
>
> When you have a bunch of those threads get a dump of the stacks of all
> sleeping tasks by "echo w > /proc/sysrq-trigger" .
>
>>
>> Is there a known bug which might have caused this? Kernel version
>> we're using is 4.4.0.
>
> This is rather old kernel, you should at least be using latest 4.4.y
> stable kernel. BTRFS is a moving target and there are a lot of
> improvements made every release. So I'd suggest to try 4.14 at least on
> one offending machine.
>
>> If we go for a kernel upgrade, what are the chances of not seeing this
>> behaviour again?
>>
>> Or is my analysis of the problem entirely wrong? My feeling is that
>> this maybe some issue with using Btrfs when it doesn't get a response
>> from DRBD quickly enough.
>
> Feelings don't count for anything. Next time this happens extract
> stacktrace from the offending threads i.e. smapling /proc/[pid of
> hogging thread]/stack. Furthermore, if we assume that btrfs is indeed
> not getting responses fast enough this means most clients should really
> be stuck in io sleep and not doing any processing.
>
>
>> Because we have been using ext4 on top of DRBD for a long time, and
>> have never seen such issues during HA tests there.
>>



-- 
-Shyam


* Re: High CPU usage of RT threads...
  2018-03-19  7:13   ` Shyam Prasad N
@ 2018-03-19  7:15     ` Nikolay Borisov
  2018-03-19 11:03       ` Shyam Prasad N
       [not found]       ` <CANT5p=pD+6L76-fBN1ax=UYsqyFzh+PQcc0mK9C7poZi7vNVRg@mail.gmail.com>
  0 siblings, 2 replies; 8+ messages in thread
From: Nikolay Borisov @ 2018-03-19  7:15 UTC (permalink / raw)
  To: Shyam Prasad N; +Cc: Btrfs BTRFS



On 19.03.2018 09:13, Shyam Prasad N wrote:
> Hi Nikolay,
> 
> Thanks for your reply on this.
> 
> Checked the stack trace for many of the stuck threads. Looks like all
> of them are stuck in this loop...
> [<ffffffff810031f2>] exit_to_usermode_loop+0x72/0xd0
> [<ffffffff81003c16>] prepare_exit_to_usermode+0x26/0x30
> [<ffffffff818390e5>] retint_user+0x8/0x10
> [<ffffffffffffffff>] 0xffffffffffffffff

Well, this doesn't imply btrfs at all.

How about the _full_ output of:

echo w > /proc/sysrq-trigger

Perhaps there is a lot of load in the workqueues?
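
A rough way to check that (the output format may vary with your ps
version) is to look for kworker threads near the top of:

  ps -eo pid,pcpu,comm --sort=-pcpu | head -n 20

and, for any busy kworker, to sample its /proc/<pid>/stack as above.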

> 
> Seems like it is stuck in a tight loop in exit_to_usermode_loop.
> FWIW, we started seeing this issue with nodatacow btrfs mount option.
> Previously we were running with default option of datacow.
> However, this also coincides with fairly heavy unlink load that we've
> been putting the system under.
> 
> Please let me know if there is anything else you can think of, based
> on the above data.
> 
> Regards,
> Shyam
> 
> 
> On Thu, Mar 15, 2018 at 12:59 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>>
>>
>> On 15.03.2018 09:23, Shyam Prasad N wrote:
>>> Hi,
>>>
>>> Our servers run some daemons that are scheduled to run many real time
>>> threads. These threads serve the client nodes by performing I/O on top
>>> of some set of disks, configured as DRBD pairs with disks on other
>>> peer servers for high availability of data. Btrfs is the filesystem
>>> that is configured on top of DRBD.
>>>
>>> While testing high availability with fairly high load, we have noticed
>>> the following behaviour a couple of times: When the server which was
>>> killed comes back up and gets ready and DRBD disks start syncing the
>>> data between the disks, a performance hit is generally expected at the
>>> peer node which has taken over the service now. However, the real time
>>> threads (mentioned above) on the active node are hogging the CPUs. As
>>> a part of debugging the issue, we tried to force a core dump on these
>>> threads by using a SIGABRT. However, these threads were not responding
>>> to any signals. Only after using real-time throttling (to reduce real
>>> time CPU usage to 50%), and waiting around for a few minutes, we were
>>> able to force a core dump. However, the corefile generated didn't have
>>> much useful info (I think it was a partial/corrupted core dump).
>>>
>>> Based on the above behaviour, (signals not being picked up), it looks
>>> to me like all these threads were likely stuck inside some system
>>> call. And since majority of the system calls by these threads are VFS
>>> calls on btrfs, I feel that these threads may have been stuck in some
>>> I/O. Specifically, based on the CPU usage, in some spinlock (I'm open
>>> to suggestions of other possibilities). And this is the reason I'm
>>> posting on this mailing list.
>>
>> When you have a bunch of those threads get a dump of the stacks of all
>> sleeping tasks by "echo w > /proc/sysrq-trigger" .
>>
>>>
>>> Is there a known bug which might have caused this? Kernel version
>>> we're using is 4.4.0.
>>
>> This is rather old kernel, you should at least be using latest 4.4.y
>> stable kernel. BTRFS is a moving target and there are a lot of
>> improvements made every release. So I'd suggest to try 4.14 at least on
>> one offending machine.
>>
>>> If we go for a kernel upgrade, what are the chances of not seeing this
>>> behaviour again?
>>>
>>> Or is my analysis of the problem entirely wrong? My feeling is that
>>> this maybe some issue with using Btrfs when it doesn't get a response
>>> from DRBD quickly enough.
>>
>> Feelings don't count for anything. Next time this happens extract
>> stacktrace from the offending threads i.e. smapling /proc/[pid of
>> hogging thread]/stack. Furthermore, if we assume that btrfs is indeed
>> not getting responses fast enough this means most clients should really
>> be stuck in io sleep and not doing any processing.
>>
>>
>>> Because we have been using ext4 on top of DRBD for a long time, and
>>> have never seen such issues during HA tests there.
>>>
> 
> 
> 


* Re: High CPU usage of RT threads...
  2018-03-19  7:15     ` Nikolay Borisov
@ 2018-03-19 11:03       ` Shyam Prasad N
       [not found]       ` <CANT5p=pD+6L76-fBN1ax=UYsqyFzh+PQcc0mK9C7poZi7vNVRg@mail.gmail.com>
  1 sibling, 0 replies; 8+ messages in thread
From: Shyam Prasad N @ 2018-03-19 11:03 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Btrfs BTRFS

[-- Attachment #1: Type: text/plain, Size: 4316 bytes --]

Hi,

Attaching the sysrq-trigger output.

Regards,
Shyam

On Mon, Mar 19, 2018 at 12:45 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>
>
> On 19.03.2018 09:13, Shyam Prasad N wrote:
>> Hi Nikolay,
>>
>> Thanks for your reply on this.
>>
>> Checked the stack trace for many of the stuck threads. Looks like all
>> of them are stuck in this loop...
>> [<ffffffff810031f2>] exit_to_usermode_loop+0x72/0xd0
>> [<ffffffff81003c16>] prepare_exit_to_usermode+0x26/0x30
>> [<ffffffff818390e5>] retint_user+0x8/0x10
>> [<ffffffffffffffff>] 0xffffffffffffffff
>
> Well, this doesn't imply btrfs at all.
>
> How about the _full_ output of :
>
> echo w > /proq/sysrq-trigger
>
> Perhaps there is a lot of load in workqueues?
>
>>
>> Seems like it is stuck in a tight loop in exit_to_usermode_loop.
>> FWIW, we started seeing this issue with nodatacow btrfs mount option.
>> Previously we were running with default option of datacow.
>> However, this also coincides with fairly heavy unlink load that we've
>> been putting the system under.
>>
>> Please let me know if there is anything else you can think of, based
>> on the above data.
>>
>> Regards,
>> Shyam
>>
>>
>> On Thu, Mar 15, 2018 at 12:59 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>>>
>>>
>>> On 15.03.2018 09:23, Shyam Prasad N wrote:
>>>> Hi,
>>>>
>>>> Our servers run some daemons that are scheduled to run many real time
>>>> threads. These threads serve the client nodes by performing I/O on top
>>>> of some set of disks, configured as DRBD pairs with disks on other
>>>> peer servers for high availability of data. Btrfs is the filesystem
>>>> that is configured on top of DRBD.
>>>>
>>>> While testing high availability with fairly high load, we have noticed
>>>> the following behaviour a couple of times: When the server which was
>>>> killed comes back up and gets ready and DRBD disks start syncing the
>>>> data between the disks, a performance hit is generally expected at the
>>>> peer node which has taken over the service now. However, the real time
>>>> threads (mentioned above) on the active node are hogging the CPUs. As
>>>> a part of debugging the issue, we tried to force a core dump on these
>>>> threads by using a SIGABRT. However, these threads were not responding
>>>> to any signals. Only after using real-time throttling (to reduce real
>>>> time CPU usage to 50%), and waiting around for a few minutes, we were
>>>> able to force a core dump. However, the corefile generated didn't have
>>>> much useful info (I think it was a partial/corrupted core dump).
>>>>
>>>> Based on the above behaviour, (signals not being picked up), it looks
>>>> to me like all these threads were likely stuck inside some system
>>>> call. And since majority of the system calls by these threads are VFS
>>>> calls on btrfs, I feel that these threads may have been stuck in some
>>>> I/O. Specifically, based on the CPU usage, in some spinlock (I'm open
>>>> to suggestions of other possibilities). And this is the reason I'm
>>>> posting on this mailing list.
>>>
>>> When you have a bunch of those threads get a dump of the stacks of all
>>> sleeping tasks by "echo w > /proc/sysrq-trigger" .
>>>
>>>>
>>>> Is there a known bug which might have caused this? Kernel version
>>>> we're using is 4.4.0.
>>>
>>> This is rather old kernel, you should at least be using latest 4.4.y
>>> stable kernel. BTRFS is a moving target and there are a lot of
>>> improvements made every release. So I'd suggest to try 4.14 at least on
>>> one offending machine.
>>>
>>>> If we go for a kernel upgrade, what are the chances of not seeing this
>>>> behaviour again?
>>>>
>>>> Or is my analysis of the problem entirely wrong? My feeling is that
>>>> this maybe some issue with using Btrfs when it doesn't get a response
>>>> from DRBD quickly enough.
>>>
>>> Feelings don't count for anything. Next time this happens extract
>>> stacktrace from the offending threads i.e. smapling /proc/[pid of
>>> hogging thread]/stack. Furthermore, if we assume that btrfs is indeed
>>> not getting responses fast enough this means most clients should really
>>> be stuck in io sleep and not doing any processing.
>>>
>>>
>>>> Because we have been using ext4 on top of DRBD for a long time, and
>>>> have never seen such issues during HA tests there.
>>>>
>>
>>
>>



-- 
-Shyam

[-- Attachment #2: sysrq-trigger.txt --]
[-- Type: text/plain, Size: 20393 bytes --]


[Mon Mar 19 00:38:16 2018] sysrq: SysRq : Show backtrace of all active CPUs
[Mon Mar 19 00:38:16 2018] Sending NMI to all CPUs:
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 0
[Mon Mar 19 00:38:16 2018] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffffffff81e11500 ti: ffffffff81e00000 task.ti: ffffffff81e00000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff810645d6>]  [<ffffffff810645d6>] native_safe_halt+0x6/0x10
[Mon Mar 19 00:38:16 2018] RSP: 0018:ffffffff81e03e98  EFLAGS: 00000246
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000000 RBX: ffffffff81f38200 RCX: 0000000000000000
[Mon Mar 19 00:38:16 2018] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 19 00:38:16 2018] RBP: ffffffff81e03e98 R08: ffff8807eea0dd80 R09: 0000000000000000
[Mon Mar 19 00:38:16 2018] R10: 00000001068f7f8b R11: 0000000000000004 R12: 0000000000000000
[Mon Mar 19 00:38:16 2018] R13: 0000000000000000 R14: 0000000000000000 R15: ffffffff81e00000
[Mon Mar 19 00:38:16 2018] FS:  0000000000000000(0000) GS:ffff8807eea00000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 00007fe2bb4900e8 CR3: 00000007d788a000 CR4: 00000000001406f0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  ffffffff81e03eb8 ffffffff81038e1e ffffffff81f38200 ffffffff81e04000
[Mon Mar 19 00:38:16 2018]  ffffffff81e03ec8 ffffffff8103962f ffffffff81e03ed8 ffffffff810c44da
[Mon Mar 19 00:38:16 2018]  ffffffff81e03f30 ffffffff810c4841 ffffffff81e00000 ffffffff81e04000
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81038e1e>] default_idle+0x1e/0xe0
[Mon Mar 19 00:38:16 2018]  [<ffffffff8103962f>] arch_cpu_idle+0xf/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c44da>] default_idle_call+0x2a/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c4841>] cpu_startup_entry+0x2f1/0x350
[Mon Mar 19 00:38:16 2018]  [<ffffffff8182bf3c>] rest_init+0x7c/0x80
[Mon Mar 19 00:38:16 2018]  [<ffffffff81f5d011>] start_kernel+0x481/0x4a2
[Mon Mar 19 00:38:16 2018]  [<ffffffff81f5c120>] ? early_idt_handler_array+0x120/0x120
[Mon Mar 19 00:38:16 2018]  [<ffffffff81f5c339>] x86_64_start_reservations+0x2a/0x2c
[Mon Mar 19 00:38:16 2018]  [<ffffffff81f5c485>] x86_64_start_kernel+0x14a/0x16d
[Mon Mar 19 00:38:16 2018] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 1
[Mon Mar 19 00:38:16 2018] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffff8807eb368e00 ti: ffff8807eb374000 task.ti: ffff8807eb374000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff810645d6>]  [<ffffffff810645d6>] native_safe_halt+0x6/0x10
[Mon Mar 19 00:38:16 2018] RSP: 0018:ffff8807eb377e90  EFLAGS: 00000246
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000000 RBX: ffffffff81f38200 RCX: 0000000000000000
[Mon Mar 19 00:38:16 2018] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 19 00:38:16 2018] RBP: ffff8807eb377e90 R08: ffff8807eea4dd80 R09: 0000000000000000
[Mon Mar 19 00:38:16 2018] R10: 00000001068f7fc2 R11: 0000000000014800 R12: 0000000000000001
[Mon Mar 19 00:38:16 2018] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8807eb374000
[Mon Mar 19 00:38:16 2018] FS:  0000000000000000(0000) GS:ffff8807eea40000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 00007faae648e624 CR3: 0000000119f04000 CR4: 00000000001406e0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  ffff8807eb377eb0 ffffffff81038e1e ffffffff81f38200 ffff8807eb378000
[Mon Mar 19 00:38:16 2018]  ffff8807eb377ec0 ffffffff8103962f ffff8807eb377ed0 ffffffff810c44da
[Mon Mar 19 00:38:16 2018]  ffff8807eb377f28 ffffffff810c4841 ffff8807eb374000 ffff8807eb378000
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81038e1e>] default_idle+0x1e/0xe0
[Mon Mar 19 00:38:16 2018]  [<ffffffff8103962f>] arch_cpu_idle+0xf/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c44da>] default_idle_call+0x2a/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c4841>] cpu_startup_entry+0x2f1/0x350
[Mon Mar 19 00:38:16 2018]  [<ffffffff81051784>] start_secondary+0x154/0x190
[Mon Mar 19 00:38:16 2018] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 2
[Mon Mar 19 00:38:16 2018] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffff8807eb369c00 ti: ffff8807eb378000 task.ti: ffff8807eb378000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff810645d6>]  [<ffffffff810645d6>] native_safe_halt+0x6/0x10
[Mon Mar 19 00:38:16 2018] RSP: 0018:ffff8807eb37be90  EFLAGS: 00000246
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000000 RBX: ffffffff81f38200 RCX: 0000000000000000
[Mon Mar 19 00:38:16 2018] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 19 00:38:16 2018] RBP: ffff8807eb37be90 R08: ffff8807eea8dd80 R09: 0000000000000000
[Mon Mar 19 00:38:16 2018] R10: 00000001068f7f88 R11: 0000000000006000 R12: 0000000000000002
[Mon Mar 19 00:38:16 2018] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8807eb378000
[Mon Mar 19 00:38:16 2018] FS:  0000000000000000(0000) GS:ffff8807eea80000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 00007fda7e223214 CR3: 00000007d52b3000 CR4: 00000000001406e0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  ffff8807eb37beb0 ffffffff81038e1e ffffffff81f38200 ffff8807eb37c000
[Mon Mar 19 00:38:16 2018]  ffff8807eb37bec0 ffffffff8103962f ffff8807eb37bed0 ffffffff810c44da
[Mon Mar 19 00:38:16 2018]  ffff8807eb37bf28 ffffffff810c4841 ffff8807eb378000 ffff8807eb37c000
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81038e1e>] default_idle+0x1e/0xe0
[Mon Mar 19 00:38:16 2018]  [<ffffffff8103962f>] arch_cpu_idle+0xf/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c44da>] default_idle_call+0x2a/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c4841>] cpu_startup_entry+0x2f1/0x350
[Mon Mar 19 00:38:16 2018]  [<ffffffff81051784>] start_secondary+0x154/0x190
[Mon Mar 19 00:38:16 2018] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 3
[Mon Mar 19 00:38:16 2018] CPU: 3 PID: 0 Comm: swapper/3 Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffff8807eb36aa00 ti: ffff8807eb37c000 task.ti: ffff8807eb37c000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff810645d6>]  [<ffffffff810645d6>] native_safe_halt+0x6/0x10
[Mon Mar 19 00:38:16 2018] RSP: 0000:ffff8807eb37fe90  EFLAGS: 00000246
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000000 RBX: ffffffff81f38200 RCX: 0000000000000000
[Mon Mar 19 00:38:16 2018] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 19 00:38:16 2018] RBP: ffff8807eb37fe90 R08: ffff8807eeacdd80 R09: 0000000000000000
[Mon Mar 19 00:38:16 2018] R10: 000000000000037c R11: 00000000000003ae R12: 0000000000000003
[Mon Mar 19 00:38:16 2018] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8807eb37c000
[Mon Mar 19 00:38:16 2018] FS:  0000000000000000(0000) GS:ffff8807eeac0000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 00007f0fc6ed0050 CR3: 000000017057a000 CR4: 00000000001406e0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  ffff8807eb37feb0 ffffffff81038e1e ffffffff81f38200 ffff8807eb380000
[Mon Mar 19 00:38:16 2018]  ffff8807eb37fec0 ffffffff8103962f ffff8807eb37fed0 ffffffff810c44da
[Mon Mar 19 00:38:16 2018]  ffff8807eb37ff28 ffffffff810c4841 ffff8807eb37c000 ffff8807eb380000
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81038e1e>] default_idle+0x1e/0xe0
[Mon Mar 19 00:38:16 2018]  [<ffffffff8103962f>] arch_cpu_idle+0xf/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c44da>] default_idle_call+0x2a/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c4841>] cpu_startup_entry+0x2f1/0x350
[Mon Mar 19 00:38:16 2018]  [<ffffffff81051784>] start_secondary+0x154/0x190
[Mon Mar 19 00:38:16 2018] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 4
[Mon Mar 19 00:38:16 2018] CPU: 4 PID: 0 Comm: swapper/4 Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffff8807eb36b800 ti: ffff8807eb380000 task.ti: ffff8807eb380000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff810645d6>]  [<ffffffff810645d6>] native_safe_halt+0x6/0x10
[Mon Mar 19 00:38:16 2018] RSP: 0000:ffff8807eb383e90  EFLAGS: 00000246
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000000 RBX: ffffffff81f38200 RCX: 0000000000000000
[Mon Mar 19 00:38:16 2018] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 19 00:38:16 2018] RBP: ffff8807eb383e90 R08: ffff8807eeb0dd80 R09: 0000000000000000
[Mon Mar 19 00:38:16 2018] R10: 00000001068f7f45 R11: 0000000000000004 R12: 0000000000000004
[Mon Mar 19 00:38:16 2018] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8807eb380000
[Mon Mar 19 00:38:16 2018] FS:  0000000000000000(0000) GS:ffff8807eeb00000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 00000000013eec20 CR3: 000000017057a000 CR4: 00000000001406e0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  ffff8807eb383eb0 ffffffff81038e1e ffffffff81f38200 ffff8807eb384000
[Mon Mar 19 00:38:16 2018]  ffff8807eb383ec0 ffffffff8103962f ffff8807eb383ed0 ffffffff810c44da
[Mon Mar 19 00:38:16 2018]  ffff8807eb383f28 ffffffff810c4841 ffff8807eb380000 ffff8807eb384000
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81038e1e>] default_idle+0x1e/0xe0
[Mon Mar 19 00:38:16 2018]  [<ffffffff8103962f>] arch_cpu_idle+0xf/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c44da>] default_idle_call+0x2a/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c4841>] cpu_startup_entry+0x2f1/0x350
[Mon Mar 19 00:38:16 2018]  [<ffffffff81051784>] start_secondary+0x154/0x190
[Mon Mar 19 00:38:16 2018] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 5
[Mon Mar 19 00:38:16 2018] CPU: 5 PID: 0 Comm: swapper/5 Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffff8807eb36c600 ti: ffff8807eb384000 task.ti: ffff8807eb384000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff810645d6>]  [<ffffffff810645d6>] native_safe_halt+0x6/0x10
[Mon Mar 19 00:38:16 2018] RSP: 0018:ffff8807eb387e90  EFLAGS: 00000246
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000000 RBX: ffffffff81f38200 RCX: 0000000000000000
[Mon Mar 19 00:38:16 2018] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 19 00:38:16 2018] RBP: ffff8807eb387e90 R08: ffff8807eeb4dd80 R09: 0000000000000000
[Mon Mar 19 00:38:16 2018] R10: 00000001068f7fb4 R11: 0000000000008400 R12: 0000000000000005
[Mon Mar 19 00:38:16 2018] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8807eb384000
[Mon Mar 19 00:38:16 2018] FS:  0000000000000000(0000) GS:ffff8807eeb40000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 00007f4b9927a000 CR3: 00000007e8764000 CR4: 00000000001406e0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  ffff8807eb387eb0 ffffffff81038e1e ffffffff81f38200 ffff8807eb388000
[Mon Mar 19 00:38:16 2018]  ffff8807eb387ec0 ffffffff8103962f ffff8807eb387ed0 ffffffff810c44da
[Mon Mar 19 00:38:16 2018]  ffff8807eb387f28 ffffffff810c4841 ffff8807eb384000 ffff8807eb388000
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81038e1e>] default_idle+0x1e/0xe0
[Mon Mar 19 00:38:16 2018]  [<ffffffff8103962f>] arch_cpu_idle+0xf/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c44da>] default_idle_call+0x2a/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c4841>] cpu_startup_entry+0x2f1/0x350
[Mon Mar 19 00:38:16 2018]  [<ffffffff81051784>] start_secondary+0x154/0x190
[Mon Mar 19 00:38:16 2018] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 6
[Mon Mar 19 00:38:16 2018] CPU: 6 PID: 0 Comm: swapper/6 Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffff8807eb36d400 ti: ffff8807eb388000 task.ti: ffff8807eb388000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff810645d6>]  [<ffffffff810645d6>] native_safe_halt+0x6/0x10
[Mon Mar 19 00:38:16 2018] RSP: 0000:ffff8807eb38be90  EFLAGS: 00000246
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000000 RBX: ffffffff81f38200 RCX: 0000000000000000
[Mon Mar 19 00:38:16 2018] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 19 00:38:16 2018] RBP: ffff8807eb38be90 R08: ffff8807eeb8dd80 R09: 0000000000000000
[Mon Mar 19 00:38:16 2018] R10: 0000000000000000 R11: ffff8807eacb20c0 R12: 0000000000000006
[Mon Mar 19 00:38:16 2018] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8807eb388000
[Mon Mar 19 00:38:16 2018] FS:  0000000000000000(0000) GS:ffff8807eeb80000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 0000562173636238 CR3: 000000017057a000 CR4: 00000000001406e0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  ffff8807eb38beb0 ffffffff81038e1e ffffffff81f38200 ffff8807eb38c000
[Mon Mar 19 00:38:16 2018]  ffff8807eb38bec0 ffffffff8103962f ffff8807eb38bed0 ffffffff810c44da
[Mon Mar 19 00:38:16 2018]  ffff8807eb38bf28 ffffffff810c4841 ffff8807eb388000 ffff8807eb38c000
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81038e1e>] default_idle+0x1e/0xe0
[Mon Mar 19 00:38:16 2018]  [<ffffffff8103962f>] arch_cpu_idle+0xf/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c44da>] default_idle_call+0x2a/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff810c4841>] cpu_startup_entry+0x2f1/0x350
[Mon Mar 19 00:38:16 2018]  [<ffffffff81051784>] start_secondary+0x154/0x190
[Mon Mar 19 00:38:16 2018] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
[Mon Mar 19 00:38:16 2018] NMI backtrace for cpu 7
[Mon Mar 19 00:38:16 2018] CPU: 7 PID: 6286 Comm: bash Not tainted 4.4.0-62-generic #83-Ubuntu
[Mon Mar 19 00:38:16 2018] Hardware name: Xen HVM domU, BIOS 4.7.1-1.9 02/16/2017
[Mon Mar 19 00:38:16 2018] task: ffff8800343b0e00 ti: ffff880169520000 task.ti: ffff880169520000
[Mon Mar 19 00:38:16 2018] RIP: 0010:[<ffffffff81054caf>]  [<ffffffff81054caf>] default_send_IPI_mask_sequence_phys+0xaf/0xe0
[Mon Mar 19 00:38:16 2018] RSP: 0018:ffff880169523d88  EFLAGS: 00000046
[Mon Mar 19 00:38:16 2018] RAX: 0000000000000400 RBX: 000000000000a1f0 RCX: 0000000000000007
[Mon Mar 19 00:38:16 2018] RDX: 000000000000000e RSI: 0000000000000200 RDI: 0000000000000300
[Mon Mar 19 00:38:16 2018] RBP: ffff880169523dc0 R08: 0000000000000000 R09: 00000000000000ff
[Mon Mar 19 00:38:16 2018] R10: 0000000000000001 R11: 0000000000044936 R12: ffffffff81f3d120
[Mon Mar 19 00:38:16 2018] R13: 0000000000000400 R14: 0000000000000002 R15: 0000000000000007
[Mon Mar 19 00:38:16 2018] FS:  00007f32e4963700(0000) GS:ffff8807eebc0000(0000) knlGS:0000000000000000
[Mon Mar 19 00:38:16 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 19 00:38:16 2018] CR2: 0000000001dcda68 CR3: 0000000675116000 CR4: 00000000001406e0
[Mon Mar 19 00:38:16 2018] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 19 00:38:16 2018] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 19 00:38:16 2018] Stack:
[Mon Mar 19 00:38:16 2018]  0000000000000286 0000000e69523df0 00000000000112a0 0000000000000001
[Mon Mar 19 00:38:16 2018]  ffffffff81055dd0 ffffffff81ebc0c0 0000000000000000 ffff880169523dd0
[Mon Mar 19 00:38:16 2018]  ffffffff8105a68e ffff880169523de0 ffffffff81055deb ffff880169523e28
[Mon Mar 19 00:38:16 2018] Call Trace:
[Mon Mar 19 00:38:16 2018]  [<ffffffff81055dd0>] ? irq_force_complete_move+0x150/0x150
[Mon Mar 19 00:38:16 2018]  [<ffffffff8105a68e>] physflat_send_IPI_mask+0xe/0x10
[Mon Mar 19 00:38:16 2018]  [<ffffffff81055deb>] nmi_raise_cpu_backtrace+0x1b/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff813fc706>] nmi_trigger_all_cpu_backtrace+0x2f6/0x300
[Mon Mar 19 00:38:16 2018]  [<ffffffff81055e49>] arch_trigger_all_cpu_backtrace+0x19/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff814fcd33>] sysrq_handle_showallcpus+0x13/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff814fd3da>] __handle_sysrq+0xea/0x140
[Mon Mar 19 00:38:16 2018]  [<ffffffff814fd85f>] write_sysrq_trigger+0x2f/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff8127bee2>] proc_reg_write+0x42/0x70
[Mon Mar 19 00:38:16 2018]  [<ffffffff8120e168>] __vfs_write+0x18/0x40
[Mon Mar 19 00:38:16 2018]  [<ffffffff8120eaf9>] vfs_write+0xa9/0x1a0
[Mon Mar 19 00:38:16 2018]  [<ffffffff810caeb1>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
[Mon Mar 19 00:38:16 2018]  [<ffffffff8120f7b5>] SyS_write+0x55/0xc0
[Mon Mar 19 00:38:16 2018]  [<ffffffff818385f2>] entry_SYSCALL_64_fastpath+0x16/0x71
[Mon Mar 19 00:38:16 2018] Code: 90 8b 0c 25 00 53 5f ff 80 e5 10 75 f2 89 d0 c1 e0 18 89 04 25 10 53 5f ff 41 83 fe 02 44 89 e8 41 0f 45 c6 89 04 25 00 53 5f ff <eb> 91 48 8b 05 70 31 ee 00 89 55 d4 ff 90 10 01 00 00 8b 55 d4




* Re: High CPU usage of RT threads...
       [not found]         ` <3e96b978-a14e-7e48-a327-a73bc3004256@suse.com>
@ 2018-03-19 11:48           ` Shyam Prasad N
  2018-03-19 11:51             ` Nikolay Borisov
  0 siblings, 1 reply; 8+ messages in thread
From: Shyam Prasad N @ 2018-03-19 11:48 UTC (permalink / raw)
  To: Nikolay Borisov, Btrfs BTRFS

On Mon, Mar 19, 2018 at 4:37 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>
>
> On 19.03.2018 13:02, Shyam Prasad N wrote:
>> Hi,
>>
>> Attaching the sysrq-trigger output.
>
> Has this been obtained while the machine experienced a period of a lot
> of blocked threads? Because the output shows a machine which is idle?
>
Hmm... no, actually. The threads are still taking up CPU and not
responding to signals.
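
If it helps, one more thing that could be tried (assuming perf is
available; <tid> is a placeholder for one of the busy threads) is to
profile where the CPU time is actually going:

  perf top -t <tid>

or

  perf record -g -t <tid> -- sleep 10
  perf report

That should show whether the busy loop is in user space or in the
kernel.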


-- 
-Shyam


* Re: High CPU usage of RT threads...
  2018-03-19 11:48           ` Shyam Prasad N
@ 2018-03-19 11:51             ` Nikolay Borisov
  2018-03-19 12:01               ` Shyam Prasad N
  0 siblings, 1 reply; 8+ messages in thread
From: Nikolay Borisov @ 2018-03-19 11:51 UTC (permalink / raw)
  To: Shyam Prasad N, Btrfs BTRFS



On 19.03.2018 13:48, Shyam Prasad N wrote:
> On Mon, Mar 19, 2018 at 4:37 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>>
>>
>> On 19.03.2018 13:02, Shyam Prasad N wrote:
>>> Hi,
>>>
>>> Attaching the sysrq-trigger output.
>>
>> Has this been obtained while the machine experienced a period of a lot
>> of blocked threads? Because the output shows a machine which is idle?
>>
> Hmm.. No, actually. The threads are still taking up CPU, and not
> responding to signals.

Considering all the data you provided, I'm inclined to say you have a
problem of a different nature than btrfs. Nothing really points in
the direction of btrfs.
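
Since the suspicion then falls on the real-time scheduling side, it
may be worth listing which threads actually run with an RT policy and
what the throttling limits are, e.g.:

  ps -eLo pid,tid,cls,rtprio,pcpu,comm | awk '$3=="FF" || $3=="RR"'
  cat /proc/sys/kernel/sched_rt_runtime_us /proc/sys/kernel/sched_rt_period_us

That would at least confirm whether the hogging threads are the RT
ones and whether throttling is configured as expected.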

> 
> 


* Re: High CPU usage of RT threads...
  2018-03-19 11:51             ` Nikolay Borisov
@ 2018-03-19 12:01               ` Shyam Prasad N
  0 siblings, 0 replies; 8+ messages in thread
From: Shyam Prasad N @ 2018-03-19 12:01 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Btrfs BTRFS

Thank you for the analysis, Nikolay.
Will try to upgrade the kernel and check if the issue reproduces.

Regards,
Shyam

On Mon, Mar 19, 2018 at 5:21 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>
>
> On 19.03.2018 13:48, Shyam Prasad N wrote:
>> On Mon, Mar 19, 2018 at 4:37 PM, Nikolay Borisov <nborisov@suse.com> wrote:
>>>
>>>
>>> On 19.03.2018 13:02, Shyam Prasad N wrote:
>>>> Hi,
>>>>
>>>> Attaching the sysrq-trigger output.
>>>
>>> Has this been obtained while the machine experienced a period of a lot
>>> of blocked threads? Because the output shows a machine which is idle?
>>>
>> Hmm.. No, actually. The threads are still taking up CPU, and not
>> responding to signals.
>
> Considering all the data you provided I'm inclined to say you have a
> problem of different nature than btrfs. Nothing really points at the
> direction of btrfs.
>
>>
>>



-- 
-Shyam


end of thread

Thread overview: 8+ messages
2018-03-15  7:23 High CPU usage of RT threads Shyam Prasad N
2018-03-15  7:29 ` Nikolay Borisov
2018-03-19  7:13   ` Shyam Prasad N
2018-03-19  7:15     ` Nikolay Borisov
2018-03-19 11:03       ` Shyam Prasad N
     [not found]       ` <CANT5p=pD+6L76-fBN1ax=UYsqyFzh+PQcc0mK9C7poZi7vNVRg@mail.gmail.com>
     [not found]         ` <3e96b978-a14e-7e48-a327-a73bc3004256@suse.com>
2018-03-19 11:48           ` Shyam Prasad N
2018-03-19 11:51             ` Nikolay Borisov
2018-03-19 12:01               ` Shyam Prasad N
