* 4.9.52: INFO: task XXX blocked for more than 300 seconds.
@ 2017-10-04 18:26 Philipp Hahn
  2017-10-05 10:12 ` Jan Beulich
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Philipp Hahn @ 2017-10-04 18:26 UTC (permalink / raw)
  To: linux-kernel, xen-devel

Hello,

with linux-4.9.52 running on Debian-Wheezy with Xen-4.1 I observed
several stuck processes: Here is one (truncated) dump of the Linux
kernel messages:

>  [<ffffffff8160db6d>] ? __schedule+0x23d/0x6d0
>  [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160e032>] ? schedule+0x32/0x80
>  [<ffffffff8161150c>] ? schedule_timeout+0x1ec/0x360
>  [<ffffffff8130f277>] ? __blk_mq_run_hw_queue+0x327/0x3e0* see below
>  [<ffffffff8101b7f1>] ? xen_clocksource_get_cycles+0x11/0x20
>  [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160d8b4>] ? io_schedule_timeout+0xb4/0x130
>  [<ffffffff810bb6f7>] ? prepare_to_wait+0x57/0x80
>  [<ffffffff8160e8b7>] ? bit_wait_io+0x17/0x60
>  [<ffffffff8160e3ac>] ? __wait_on_bit+0x5c/0x90
>  [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160e50e>] ? out_of_line_wait_on_bit+0x7e/0xa0
>  [<ffffffff810bba20>] ? autoremove_wake_function+0x40/0x40
>  [<ffffffffc00abd18>] ? jbd2_journal_commit_transaction+0xd48/0x17e0 [jbd2]
>  [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>  [<ffffffff810e67fd>] ? try_to_del_timer_sync+0x4d/0x80
>  [<ffffffffc00b099d>] ? kjournald2+0xdd/0x280 [jbd2]
>  [<ffffffff810bb9e0>] ? wake_up_atomic_t+0x30/0x30
>  [<ffffffffc00b08c0>] ? commit_timeout+0x10/0x10 [jbd2]
>  [<ffffffff81097660>] ? kthread+0xf0/0x110
>  [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>  [<ffffffff81097570>] ? kthread_park+0x60/0x60
>  [<ffffffff81612bb5>] ? ret_from_fork+0x25/0x30
> NMI backtrace for cpu 2
> CPU: 2 PID: 35 Comm: khungtaskd Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
>  0000000000000000 ffffffff81331935 0000000000000000 0000000000000002
>  ffffffff81335e60 0000000000000002 ffffffff8104cb70 ffff8801f0c90e80
>  ffffffff81335f6a ffff8801f0c90e80 00000000003fffbc ffffffff81128048
> Call Trace:
>  [<ffffffff81331935>] ? dump_stack+0x5c/0x77
>  [<ffffffff81335e60>] ? nmi_cpu_backtrace+0x90/0xa0
>  [<ffffffff8104cb70>] ? irq_force_complete_move+0x140/0x140
>  [<ffffffff81335f6a>] ? nmi_trigger_cpumask_backtrace+0xfa/0x130
>  [<ffffffff81128048>] ? watchdog+0x2b8/0x330
>  [<ffffffff81127d90>] ? reset_hung_task_detector+0x10/0x10
>  [<ffffffff81097660>] ? kthread+0xf0/0x110
>  [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>  [<ffffffff81097570>] ? kthread_park+0x60/0x60
>  [<ffffffff81612bb5>] ? ret_from_fork+0x25/0x30
> Sending NMI from CPU 2 to CPUs 0-1,3:
> NMI backtrace for cpu 1
> CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
> task: ffff8801f4a02ec0 task.stack: ffffc90040ca4000
> RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
> RSP: e02b:ffffc90040ca7ed0  EFLAGS: 00000246
> RAX: 0000000000000000 RBX: ffff8801f4a02ec0 RCX: ffffffff810013aa
> RDX: ffffffff81c4ba70 RSI: 0000000000000000 RDI: 0000000000000001
> RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: ffff8801f4a02ec0 R15: ffff8801f4a02ec0
> FS:  00007f23ac595700(0000) GS:ffff8801f5a80000(0000) knlGS:0000000000000000
> CS:  e033 DS: 002b ES: 002b CR0: 0000000080050033
> CR2: 00007f52537d6d46 CR3: 00000001bba23000 CR4: 0000000000002660
> Stack:
>  ffff8801bb832201 0000000000000001 ffffffff8101b55c ffffffff81611ec8
>  ffff8801f4a02ec0 0000000000000001 ffffffff810bc280 ffff8801f4a02ec0
>  ffff8801f4a02ec0 c0a995961d41240f addce6dcadd009c9 0000000000000000
> Call Trace:
>  [<ffffffff8101b55c>] ? xen_safe_halt+0xc/0x20
>  [<ffffffff81611ec8>] ? default_idle+0x18/0xd0
>  [<ffffffff810bc280>] ? cpu_startup_entry+0x1f0/0x260
> Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
> NMI backtrace for cpu 3
> CPU: 3 PID: 0 Comm: swapper/3 Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
> task: ffff8801f4a24f00 task.stack: ffffc90040cb4000
> RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
> RSP: e02b:ffffc90040cb7ed0  EFLAGS: 00000246
> RAX: 0000000000000000 RBX: ffff8801f4a24f00 RCX: ffffffff810013aa
> RDX: ffffffff81c4ba70 RSI: 0000000000000000 RDI: 0000000000000001
> RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: ffff8801f4a24f00 R15: ffff8801f4a24f00
> FS:  00007f1a2af19700(0000) GS:ffff8801f5b80000(0000) knlGS:0000000000000000
> CS:  e033 DS: 002b ES: 002b CR0: 0000000080050033
> CR2: 00007f4a5416b000 CR3: 00000001d83ec000 CR4: 0000000000002660
> Stack:
>  0000000000000001 0000000000000001 ffffffff8101b55c ffffffff81611ec8
>  ffff8801f4a24f00 0000000000000003 ffffffff810bc280 ffff8801f4a24f00
>  ffff8801f4a24f00 77816deb133b9979 addce6dcadd009c9 0000000000000000
> Call Trace:
>  [<ffffffff8101b55c>] ? xen_safe_halt+0xc/0x20
>  [<ffffffff81611ec8>] ? default_idle+0x18/0xd0
>  [<ffffffff810bc280>] ? cpu_startup_entry+0x1f0/0x260
> Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
> NMI backtrace for cpu 0
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
> task: ffffffff81c0e540 task.stack: ffffffff81c00000
> RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
> RSP: e02b:ffffffff81c03e90  EFLAGS: 00000246
> RAX: 0000000000000000 RBX: ffffffff81c0e540 RCX: ffffffff810013aa
> RDX: ffffffff81c4ba70 RSI: 0000000000000000 RDI: 0000000000000001
> RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: ffffffff81c0e540 R15: ffffffff81c0e540
> FS:  00007f977d3c2700(0000) GS:ffff8801f5a00000(0000) knlGS:0000000000000000
> CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 000055ae9becabc8 CR3: 00000001d6a40000 CR4: 0000000000002660
> Stack:
>  0000000000000001 0000000000000001 ffffffff8101b55c ffffffff81611ec8
>  ffffffff81c0e540 0000000000000000 ffffffff810bc280 ffffffff81c0e540
>  ffffffff81c0e540 ebeac214e330bd3f addce6dcadd009c9 ffffffffffffffff
> Call Trace:
>  [<ffffffff8101b55c>] ? xen_safe_halt+0xc/0x20
>  [<ffffffff81611ec8>] ? default_idle+0x18/0xd0
>  [<ffffffff810bc280>] ? cpu_startup_entry+0x1f0/0x260
>  [<ffffffff81d4bf84>] ? start_kernel+0x46d/0x48d
>  [<ffffffff81d510e6>] ? xen_start_kernel+0x52e/0x538
> Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 


Looking at the disassembly of xen_clocksource_get_cycles() in
arch/x86/xen/time.c, I see no path by which it could call
__blk_mq_run_hw_queue():

> 00000000000001a0 <xen_clocksource_get_cycles> mov    %gs:0x0(%rip),%rdi        # 00000000000001a8 <xen_clocksource_get_cycles+0x8>
> 00000000000001a8 <xen_clocksource_get_cycles+0x8> add    $0x20,%rdi
>         ret = pvclock_clocksource_read(src);
> 00000000000001ac <xen_clocksource_get_cycles+0xc> callq  00000000000001b1 <xen_clocksource_get_cycles+0x11>
> }
> 
> static cycle_t xen_clocksource_get_cycles(struct clocksource *cs)
> {
>         return xen_clocksource_read();
> }
> 00000000000001b1 <xen_clocksource_get_cycles+0x11> retq   
> 00000000000001b2 <xen_clocksource_get_cycles+0x12> data16 data16 data16 data16 nopw %cs:0x0(%rax,%rax,1)
> 
> static void xen_read_wallclock(struct timespec *ts)
> {
>         struct shared_info *s = HYPERVISOR_shared_info;
>         struct pvclock_wall_clock *wall_clock = &(s->wc);
> 00000000000001c0 <xen_get_wallclock> mov    0x0(%rip),%rax        # 00000000000001c7 <xen_get_wallclock+0x7>


Here's another dump, which diverges after xen_clocksource_get_cycles()
into a completely different function:

> INFO: task btrfs-transacti:522 blocked for more than 300 seconds.
> btrfs-transacti D    0   522      2 0x00000000
>  ffff8801f3836000 0000000000000000 ffff8801f4a24f00 ffff8801f32da1c0
>  ffff8801f5b18780 ffffc9004199fa40 ffffffff8160cd2d 0000000000000000
>  ffff8801741a58a8 0000000000000102 000000000000000e ffff8801f32da1c0
> Call Trace:
>  [<ffffffff8160cd2d>] ? __schedule+0x23d/0x6d0
>  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160d1f2>] ? schedule+0x32/0x80
>  [<ffffffff81610729>] ? schedule_timeout+0x249/0x300
***
>  [<ffffffff8101b7d1>] ? xen_clocksource_get_cycles+0x11/0x20
>  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160ca74>] ? io_schedule_timeout+0xb4/0x130
>  [<ffffffff810bb5e7>] ? prepare_to_wait+0x57/0x80
>  [<ffffffff8160da77>] ? bit_wait_io+0x17/0x60
>  [<ffffffff8160d56c>] ? __wait_on_bit+0x5c/0x90
>  [<ffffffff8117e18e>] ? find_get_pages_tag+0x15e/0x300
>  [<ffffffff8117d016>] ? wait_on_page_bit+0x86/0xa0
>  [<ffffffff810bb910>] ? autoremove_wake_function+0x40/0x40
>  [<ffffffff8117d107>] ? __filemap_fdatawait_range+0xd7/0x150
>  [<ffffffff8117d18f>] ? filemap_fdatawait_range+0xf/0x30
>  [<ffffffffc0175643>] ? btrfs_wait_ordered_range+0x73/0x110 [btrfs]
>  [<ffffffffc01a2a0d>] ? btrfs_wait_cache_io+0x5d/0x1f0 [btrfs]
>  [<ffffffffc0141a36>] ? btrfs_write_dirty_block_groups+0x106/0x380 [btrfs]
>  [<ffffffffc0140c5d>] ? btrfs_run_delayed_refs+0x1fd/0x2b0 [btrfs]
>  [<ffffffffc01551d7>] ? commit_cowonly_roots+0x257/0x2f0 [btrfs]
>  [<ffffffffc0157a24>] ? btrfs_commit_transaction+0x4e4/0xa40 [btrfs]
>  [<ffffffffc015801d>] ? start_transaction+0x9d/0x4a0 [btrfs]
>  [<ffffffffc01523a2>] ? transaction_kthread+0x1b2/0x1f0 [btrfs]
>  [<ffffffffc01521f0>] ? btrfs_cleanup_transaction+0x580/0x580 [btrfs]
>  [<ffffffff810975c0>] ? kthread+0xf0/0x110
>  [<ffffffff8102476b>] ? __switch_to+0x2bb/0x700
>  [<ffffffff810974d0>] ? kthread_park+0x60/0x60
>  [<ffffffff81611d35>] ? ret_from_fork+0x25/0x30

And another one:
> INFO: task smbd:20101 blocked for more than 300 seconds.
> smbd            D    0 20101   1714 0x00000000
>  ffff8801f01cac00 0000000000000000 ffff8801f4a241c0 ffff88007f9a5240
>  ffff8801f5b98780 ffffc90049a33bc0 ffffffff8160cd2d ffff8800f78bb5b0
>  fd2236313fb5274b 000000008dc3c1a0 ffff8801f3ef0a40 ffff88007f9a5240
> Call Trace:
>  [<ffffffff8160cd2d>] ? __schedule+0x23d/0x6d0
>  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160d1f2>] ? schedule+0x32/0x80
>  [<ffffffff81610729>] ? schedule_timeout+0x249/0x300
>  [<ffffffff81336486>] ? __radix_tree_lookup+0x76/0xe0
***
>  [<ffffffff8101b7d1>] ? xen_clocksource_get_cycles+0x11/0x20
>  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160ca74>] ? io_schedule_timeout+0xb4/0x130
>  [<ffffffff810bb5e7>] ? prepare_to_wait+0x57/0x80
>  [<ffffffff8160da77>] ? bit_wait_io+0x17/0x60
>  [<ffffffff8160d56c>] ? __wait_on_bit+0x5c/0x90
>  [<ffffffffc00f290b>] ? __jbd2_journal_file_buffer+0xcb/0x180 [jbd2]
>  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>  [<ffffffff8160d6ce>] ? out_of_line_wait_on_bit+0x7e/0xa0
>  [<ffffffff810bb910>] ? autoremove_wake_function+0x40/0x40
>  [<ffffffffc00f2bc8>] ? do_get_write_access+0x208/0x420 [jbd2]
>  [<ffffffffc00f2e0e>] ? jbd2_journal_get_write_access+0x2e/0x60 [jbd2]
>  [<ffffffffc02c78a6>] ? __ext4_journal_get_write_access+0x36/0x70 [ext4]
>  [<ffffffffc02a3913>] ? ext4_orphan_add+0xd3/0x230 [ext4]
>  [<ffffffffc0296b4a>] ? ext4_mark_inode_dirty+0x6a/0x200 [ext4]
>  [<ffffffffc02a4b8a>] ? ext4_unlink+0x36a/0x380 [ext4]
>  [<ffffffff81211677>] ? vfs_unlink+0xe7/0x180
>  [<ffffffff81214439>] ? do_unlinkat+0x289/0x300
>  [<ffffffff81611abb>] ? system_call_fast_compare_end+0xc/0x9b

This does not look normal to me, or did I miss something?

Where can I get more information on why there is no progress for 300s,
and what should I do to find out which task is waiting for what?
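As a first step, one can at least enumerate which tasks are currently in uninterruptible sleep ("D" state) and roughly where they are blocked, by walking /proc. A minimal sketch (the field layout follows proc(5); the helper names are mine):

```python
import os

def parse_stat(line):
    """Parse /proc/<pid>/stat; comm may contain spaces, so split at the last ')'."""
    pid, rest = line.split(" ", 1)
    comm, rest = rest.rsplit(")", 1)
    comm = comm.lstrip("(")
    state = rest.split()[0]
    return int(pid), comm, state

def blocked_tasks(proc="/proc"):
    """Yield (pid, comm, wchan) for every task in uninterruptible sleep."""
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue
        try:
            with open(os.path.join(proc, entry, "stat")) as f:
                pid, comm, state = parse_stat(f.read())
            if state != "D":
                continue
            # wchan names the kernel function the task is sleeping in
            with open(os.path.join(proc, entry, "wchan")) as f:
                wchan = f.read().strip()
            yield pid, comm, wchan
        except OSError:
            continue  # task exited while we were scanning

if __name__ == "__main__":
    for pid, comm, wchan in blocked_tasks():
        print("%5d %-16s blocked in %s" % (pid, comm, wchan))
```

Running this repeatedly while the hang builds up shows whether the same tasks stay stuck in the same wait channel.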

The traces of the other CPUs look normal to me: the one posted first
above is the shortest; in all other cases they were sooner or later
waiting for IO (my interpretation, but I can post them if necessary).

This problem has occurred since the Linux kernel inside the VM was
upgraded from 4.1.x to 4.9.32 and now 4.9.52.

Any help is appreciated.
Philipp Hahn
-- 
Philipp Hahn
Open Source Software Engineer

Univention GmbH
be open.
Mary-Somerville-Str. 1
D-28359 Bremen
Tel.: +49 421 22232-0
Fax : +49 421 22232-99
hahn@univention.de

http://www.univention.de/
Geschäftsführer: Peter H. Ganten
HRB 20755 Amtsgericht Bremen
Steuer-Nr.: 71-597-02876

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: 4.9.52: INFO: task XXX blocked for more than 300 seconds.
  2017-10-04 18:26 4.9.52: INFO: task XXX blocked for more than 300 seconds Philipp Hahn
@ 2017-10-05 10:12 ` Jan Beulich
  2017-10-05 12:04   ` Philipp Hahn
  2017-10-05 10:38 ` Roger Pau Monné
  2017-10-05 13:20 ` Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2017-10-05 10:12 UTC (permalink / raw)
  To: Philipp Hahn; +Cc: xen-devel, linux-kernel

>>> On 04.10.17 at 20:26, <hahn@univention.de> wrote:
> Hello,
> 
> with linux-4.9.52 running on Debian-Wheezy with Xen-4.1 I observed
> several stuck processes: Here is one (truncated) dump of the Linux
> kernel messages:
> 
>>  [<ffffffff8160db6d>] ? __schedule+0x23d/0x6d0
>>  [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>  [<ffffffff8160e032>] ? schedule+0x32/0x80
>>  [<ffffffff8161150c>] ? schedule_timeout+0x1ec/0x360
>>  [<ffffffff8130f277>] ? __blk_mq_run_hw_queue+0x327/0x3e0* see below
>>  [<ffffffff8101b7f1>] ? xen_clocksource_get_cycles+0x11/0x20
[...]
> 
> 
> Looking at the disassembly of xen_clocksource_get_cycles() in
> arch/x86/xen/time.c, I see no path by which it could call
> __blk_mq_run_hw_queue():

Hence the question marks ahead of the stack entries: what you see
there are likely leftovers from prior call trees; it just so happens
that the old return-address slots haven't been overwritten yet. You
first need to sanitize the stack trace, e.g. by having the kernel
dump more of the stack in raw hex form, and then look at the
disassembly to figure out how large each stack frame is, starting
at the top-most address (i.e. the one in RIP).
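As a toy illustration of why stale entries show up: without reliable frame information, the unwinder scans every word on the stack and prints anything that falls inside the kernel text range, marking unverified hits with "?" (the addresses, ranges, and function names below are made up):

```python
# Illustrative kernel text range; real kernels derive this from _stext/_etext.
TEXT_START = 0xffffffff81000000
TEXT_END   = 0xffffffff82000000

def guess_trace(stack_words, verified):
    """Toy model of a 'guess' unwinder.

    stack_words: raw 64-bit words read off the stack dump.
    verified: set of addresses a proper frame walk actually reached.
    Every word that merely *looks like* a text address is printed too,
    but flagged with '?' because it may be a leftover from an old call.
    """
    trace = []
    for w in stack_words:
        if TEXT_START <= w < TEXT_END:
            mark = "" if w in verified else "? "
            trace.append("[<%016x>] %s" % (w, mark))
    return trace
```

So a stale return-address slot such as the __blk_mq_run_hw_queue one ends up in the printed trace even though no current frame calls it.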

Jan


* Re: 4.9.52: INFO: task XXX blocked for more than 300 seconds.
  2017-10-04 18:26 4.9.52: INFO: task XXX blocked for more than 300 seconds Philipp Hahn
  2017-10-05 10:12 ` Jan Beulich
@ 2017-10-05 10:38 ` Roger Pau Monné
  2017-10-05 13:20 ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 8+ messages in thread
From: Roger Pau Monné @ 2017-10-05 10:38 UTC (permalink / raw)
  To: Philipp Hahn; +Cc: xen-devel, linux-kernel

On Wed, Oct 04, 2017 at 06:26:27PM +0000, Philipp Hahn wrote:
> > INFO: task btrfs-transacti:522 blocked for more than 300 seconds.
[...] 
> And another one:
> > INFO: task smbd:20101 blocked for more than 300 seconds.
[...] 
> This does not look normal to me, or did I miss something?

So I see that both of the stuck processes listed above (smbd and
btrfs-*) are disk-related. Might I ask how many disks/NICs (PV) you
have attached to this DomU, and how many queues each has?

Thanks, Roger.


* Re: 4.9.52: INFO: task XXX blocked for more than 300 seconds.
  2017-10-05 10:12 ` Jan Beulich
@ 2017-10-05 12:04   ` Philipp Hahn
  0 siblings, 0 replies; 8+ messages in thread
From: Philipp Hahn @ 2017-10-05 12:04 UTC (permalink / raw)
  To: Jan Beulich, Roger Pau Monné; +Cc: xen-devel, linux-kernel

Hello Jan,

thank you for your answer.

Am 05.10.2017 um 12:12 schrieb Jan Beulich:
>>>> On 04.10.17 at 20:26, <hahn@univention.de> wrote:
>> with linux-4.9.52 running on Debian-Wheezy with Xen-4.1 I observed
>> several stuck processes: Here is one (truncated) dump of the Linux
>> kernel messages:
>>
>>>  [<ffffffff8160db6d>] ? __schedule+0x23d/0x6d0
>>>  [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>  [<ffffffff8160e032>] ? schedule+0x32/0x80
>>>  [<ffffffff8161150c>] ? schedule_timeout+0x1ec/0x360
>>>  [<ffffffff8130f277>] ? __blk_mq_run_hw_queue+0x327/0x3e0* see below
>>>  [<ffffffff8101b7f1>] ? xen_clocksource_get_cycles+0x11/0x20
...
>> Looking at the disassembly of xen_clocksource_get_cycles() in
>> arch/x86/xen/time.c, I see no path by which it could call
>> __blk_mq_run_hw_queue():
> 
> Hence the question marks ahead of the stack entries: what you see
> there are likely leftovers from prior call trees; it just so happens
> that the old return-address slots haven't been overwritten yet. You
> first need to sanitize the stack trace, e.g. by having the kernel
> dump more of the stack in raw hex form, and then look at the
> disassembly to figure out how large each stack frame is, starting
> at the top-most address (i.e. the one in RIP).

That explains the strange call traces; thank you for the
enlightenment.

> On Wed, Oct 04, 2017 at 06:26:27PM +0000, Philipp Hahn wrote:
>>> INFO: task btrfs-transacti:522 blocked for more than 300 seconds.
> [...] 
>> And another one:
>>> INFO: task smbd:20101 blocked for more than 300 seconds.
> [...] 
>> This does not look normal to me, or did I miss something?
> 
> So I see that both of the stuck processes listed above (smbd and
> btrfs-*) are disk-related. Might I ask how many disks/NICs (PV) you
> have attached to this DomU, and how many queues each has?

Nothing special is configured; how would I best fetch that info?
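For reference, the queue counts should be visible under sysfs, assuming the standard blk-mq and network sysfs layout (device names like xvda/eth0 are examples):

```python
import glob
import os

def blk_queue_count(dev, sysfs="/sys"):
    """Number of blk-mq hardware queue directories for a block device,
    e.g. /sys/block/xvda/mq/0, /sys/block/xvda/mq/1, ..."""
    return len(glob.glob(os.path.join(sysfs, "block", dev, "mq", "*")))

def net_queue_count(iface, sysfs="/sys"):
    """(rx, tx) queue counts for a network interface, read from
    /sys/class/net/<iface>/queues/{rx-*,tx-*}."""
    queues = os.listdir(os.path.join(sysfs, "class", "net", iface, "queues"))
    rx = sum(1 for q in queues if q.startswith("rx-"))
    tx = sum(1 for q in queues if q.startswith("tx-"))
    return rx, tx

if __name__ == "__main__":
    print("xvda mq queues:", blk_queue_count("xvda"))
    print("eth0 rx/tx queues:", net_queue_count("eth0"))
```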


Which leads me back to my original problem: how can I diagnose *why* the
task is blocked for that time? From my understanding this can happen if
IO is too slow and tasks just have to wait too long. ¹
Even if IO is slow, the system should stabilize itself when no new IO is
generated and the old one has been processed, right? So looking at
`vmstat` or `blktrace` should tell me that Xen/Linux/whatever is busy
with IO and is simply not fast enough to keep up with the load.
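For example, /proc/stat exports exactly that as the procs_blocked counter (tasks currently waiting on IO); a sketch of sampling it over time (function names are mine):

```python
import time

def procs_blocked(stat_text):
    """Extract the procs_blocked counter from /proc/stat contents."""
    for line in stat_text.splitlines():
        if line.startswith("procs_blocked"):
            return int(line.split()[1])
    return None

def sample(interval=1.0, count=10):
    """Print the number of IO-blocked tasks once per interval;
    a value that only grows suggests IO is not keeping up."""
    for _ in range(count):
        with open("/proc/stat") as f:
            print("blocked:", procs_blocked(f.read()))
        time.sleep(interval)
```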

Thanks again; any hint on how to diagnose this would help.

Philipp

¹
<https://www.blackmoreops.com/2014/09/22/linux-kernel-panic-issue-fix-hung_task_timeout_secs-blocked-120-seconds-problem/>


* Re: 4.9.52: INFO: task XXX blocked for more than 300 seconds.
  2017-10-04 18:26 4.9.52: INFO: task XXX blocked for more than 300 seconds Philipp Hahn
  2017-10-05 10:12 ` Jan Beulich
  2017-10-05 10:38 ` Roger Pau Monné
@ 2017-10-05 13:20 ` Konrad Rzeszutek Wilk
  2017-10-05 17:59   ` Ankur Arora
  2 siblings, 1 reply; 8+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-10-05 13:20 UTC (permalink / raw)
  To: Philipp Hahn, Ankur Arora; +Cc: xen-devel, linux-kernel

On Wed, Oct 04, 2017 at 08:26:27PM +0200, Philipp Hahn wrote:
> Hello,
> 

Adding Ankur to this as I think he saw something similar. 

But in the meantime, do you see this with the latest version of Linux?
> with linux-4.9.52 running on Debian-Wheezy with Xen-4.1 I observed
> several stuck processes: Here is one (truncated) dump of the Linux
> kernel messages:
> 
> >  [<ffffffff8160db6d>] ? __schedule+0x23d/0x6d0
> >  [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
> >  [<ffffffff8160e032>] ? schedule+0x32/0x80
> >  [<ffffffff8161150c>] ? schedule_timeout+0x1ec/0x360
> >  [<ffffffff8130f277>] ? __blk_mq_run_hw_queue+0x327/0x3e0* see below
> >  [<ffffffff8101b7f1>] ? xen_clocksource_get_cycles+0x11/0x20
[...]
> > Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
> > NMI backtrace for cpu 0
> > CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
> > task: ffffffff81c0e540 task.stack: ffffffff81c00000
> > RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
> > RSP: e02b:ffffffff81c03e90  EFLAGS: 00000246
> > RAX: 0000000000000000 RBX: ffffffff81c0e540 RCX: ffffffff810013aa
> > RDX: ffffffff81c4ba70 RSI: 0000000000000000 RDI: 0000000000000001
> > RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
> > R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000000000
> > R13: 0000000000000000 R14: ffffffff81c0e540 R15: ffffffff81c0e540
> > FS:  00007f977d3c2700(0000) GS:ffff8801f5a00000(0000) knlGS:0000000000000000
> > CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> > CR2: 000055ae9becabc8 CR3: 00000001d6a40000 CR4: 0000000000002660
> > Stack:
> >  0000000000000001 0000000000000001 ffffffff8101b55c ffffffff81611ec8
> >  ffffffff81c0e540 0000000000000000 ffffffff810bc280 ffffffff81c0e540
> >  ffffffff81c0e540 ebeac214e330bd3f addce6dcadd009c9 ffffffffffffffff
> > Call Trace:
> >  [<ffffffff8101b55c>] ? xen_safe_halt+0xc/0x20
> >  [<ffffffff81611ec8>] ? default_idle+0x18/0xd0
> >  [<ffffffff810bc280>] ? cpu_startup_entry+0x1f0/0x260
> >  [<ffffffff81d4bf84>] ? start_kernel+0x46d/0x48d
> >  [<ffffffff81d510e6>] ? xen_start_kernel+0x52e/0x538
> > Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 
> 
> 
> > Looking at the disassembly of xen_clocksource_get_cycles() in
> > arch/x86/xen/time.c, I see no path by which it could call
> > __blk_mq_run_hw_queue():
> 
> > 00000000000001a0 <xen_clocksource_get_cycles> mov    %gs:0x0(%rip),%rdi        # 00000000000001a8 <xen_clocksource_get_cycles+0x8>
> > 00000000000001a8 <xen_clocksource_get_cycles+0x8> add    $0x20,%rdi
> >         ret = pvclock_clocksource_read(src);
> > 00000000000001ac <xen_clocksource_get_cycles+0xc> callq  00000000000001b1 <xen_clocksource_get_cycles+0x11>
> > }
> > 
> > static cycle_t xen_clocksource_get_cycles(struct clocksource *cs)
> > {
> >         return xen_clocksource_read();
> > }
> > 00000000000001b1 <xen_clocksource_get_cycles+0x11> retq   
> > 00000000000001b2 <xen_clocksource_get_cycles+0x12> data16 data16 data16 data16 nopw %cs:0x0(%rax,%rax,1)
> > 
> > static void xen_read_wallclock(struct timespec *ts)
> > {
> >         struct shared_info *s = HYPERVISOR_shared_info;
> >         struct pvclock_wall_clock *wall_clock = &(s->wc);
> > 00000000000001c0 <xen_get_wallclock> mov    0x0(%rip),%rax        # 00000000000001c7 <xen_get_wallclock+0x7>
> 
> 
> > Here's another dump, which diverges from xen_clocksource_get_cycles() into
> > a completely different function:
> 
> > INFO: task btrfs-transacti:522 blocked for more than 300 seconds.
> > btrfs-transacti D    0   522      2 0x00000000
> >  ffff8801f3836000 0000000000000000 ffff8801f4a24f00 ffff8801f32da1c0
> >  ffff8801f5b18780 ffffc9004199fa40 ffffffff8160cd2d 0000000000000000
> >  ffff8801741a58a8 0000000000000102 000000000000000e ffff8801f32da1c0
> > Call Trace:
> >  [<ffffffff8160cd2d>] ? __schedule+0x23d/0x6d0
> >  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
> >  [<ffffffff8160d1f2>] ? schedule+0x32/0x80
> >  [<ffffffff81610729>] ? schedule_timeout+0x249/0x300
> ***
> >  [<ffffffff8101b7d1>] ? xen_clocksource_get_cycles+0x11/0x20
> >  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
> >  [<ffffffff8160ca74>] ? io_schedule_timeout+0xb4/0x130
> >  [<ffffffff810bb5e7>] ? prepare_to_wait+0x57/0x80
> >  [<ffffffff8160da77>] ? bit_wait_io+0x17/0x60
> >  [<ffffffff8160d56c>] ? __wait_on_bit+0x5c/0x90
> >  [<ffffffff8117e18e>] ? find_get_pages_tag+0x15e/0x300
> >  [<ffffffff8117d016>] ? wait_on_page_bit+0x86/0xa0
> >  [<ffffffff810bb910>] ? autoremove_wake_function+0x40/0x40
> >  [<ffffffff8117d107>] ? __filemap_fdatawait_range+0xd7/0x150
> >  [<ffffffff8117d18f>] ? filemap_fdatawait_range+0xf/0x30
> >  [<ffffffffc0175643>] ? btrfs_wait_ordered_range+0x73/0x110 [btrfs]
> >  [<ffffffffc01a2a0d>] ? btrfs_wait_cache_io+0x5d/0x1f0 [btrfs]
> >  [<ffffffffc0141a36>] ? btrfs_write_dirty_block_groups+0x106/0x380 [btrfs]
> >  [<ffffffffc0140c5d>] ? btrfs_run_delayed_refs+0x1fd/0x2b0 [btrfs]
> >  [<ffffffffc01551d7>] ? commit_cowonly_roots+0x257/0x2f0 [btrfs]
> >  [<ffffffffc0157a24>] ? btrfs_commit_transaction+0x4e4/0xa40 [btrfs]
> >  [<ffffffffc015801d>] ? start_transaction+0x9d/0x4a0 [btrfs]
> >  [<ffffffffc01523a2>] ? transaction_kthread+0x1b2/0x1f0 [btrfs]
> >  [<ffffffffc01521f0>] ? btrfs_cleanup_transaction+0x580/0x580 [btrfs]
> >  [<ffffffff810975c0>] ? kthread+0xf0/0x110
> >  [<ffffffff8102476b>] ? __switch_to+0x2bb/0x700
> >  [<ffffffff810974d0>] ? kthread_park+0x60/0x60
> >  [<ffffffff81611d35>] ? ret_from_fork+0x25/0x30
> 
> And another one:
> > INFO: task smbd:20101 blocked for more than 300 seconds.
> > smbd            D    0 20101   1714 0x00000000
> >  ffff8801f01cac00 0000000000000000 ffff8801f4a241c0 ffff88007f9a5240
> >  ffff8801f5b98780 ffffc90049a33bc0 ffffffff8160cd2d ffff8800f78bb5b0
> >  fd2236313fb5274b 000000008dc3c1a0 ffff8801f3ef0a40 ffff88007f9a5240
> > Call Trace:
> >  [<ffffffff8160cd2d>] ? __schedule+0x23d/0x6d0
> >  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
> >  [<ffffffff8160d1f2>] ? schedule+0x32/0x80
> >  [<ffffffff81610729>] ? schedule_timeout+0x249/0x300
> >  [<ffffffff81336486>] ? __radix_tree_lookup+0x76/0xe0
> ***
> >  [<ffffffff8101b7d1>] ? xen_clocksource_get_cycles+0x11/0x20
> >  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
> >  [<ffffffff8160ca74>] ? io_schedule_timeout+0xb4/0x130
> >  [<ffffffff810bb5e7>] ? prepare_to_wait+0x57/0x80
> >  [<ffffffff8160da77>] ? bit_wait_io+0x17/0x60
> >  [<ffffffff8160d56c>] ? __wait_on_bit+0x5c/0x90
> >  [<ffffffffc00f290b>] ? __jbd2_journal_file_buffer+0xcb/0x180 [jbd2]
> >  [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
> >  [<ffffffff8160d6ce>] ? out_of_line_wait_on_bit+0x7e/0xa0
> >  [<ffffffff810bb910>] ? autoremove_wake_function+0x40/0x40
> >  [<ffffffffc00f2bc8>] ? do_get_write_access+0x208/0x420 [jbd2]
> >  [<ffffffffc00f2e0e>] ? jbd2_journal_get_write_access+0x2e/0x60 [jbd2]
> >  [<ffffffffc02c78a6>] ? __ext4_journal_get_write_access+0x36/0x70 [ext4]
> >  [<ffffffffc02a3913>] ? ext4_orphan_add+0xd3/0x230 [ext4]
> >  [<ffffffffc0296b4a>] ? ext4_mark_inode_dirty+0x6a/0x200 [ext4]
> >  [<ffffffffc02a4b8a>] ? ext4_unlink+0x36a/0x380 [ext4]
> >  [<ffffffff81211677>] ? vfs_unlink+0xe7/0x180
> >  [<ffffffff81214439>] ? do_unlinkat+0x289/0x300
> >  [<ffffffff81611abb>] ? system_call_fast_compare_end+0xc/0x9b
> 
> > This does not look normal to me, or did I miss something?
> 
> > Where can I get more information on why there is no progress for 300 s,
> > and what should I do to debug which task is waiting for what?
> 
> > The traces of the other CPUs look normal to me: the one posted first
> > above is the shortest; in all other cases they were sooner or later
> > waiting for I/O (my interpretation, but I can post them if necessary).
> 
> > This problem has occurred since upgrading the Linux kernel inside the VM
> > from 4.1.x to 4.9.32 and now 4.9.52.
> 
> Any help is appreciated.
> Philipp Hahn
> -- 
> Philipp Hahn
> Open Source Software Engineer
> 
> Univention GmbH
> be open.
> Mary-Somerville-Str. 1
> D-28359 Bremen
> Tel.: +49 421 22232-0
> Fax : +49 421 22232-99
> hahn@univention.de
> 
> http://www.univention.de/
> Geschäftsführer: Peter H. Ganten
> HRB 20755 Amtsgericht Bremen
> Steuer-Nr.: 71-597-02876
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: 4.9.52: INFO: task XXX blocked for more than 300 seconds.
  2017-10-05 13:20 ` Konrad Rzeszutek Wilk
@ 2017-10-05 17:59   ` Ankur Arora
  2017-10-19 14:34     ` 4.9.52: INFO: task X " Philipp Hahn
  2017-10-19 14:34     ` [Xen-devel] " Philipp Hahn
  0 siblings, 2 replies; 8+ messages in thread
From: Ankur Arora @ 2017-10-05 17:59 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Philipp Hahn; +Cc: xen-devel, linux-kernel

On 2017-10-05 06:20 AM, Konrad Rzeszutek Wilk wrote:
> On Wed, Oct 04, 2017 at 08:26:27PM +0200, Philipp Hahn wrote:
>> Hello,
>>
> 
> Adding Ankur to this as I think he saw something similar.
> 
> But in the meantime - do you see this with the latest version of Linux?
>> with linux-4.9.52 running on Debian-Wheezy with Xen-4.1 I observed
>> several stuck processes: Here is one (truncated) dump of the Linux
>> kernel messages:
>>
>>>   [<ffffffff8160db6d>] ? __schedule+0x23d/0x6d0
>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160e032>] ? schedule+0x32/0x80
>>>   [<ffffffff8161150c>] ? schedule_timeout+0x1ec/0x360
>>>   [<ffffffff8130f277>] ? __blk_mq_run_hw_queue+0x327/0x3e0* see below
>>>   [<ffffffff8101b7f1>] ? xen_clocksource_get_cycles+0x11/0x20
>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160d8b4>] ? io_schedule_timeout+0xb4/0x130
>>>   [<ffffffff810bb6f7>] ? prepare_to_wait+0x57/0x80
>>>   [<ffffffff8160e8b7>] ? bit_wait_io+0x17/0x60
>>>   [<ffffffff8160e3ac>] ? __wait_on_bit+0x5c/0x90
>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160e50e>] ? out_of_line_wait_on_bit+0x7e/0xa0
>>>   [<ffffffff810bba20>] ? autoremove_wake_function+0x40/0x40
>>>   [<ffffffffc00abd18>] ? jbd2_journal_commit_transaction+0xd48/0x17e0 [jbd2]
>>>   [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>>>   [<ffffffff810e67fd>] ? try_to_del_timer_sync+0x4d/0x80
>>>   [<ffffffffc00b099d>] ? kjournald2+0xdd/0x280 [jbd2]
>>>   [<ffffffff810bb9e0>] ? wake_up_atomic_t+0x30/0x30
>>>   [<ffffffffc00b08c0>] ? commit_timeout+0x10/0x10 [jbd2]
>>>   [<ffffffff81097660>] ? kthread+0xf0/0x110
>>>   [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>>>   [<ffffffff81097570>] ? kthread_park+0x60/0x60
>>>   [<ffffffff81612bb5>] ? ret_from_fork+0x25/0x30
This looks like this race: https://patchwork.kernel.org/patch/9853443/

Can you dump the output of: cat /sys/block/$xen-frontend-device/mq/*/tags

If you've hit this bug, one or more of the MQs will be wedged: nr_free
in one or more of the queues will be 0 and will not change.
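As a sketch of how that check could be automated (not from the original mail;
the "nr_free=..." field format is an assumption based on the 4.9-era blk-mq
sysfs layout and may differ on other kernel versions):

```shell
# check_tags FILE: report the queue as wedged if its nr_free counter is 0.
# Assumes the tags file contains a line like "nr_free=31, nr_reserved=0".
check_tags() {
    free=$(awk -F= '/nr_free/ {print $2 + 0; exit}' "$1")
    if [ "$free" -eq 0 ]; then
        echo "WEDGED: $1 (nr_free=0)"
    fi
}

# In the guest, run it over every blkfront queue, e.g.:
#   for f in /sys/block/xvd*/mq/*/tags; do check_tags "$f"; done
```

Running it twice a few seconds apart distinguishes a genuinely wedged queue
(nr_free stays 0) from one that is merely busy.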

Ankur

>>> NMI backtrace for cpu 2
>>> CPU: 2 PID: 35 Comm: khungtaskd Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
>>>   0000000000000000 ffffffff81331935 0000000000000000 0000000000000002
>>>   ffffffff81335e60 0000000000000002 ffffffff8104cb70 ffff8801f0c90e80
>>>   ffffffff81335f6a ffff8801f0c90e80 00000000003fffbc ffffffff81128048
>>> Call Trace:
>>>   [<ffffffff81331935>] ? dump_stack+0x5c/0x77
>>>   [<ffffffff81335e60>] ? nmi_cpu_backtrace+0x90/0xa0
>>>   [<ffffffff8104cb70>] ? irq_force_complete_move+0x140/0x140
>>>   [<ffffffff81335f6a>] ? nmi_trigger_cpumask_backtrace+0xfa/0x130
>>>   [<ffffffff81128048>] ? watchdog+0x2b8/0x330
>>>   [<ffffffff81127d90>] ? reset_hung_task_detector+0x10/0x10
>>>   [<ffffffff81097660>] ? kthread+0xf0/0x110
>>>   [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>>>   [<ffffffff81097570>] ? kthread_park+0x60/0x60
>>>   [<ffffffff81612bb5>] ? ret_from_fork+0x25/0x30
>>> Sending NMI from CPU 2 to CPUs 0-1,3:
>>> NMI backtrace for cpu 1
>>> CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
>>> task: ffff8801f4a02ec0 task.stack: ffffc90040ca4000
>>> RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
>>> RSP: e02b:ffffc90040ca7ed0  EFLAGS: 00000246
>>> RAX: 0000000000000000 RBX: ffff8801f4a02ec0 RCX: ffffffff810013aa
>>> RDX: ffffffff81c4ba70 RSI: 0000000000000000 RDI: 0000000000000001
>>> RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
>>> R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000000000
>>> R13: 0000000000000000 R14: ffff8801f4a02ec0 R15: ffff8801f4a02ec0
>>> FS:  00007f23ac595700(0000) GS:ffff8801f5a80000(0000) knlGS:0000000000000000
>>> CS:  e033 DS: 002b ES: 002b CR0: 0000000080050033
>>> CR2: 00007f52537d6d46 CR3: 00000001bba23000 CR4: 0000000000002660
>>> Stack:
>>>   ffff8801bb832201 0000000000000001 ffffffff8101b55c ffffffff81611ec8
>>>   ffff8801f4a02ec0 0000000000000001 ffffffff810bc280 ffff8801f4a02ec0
>>>   ffff8801f4a02ec0 c0a995961d41240f addce6dcadd009c9 0000000000000000
>>> Call Trace:
>>>   [<ffffffff8101b55c>] ? xen_safe_halt+0xc/0x20
>>>   [<ffffffff81611ec8>] ? default_idle+0x18/0xd0
>>>   [<ffffffff810bc280>] ? cpu_startup_entry+0x1f0/0x260
>>> Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>>> NMI backtrace for cpu 3
>>> CPU: 3 PID: 0 Comm: swapper/3 Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
>>> task: ffff8801f4a24f00 task.stack: ffffc90040cb4000
>>> RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
>>> RSP: e02b:ffffc90040cb7ed0  EFLAGS: 00000246
>>> RAX: 0000000000000000 RBX: ffff8801f4a24f00 RCX: ffffffff810013aa
>>> RDX: ffffffff81c4ba70 RSI: 0000000000000000 RDI: 0000000000000001
>>> RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
>>> R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000000000
>>> R13: 0000000000000000 R14: ffff8801f4a24f00 R15: ffff8801f4a24f00
>>> FS:  00007f1a2af19700(0000) GS:ffff8801f5b80000(0000) knlGS:0000000000000000
>>> CS:  e033 DS: 002b ES: 002b CR0: 0000000080050033
>>> CR2: 00007f4a5416b000 CR3: 00000001d83ec000 CR4: 0000000000002660
>>> Stack:
>>>   0000000000000001 0000000000000001 ffffffff8101b55c ffffffff81611ec8
>>>   ffff8801f4a24f00 0000000000000003 ffffffff810bc280 ffff8801f4a24f00
>>>   ffff8801f4a24f00 77816deb133b9979 addce6dcadd009c9 0000000000000000
>>> Call Trace:
>>>   [<ffffffff8101b55c>] ? xen_safe_halt+0xc/0x20
>>>   [<ffffffff81611ec8>] ? default_idle+0x18/0xd0
>>>   [<ffffffff810bc280>] ? cpu_startup_entry+0x1f0/0x260
>>> Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>>> NMI backtrace for cpu 0
>>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201709271649
>>> task: ffffffff81c0e540 task.stack: ffffffff81c00000
>>> RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
>>> RSP: e02b:ffffffff81c03e90  EFLAGS: 00000246
>>> RAX: 0000000000000000 RBX: ffffffff81c0e540 RCX: ffffffff810013aa
>>> RDX: ffffffff81c4ba70 RSI: 0000000000000000 RDI: 0000000000000001
>>> RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
>>> R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000000000
>>> R13: 0000000000000000 R14: ffffffff81c0e540 R15: ffffffff81c0e540
>>> FS:  00007f977d3c2700(0000) GS:ffff8801f5a00000(0000) knlGS:0000000000000000
>>> CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> CR2: 000055ae9becabc8 CR3: 00000001d6a40000 CR4: 0000000000002660
>>> Stack:
>>>   0000000000000001 0000000000000001 ffffffff8101b55c ffffffff81611ec8
>>>   ffffffff81c0e540 0000000000000000 ffffffff810bc280 ffffffff81c0e540
>>>   ffffffff81c0e540 ebeac214e330bd3f addce6dcadd009c9 ffffffffffffffff
>>> Call Trace:
>>>   [<ffffffff8101b55c>] ? xen_safe_halt+0xc/0x20
>>>   [<ffffffff81611ec8>] ? default_idle+0x18/0xd0
>>>   [<ffffffff810bc280>] ? cpu_startup_entry+0x1f0/0x260
>>>   [<ffffffff81d4bf84>] ? start_kernel+0x46d/0x48d
>>>   [<ffffffff81d510e6>] ? xen_start_kernel+0x52e/0x538
>>> Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>>
>>
>> Looking at the disassembly of xen_clocksource_get_cycles() in
>> arch/x86/xen/time.c, I see no path by which it could call
>> __blk_mq_run_hw_queue():
>>
>>> 00000000000001a0 <xen_clocksource_get_cycles> mov    %gs:0x0(%rip),%rdi        # 00000000000001a8 <xen_clocksource_get_cycles+0x8>
>>> 00000000000001a8 <xen_clocksource_get_cycles+0x8> add    $0x20,%rdi
>>>          ret = pvclock_clocksource_read(src);
>>> 00000000000001ac <xen_clocksource_get_cycles+0xc> callq  00000000000001b1 <xen_clocksource_get_cycles+0x11>
>>> }
>>>
>>> static cycle_t xen_clocksource_get_cycles(struct clocksource *cs)
>>> {
>>>          return xen_clocksource_read();
>>> }
>>> 00000000000001b1 <xen_clocksource_get_cycles+0x11> retq
>>> 00000000000001b2 <xen_clocksource_get_cycles+0x12> data16 data16 data16 data16 nopw %cs:0x0(%rax,%rax,1)
>>>
>>> static void xen_read_wallclock(struct timespec *ts)
>>> {
>>>          struct shared_info *s = HYPERVISOR_shared_info;
>>>          struct pvclock_wall_clock *wall_clock = &(s->wc);
>>> 00000000000001c0 <xen_get_wallclock> mov    0x0(%rip),%rax        # 00000000000001c7 <xen_get_wallclock+0x7>
>>
>>
>> Here's another dump, which diverges from xen_clocksource_get_cycles() into
>> a completely different function:
>>
>>> INFO: task btrfs-transacti:522 blocked for more than 300 seconds.
>>> btrfs-transacti D    0   522      2 0x00000000
>>>   ffff8801f3836000 0000000000000000 ffff8801f4a24f00 ffff8801f32da1c0
>>>   ffff8801f5b18780 ffffc9004199fa40 ffffffff8160cd2d 0000000000000000
>>>   ffff8801741a58a8 0000000000000102 000000000000000e ffff8801f32da1c0
>>> Call Trace:
>>>   [<ffffffff8160cd2d>] ? __schedule+0x23d/0x6d0
>>>   [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160d1f2>] ? schedule+0x32/0x80
>>>   [<ffffffff81610729>] ? schedule_timeout+0x249/0x300
>> ***
>>>   [<ffffffff8101b7d1>] ? xen_clocksource_get_cycles+0x11/0x20
>>>   [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160ca74>] ? io_schedule_timeout+0xb4/0x130
>>>   [<ffffffff810bb5e7>] ? prepare_to_wait+0x57/0x80
>>>   [<ffffffff8160da77>] ? bit_wait_io+0x17/0x60
>>>   [<ffffffff8160d56c>] ? __wait_on_bit+0x5c/0x90
>>>   [<ffffffff8117e18e>] ? find_get_pages_tag+0x15e/0x300
>>>   [<ffffffff8117d016>] ? wait_on_page_bit+0x86/0xa0
>>>   [<ffffffff810bb910>] ? autoremove_wake_function+0x40/0x40
>>>   [<ffffffff8117d107>] ? __filemap_fdatawait_range+0xd7/0x150
>>>   [<ffffffff8117d18f>] ? filemap_fdatawait_range+0xf/0x30
>>>   [<ffffffffc0175643>] ? btrfs_wait_ordered_range+0x73/0x110 [btrfs]
>>>   [<ffffffffc01a2a0d>] ? btrfs_wait_cache_io+0x5d/0x1f0 [btrfs]
>>>   [<ffffffffc0141a36>] ? btrfs_write_dirty_block_groups+0x106/0x380 [btrfs]
>>>   [<ffffffffc0140c5d>] ? btrfs_run_delayed_refs+0x1fd/0x2b0 [btrfs]
>>>   [<ffffffffc01551d7>] ? commit_cowonly_roots+0x257/0x2f0 [btrfs]
>>>   [<ffffffffc0157a24>] ? btrfs_commit_transaction+0x4e4/0xa40 [btrfs]
>>>   [<ffffffffc015801d>] ? start_transaction+0x9d/0x4a0 [btrfs]
>>>   [<ffffffffc01523a2>] ? transaction_kthread+0x1b2/0x1f0 [btrfs]
>>>   [<ffffffffc01521f0>] ? btrfs_cleanup_transaction+0x580/0x580 [btrfs]
>>>   [<ffffffff810975c0>] ? kthread+0xf0/0x110
>>>   [<ffffffff8102476b>] ? __switch_to+0x2bb/0x700
>>>   [<ffffffff810974d0>] ? kthread_park+0x60/0x60
>>>   [<ffffffff81611d35>] ? ret_from_fork+0x25/0x30
>>
>> And another one:
>>> INFO: task smbd:20101 blocked for more than 300 seconds.
>>> smbd            D    0 20101   1714 0x00000000
>>>   ffff8801f01cac00 0000000000000000 ffff8801f4a241c0 ffff88007f9a5240
>>>   ffff8801f5b98780 ffffc90049a33bc0 ffffffff8160cd2d ffff8800f78bb5b0
>>>   fd2236313fb5274b 000000008dc3c1a0 ffff8801f3ef0a40 ffff88007f9a5240
>>> Call Trace:
>>>   [<ffffffff8160cd2d>] ? __schedule+0x23d/0x6d0
>>>   [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160d1f2>] ? schedule+0x32/0x80
>>>   [<ffffffff81610729>] ? schedule_timeout+0x249/0x300
>>>   [<ffffffff81336486>] ? __radix_tree_lookup+0x76/0xe0
>> ***
>>>   [<ffffffff8101b7d1>] ? xen_clocksource_get_cycles+0x11/0x20
>>>   [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160ca74>] ? io_schedule_timeout+0xb4/0x130
>>>   [<ffffffff810bb5e7>] ? prepare_to_wait+0x57/0x80
>>>   [<ffffffff8160da77>] ? bit_wait_io+0x17/0x60
>>>   [<ffffffff8160d56c>] ? __wait_on_bit+0x5c/0x90
>>>   [<ffffffffc00f290b>] ? __jbd2_journal_file_buffer+0xcb/0x180 [jbd2]
>>>   [<ffffffff8160da60>] ? bit_wait_timeout+0x90/0x90
>>>   [<ffffffff8160d6ce>] ? out_of_line_wait_on_bit+0x7e/0xa0
>>>   [<ffffffff810bb910>] ? autoremove_wake_function+0x40/0x40
>>>   [<ffffffffc00f2bc8>] ? do_get_write_access+0x208/0x420 [jbd2]
>>>   [<ffffffffc00f2e0e>] ? jbd2_journal_get_write_access+0x2e/0x60 [jbd2]
>>>   [<ffffffffc02c78a6>] ? __ext4_journal_get_write_access+0x36/0x70 [ext4]
>>>   [<ffffffffc02a3913>] ? ext4_orphan_add+0xd3/0x230 [ext4]
>>>   [<ffffffffc0296b4a>] ? ext4_mark_inode_dirty+0x6a/0x200 [ext4]
>>>   [<ffffffffc02a4b8a>] ? ext4_unlink+0x36a/0x380 [ext4]
>>>   [<ffffffff81211677>] ? vfs_unlink+0xe7/0x180
>>>   [<ffffffff81214439>] ? do_unlinkat+0x289/0x300
>>>   [<ffffffff81611abb>] ? system_call_fast_compare_end+0xc/0x9b
>>
>> This does not look normal to me, or did I miss something?
>>
>> Where can I get more information on why there is no progress for 300 s,
>> and what should I do to debug which task is waiting for what?
>>
>> The traces of the other CPUs look normal to me: the one posted first
>> above is the shortest; in all other cases they were sooner or later
>> waiting for I/O (my interpretation, but I can post them if necessary).
>>
>> This problem has occurred since upgrading the Linux kernel inside the VM
>> from 4.1.x to 4.9.32 and now 4.9.52.
>>
>> Any help is appreciated.
>> Philipp Hahn


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Xen-devel] 4.9.52: INFO: task X blocked for more than 300 seconds.
  2017-10-05 17:59   ` Ankur Arora
  2017-10-19 14:34     ` 4.9.52: INFO: task X " Philipp Hahn
@ 2017-10-19 14:34     ` Philipp Hahn
  1 sibling, 0 replies; 8+ messages in thread
From: Philipp Hahn @ 2017-10-19 14:34 UTC (permalink / raw)
  To: Ankur Arora, Konrad Rzeszutek Wilk; +Cc: linux-kernel, xen-devel

[-- Attachment #1: Type: text/plain, Size: 5514 bytes --]

Hello Ankur,

Am 05.10.2017 um 19:59 schrieb Ankur Arora:
> On 2017-10-05 06:20 AM, Konrad Rzeszutek Wilk wrote:
>> On Wed, Oct 04, 2017 at 08:26:27PM +0200, Philipp Hahn wrote:
...
>> Adding Ankur to this as I think he saw something similar.
>>
>> But in the meantime - do you see this with the latest version of Linux?
>>> with linux-4.9.52 running on Debian-Wheezy with Xen-4.1 I observed
>>> several stuck processes: Here is one (truncated) dump of the Linux
>>> kernel messages:
>>>
>>>>   [<ffffffff8160db6d>] ? __schedule+0x23d/0x6d0
>>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>>   [<ffffffff8160e032>] ? schedule+0x32/0x80
>>>>   [<ffffffff8161150c>] ? schedule_timeout+0x1ec/0x360
>>>>   [<ffffffff8130f277>] ? __blk_mq_run_hw_queue+0x327/0x3e0* see below
>>>>   [<ffffffff8101b7f1>] ? xen_clocksource_get_cycles+0x11/0x20
>>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>>   [<ffffffff8160d8b4>] ? io_schedule_timeout+0xb4/0x130
>>>>   [<ffffffff810bb6f7>] ? prepare_to_wait+0x57/0x80
>>>>   [<ffffffff8160e8b7>] ? bit_wait_io+0x17/0x60
>>>>   [<ffffffff8160e3ac>] ? __wait_on_bit+0x5c/0x90
>>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>>   [<ffffffff8160e50e>] ? out_of_line_wait_on_bit+0x7e/0xa0
>>>>   [<ffffffff810bba20>] ? autoremove_wake_function+0x40/0x40
>>>>   [<ffffffffc00abd18>] ? jbd2_journal_commit_transaction+0xd48/0x17e0 [jbd2]
>>>>   [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>>>>   [<ffffffff810e67fd>] ? try_to_del_timer_sync+0x4d/0x80
>>>>   [<ffffffffc00b099d>] ? kjournald2+0xdd/0x280 [jbd2]
>>>>   [<ffffffff810bb9e0>] ? wake_up_atomic_t+0x30/0x30
>>>>   [<ffffffffc00b08c0>] ? commit_timeout+0x10/0x10 [jbd2]
>>>>   [<ffffffff81097660>] ? kthread+0xf0/0x110
>>>>   [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>>>>   [<ffffffff81097570>] ? kthread_park+0x60/0x60
>>>>   [<ffffffff81612bb5>] ? ret_from_fork+0x25/0x30
> This looks like this race: https://patchwork.kernel.org/patch/9853443/

I built a new kernel with that patch applied on top of 4.9.56. We are
currently testing it, but it crashed again yesterday evening. Here is
the dmesg output:

> INFO: task systemd:1 blocked for more than 120 seconds.
>       Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201710161640
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> systemd         D    0     1      0 0x00000000
>  ffff8801f1797c00 0000000000000000 ffffffff81c0e540 ffff8801f4956040
>  ffff8801f5a187c0 ffffc90040c4f880 ffffffff8160ebbd ffffffff81186295
>  ffffffff81350b39 0000000002cebf80 ffffffff811863c8 ffff8801f4956040
> Call Trace:
>  [<ffffffff8160ebbd>] ? __schedule+0x23d/0x6d0
>  [<ffffffff81186295>] ? move_freepages+0x95/0xd0
>  [<ffffffff81350b39>] ? list_del+0x9/0x20
>  [<ffffffff811863c8>] ? __rmqueue+0x88/0x3e0
>  [<ffffffff8160f082>] ? schedule+0x32/0x80
>  [<ffffffff8161255c>] ? schedule_timeout+0x1ec/0x360
>  [<ffffffff81187530>] ? get_page_from_freelist+0x350/0xad0
>  [<ffffffff8160e904>] ? io_schedule_timeout+0xb4/0x130
>  [<ffffffff81364fd4>] ? __sbitmap_queue_get+0x24/0x90
>  [<ffffffff813114f9>] ? bt_get.isra.6+0x129/0x1c0
>  [<ffffffff81350b39>] ? list_del+0x9/0x20
>  [<ffffffff810bba00>] ? wake_up_atomic_t+0x30/0x30
>  [<ffffffff81311843>] ? blk_mq_get_tag+0x23/0x90
>  [<ffffffff8130d1aa>] ? __blk_mq_alloc_request+0x1a/0x220
>  [<ffffffff8130dffd>] ? blk_mq_map_request+0xcd/0x170
>  [<ffffffff8131016a>] ? blk_sq_make_request+0xca/0x4c0
>  [<ffffffff81303e1a>] ? generic_make_request_checks+0x22a/0x4f0
>  [<ffffffff81304761>] ? generic_make_request+0x121/0x2c0
>  [<ffffffff8117e823>] ? __add_to_page_cache_locked+0x183/0x230
>  [<ffffffff81304976>] ? submit_bio+0x76/0x150
>  [<ffffffff8117e964>] ? add_to_page_cache_lru+0x84/0xe0
>  [<ffffffffc017b9d9>] ? ext4_mpage_readpages+0x2b9/0x8b0 [ext4]
>  [<ffffffff811d94fa>] ? alloc_pages_current+0x8a/0x110
>  [<ffffffff8118e745>] ? __do_page_cache_readahead+0x195/0x240
>  [<ffffffff8117f627>] ? pagecache_get_page+0x27/0x2b0
>  [<ffffffff811818e6>] ? filemap_fault+0x276/0x590
>  [<ffffffffc0138491>] ? ext4_filemap_fault+0x31/0x50 [ext4]
>  [<ffffffff811b23a4>] ? __do_fault+0x84/0x190
>  [<ffffffff811b783e>] ? handle_mm_fault+0xede/0x1680
>  [<ffffffff8124d25e>] ? ep_poll+0x13e/0x360
>  [<ffffffff8105feda>] ? __do_page_fault+0x26a/0x500
>  [<ffffffff81205ba2>] ? SyS_read+0x52/0xc0
>  [<ffffffff81614d58>] ? page_fault+0x28/0x30

I haven't yet been able to get the address of the queue object to
inspect its state. (I'm out of time for today.)
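For the record (not part of the original mail), some knobs that can narrow
down what a blocked task is waiting on before the machine wedges completely.
This is a sketch assuming SysRq is enabled in the guest and a shell is still
responsive; all commands need root, and <pid> is a placeholder:

```shell
# Dump all blocked (D-state) tasks and their kernel stacks to dmesg
# on demand, instead of waiting for the hung-task watchdog to fire.
echo w > /proc/sysrq-trigger

# Make the hung-task watchdog report sooner (value in seconds).
sysctl -w kernel.hung_task_timeout_secs=60

# Inspect the kernel stack of one suspect task directly.
cat /proc/<pid>/stack
```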

> Can you dump the output of: cat /sys/block/$xen-frontend-device/mq/*/tags
> 
> If you've hit this bug, one or more of the MQs will be wedged: nr_free
> in one or more of the queues will be 0 and will not change.

As soon as the bug occurs, we can no longer access the VM via ssh or the
Xen (serial) console: the connection stalls after entering 2-3 characters.

I have a Xen crash-dump file of one such crash, but following
/sys/block/xvd?/mq/*/tags manually to get the kobject address using
"crash" (gdb) is very time-consuming. So far I have only done it once,
for xvda, but I have to do that for the other 15 block devices as well
to find the culprit.
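One possible shortcut inside the crash utility, instead of chasing kobjects
through sysfs by hand (a sketch; member names follow the 4.9 struct layout,
and the exact `dev -d` output columns vary by crash version):

```
crash> dev -d                          # gendisk and request_queue address per disk
crash> struct request_queue.queue_hw_ctx <request_queue-addr>
crash> struct blk_mq_hw_ctx.tags <hctx-addr>
crash> struct blk_mq_tags <tags-addr>  # bitmap_tags holds the free-tag state
```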

Philipp Hahn

[-- Attachment #2: blk_sq_make_request.txt --]
[-- Type: text/plain, Size: 9721 bytes --]

PID: 1      TASK: ffff8801f4956040  CPU: 0   COMMAND: "systemd"
 #0 [ffffc90040c4f800] __schedule at ffffffff8160ebbd
    kernel/sched/core.c: 2902
    ffffc90040c4f808: 0000000000000000 init_task
    ffffc90040c4f818: [ffff8801f4956040:task_struct] ffff8801f5a187c0
    ffffc90040c4f828: ffffc90040c4f880 __schedule+573
    ffffc90040c4f838: move_freepages+149 list_del+9
    ffffc90040c4f848: 0000000002cebf80 __rmqueue+136
    ffffc90040c4f858: [ffff8801f4956040:task_struct] 7fffffffffffffff
    ffffc90040c4f868: [ffff8801f129a850:kmalloc-192] 7fffffffffffffff
    ffffc90040c4f878: [ffff8801f0a9c040:request_queue] ffffc90040c4f918
    ffffc90040c4f888: schedule+50
 #1 [ffffc90040c4f888] schedule at ffffffff8160f082
    arch/x86/include/asm/bitops.h: 311
    ffffc90040c4f890: 0000000000000000 schedule_timeout+492
 #2 [ffffc90040c4f898] schedule_timeout at ffffffff8161255c
    kernel/time/timer.c: 1771
    ffffc90040c4f8a0: 0000000000000001 ffffea0004f18880
    ffffc90040c4f8b0: ffffea0004f18840 ffff8801ff01bb80
    ffffc90040c4f8c0: get_page_from_freelist+848 ffffc90040c4f8c8
    ffffc90040c4f8d0: ffffc90040c4f8c8 000059bd97399f43
    ffffc90040c4f8e0: 0000000000000000 78bd7181eeb35a00
    ffffc90040c4f8f0: 0000000000000000 7fffffffffffffff
    ffffc90040c4f900: [ffff8801f129a850:kmalloc-192] [ffff8801f16e4300:kmalloc-256]
    ffffc90040c4f910: [ffff8801f0a9c040:request_queue] ffff8801f5a187c0
    ffffc90040c4f920: io_schedule_timeout+180
 #3 [ffffc90040c4f920] io_schedule_timeout at ffffffff8160e904
    arch/x86/include/asm/current.h: 14
    ffffc90040c4f928: __sbitmap_queue_get+36 [ffff8801f120e988:kmalloc-512]
    ffffc90040c4f938: [ffff8801f1037000:kmalloc-1024] ffffc90040c4fa00
    ffffc90040c4f948: bt_get+297
 #4 [ffffc90040c4f948] bt_get at ffffffff813114f9
    block/blk-mq-tag.c: 143
    ffffc90040c4f950: list_del+9       0000000000000000
    ffffc90040c4f960: [ffff8801f4956040:task_struct] autoremove_wake_function
    ffffc90040c4f970: [ffff8801f120e990:kmalloc-512] [ffff8801f120e990:kmalloc-512]
    ffffc90040c4f980: 78bd7181eeb35a00 ffffc90040c4fa00
    ffffc90040c4f990: 0000000000000008 0000000000000000
    ffffc90040c4f9a0: ffffc90040c4fa00 blk_mq_get_tag+35
 #5 [ffffc90040c4f9a8] blk_mq_get_tag at ffffffff81311843
    block/blk-mq-tag.c: 165
    ffffc90040c4f9b0: 0000000000000001 __blk_mq_alloc_request+26
 #6 [ffffc90040c4f9b8] __blk_mq_alloc_request at ffffffff8130d1aa
    block/blk-mq.c: 197
    ffffc90040c4f9c0: 0000000000000001 ffffe8ffffc0a140
    ffffc90040c4f9d0: 0000000000000000 0000000000000008
    ffffc90040c4f9e0: [ffff8801f16e4300:kmalloc-256] blk_mq_map_request+205
 #7 [ffffc90040c4f9e8] blk_mq_map_request at ffffffff8130dffd
    block/blk-mq.c: 1209
    ffffc90040c4f9f0: [ffff8801f1037000:kmalloc-1024] ffffc90040c4fa90
    ffffc90040c4fa00: [ffff8801f0a9c040:request_queue] 0000000000000000
    ffffc90040c4fa10: ffffe8ffffc0a140 [ffff8801f1037000:kmalloc-1024]
    ffffc90040c4fa20: 78bd7181eeb35a00 0000000000000001
    ffffc90040c4fa30: 0000000000000000 00000000ffffffff
    ffffc90040c4fa40: [ffff8801f0a9c040:request_queue] 0000000000000001
    ffffc90040c4fa50: 0000000000000001 blk_sq_make_request+202
 #8 [ffffc90040c4fa58] blk_sq_make_request at ffffffff8131016a
    block/blk-mq.c: 1375
    ffffc90040c4fa60: [ffff8801f10379e0:kmalloc-1024] [ffff8801f0a9c040:request_queue]
    ffffc90040c4fa70: generic_make_request_checks+554 ffffc90040c4fad0
    ffffc90040c4fa80: [ffff8801f16e4300:kmalloc-256] 0000000088000102
    ffffc90040c4fa90: 78bd7181eeb35a00 [ffff8801f3fa6890:ext4_inode_cache]
    ffffc90040c4faa0: 78bd7181eeb35a00 [ffff8801f16e4300:kmalloc-256]
    ffffc90040c4fab0: ffffc90040c4fb30 00000000ffffffff
    ffffc90040c4fac0: [ffff8801f0a9c040:request_queue] 0000000000000001
    ffffc90040c4fad0: 0000000000000001 generic_make_request+289
 #9 [ffffc90040c4fad8] generic_make_request at ffffffff81304761
    block/blk-core.c: 2051
    ffffc90040c4fae0: 0000000000000000 0000000000000000
    ffffc90040c4faf0: 0000000000000000 0000000000000000
    ffffc90040c4fb00: __add_to_page_cache_locked+387 78bd7181eeb35a00
    ffffc90040c4fb10: 00000000028e2388 [ffff8801f16e4300:kmalloc-256]
    ffffc90040c4fb20: ffffea0004a376c0 0000000000000001
    ffffc90040c4fb30: 0000000000000020 submit_bio+118
#10 [ffffc90040c4fb38] submit_bio at ffffffff81304976
    block/blk-core.c: 2123
    ffffc90040c4fb40: ffffea0004a376c0 0000000000000000
    ffffc90040c4fb50: add_to_page_cache_lru+132 00028e2388000102
    ffffc90040c4fb60: 78bd7181eeb35a00 78bd7181eeb35a00
    ffffc90040c4fb70: 00000000000000f0 0000000000000013
    ffffc90040c4fb80: ext4_mpage_readpages+697
#11 [ffffc90040c4fb80] ext4_mpage_readpages at ffffffffc017b9d9 [ext4]
    ffffc90040c4fb88: 024200caffffffff [ffff8801f3fa6718:ext4_inode_cache]
    ffffc90040c4fb98: ffffea0006bd4400 ffffc90040c4fcd0
    ffffc90040c4fba8: [ffff8801f16e4300:kmalloc-256] 0000001500000015
    ffffc90040c4fbb8: [ffff8801f3fa6890:ext4_inode_cache] 000010000000000c
    ffffc90040c4fbc8: 0000000001652086 [ffff8801f4956040:task_struct]
    ffffc90040c4fbd8: [ffff8801f3eaaa40:bdev_cache] [ffff8801f4956040:task_struct]
    ffffc90040c4fbe8: [ffff8801f4956040:task_struct] [ffff8801f4956040:task_struct]
    ffffc90040c4fbf8: 0000000001652083 00000019000000e9
    ffffc90040c4fc08: 0000000000000020 0000000001652089
    ffffc90040c4fc18: preferred_node_policy 00000000024213ca
    ffffc90040c4fc28: 0000000000000000 0000000000000000
    ffffc90040c4fc38: [ffff8801f4956040:task_struct] [ffff8801f4956040:task_struct]
    ffffc90040c4fc48: alloc_pages_current+138 78bd7181eeb35a00
    ffffc90040c4fc58: 0000000000000020 ffffc90040c4fd50
    ffffc90040c4fc68: 0000000000000108 0000000000000020
    ffffc90040c4fc78: 0000000000000019 00000000000000e9
    ffffc90040c4fc88: __do_page_cache_readahead+405
#12 [ffffc90040c4fc88] __do_page_cache_readahead at ffffffff8118e745
    mm/readahead.c: 123
    ffffc90040c4fc90: [ffff8801f179d300:kmalloc-256] [ffff8801f3fa6890:ext4_inode_cache]
    ffffc90040c4fca0: 0000000000000018 024213ca81337a06
    ffffc90040c4fcb0: 0000000000000140 ffffea0006b34c80
    ffffc90040c4fcc0: [ffff8801f3fa6898:ext4_inode_cache] 0000000000000020
    ffffc90040c4fcd0: ffffea0006b34ca0 ffffea00069b9a60
    ffffc90040c4fce0: ffffc90040c4fce0 ffffc90040c4fce0
    ffffc90040c4fcf0: ffffc90040c4fcf0 ffffc90040c4fcf0
    ffffc90040c4fd00: ffffc90040c4fd00 ffffc90040c4fd00
    ffffc90040c4fd10: 0000000000000000 78bd7181eeb35a00
    ffffc90040c4fd20: pagecache_get_page+39 00000000000000f9
    ffffc90040c4fd30: [ffff8801f179d300:kmalloc-256] 0000000000000000
    ffffc90040c4fd40: ffffc90040c4fde8 [ffff8801f3fa6718:ext4_inode_cache]
    ffffc90040c4fd50: [ffff8801f179d3a0:kmalloc-256] filemap_fault+630
#13 [ffffc90040c4fd58] filemap_fault at ffffffff811818e6
    mm/filemap.c: 2133
    ffffc90040c4fd60: 0000000000000101 [ffff8801f3fa6890:ext4_inode_cache]
    ffffc90040c4fd70: ffffc90040c4fde8 [ffff8801f1794af0:vm_area_struct]
    ffffc90040c4fd80: [ffff8801f1794af0:vm_area_struct] [ffff8801f3fa66f0:ext4_inode_cache]
    ffffc90040c4fd90: [ffff8801f1794af0:vm_area_struct] ffffc90040c4fde8
    ffffc90040c4fda0: 0000564c1f3b7223 0000000000000000
    ffffc90040c4fdb0: 0000000000000000 ext4_filemap_fault+49
#14 [ffffc90040c4fdb8] ext4_filemap_fault at ffffffffc0138491 [ext4]
    ffffc90040c4fdc0: [ffff8801f1794af0:vm_area_struct] ffffc90040c4fe68
    ffffc90040c4fdd0: 0000000000000000 __do_fault+132
#15 [ffffc90040c4fdd8] __do_fault at ffffffff811b23a4
    mm/memory.c: 2828
    ffffc90040c4fde0: 0000000000000040 024200ca00000054
    ffffc90040c4fdf0: 00000000000000f9 0000564c1f3b7000
    ffffc90040c4fe00: 0000000000000000 0000000000000000
    ffffc90040c4fe10: ffff880000000d88 78bd7181eeb35a00
    ffffc90040c4fe20: [ffff8801f1794af0:vm_area_struct] 0000000000000040
    ffffc90040c4fe30: 00000000000000f9 handle_mm_fault+3806
#16 [ffffc90040c4fe38] handle_mm_fault at ffffffff811b783e
    mm/memory.c: 3155
    ffffc90040c4fe40: ffffc90040c4fe70 0000000000000000
    ffffc90040c4fe50: [ffff8801f4956040:task_struct] 00007ffca20e0e00
    ffffc90040c4fe60: ep_poll+318      [ffff8801f3d09db0:inode_cache]
    ffffc90040c4fe70: [ffff8801f1794af0:vm_area_struct] 0000564c1f3b7223
    ffffc90040c4fe80: 0000000000000054 ffff8801f17627c8
    ffffc90040c4fe90: 0000000000000000 ffffea0007cd03b0
    ffffc90040c4fea0: 0000000000000000 78bd7181eeb35a00
    ffffc90040c4feb0: 0000000000000004 0000564c1f3b7223
    ffffc90040c4fec0: [ffff8801f1797c00:mm_struct] ffffc90040c4ff58
    ffffc90040c4fed0: 0000000000000054 [ffff8801f4956040:task_struct]
    ffffc90040c4fee0: __do_page_fault+618
#17 [ffffc90040c4fee0] __do_page_fault at ffffffff8105feda
    arch/x86/mm/fault.c: 1398
    ffffc90040c4fee8: [ffff8801f1797c68:mm_struct] 00000000f1275c00
    ffffc90040c4fef8: [ffff8801f1794af0:vm_area_struct] 0000000000000000
    ffffc90040c4ff08: 0000000000000000 sys_read+82
    ffffc90040c4ff18: 78bd7181eeb35a00 0000000000000000
    ffffc90040c4ff28: 00007ffca20e0ab0 0000564c1f39dc13
    ffffc90040c4ff38: 0000000000000000 0000000000000000
    ffffc90040c4ff48: 00007ffca20e0c38 page_fault+40
#18 [ffffc90040c4ff50] page_fault at ffffffff81614d58
    arch/x86/entry/entry_64.S: 1026
    RIP: 00007fdaa3eb9dcc  RSP: 00007ffca20e04e0  RFLAGS: 00010286
    RAX: 0000000000000000  RBX: 00007ffca20e0ac0  RCX: ffffffffffffffff
    RDX: 0000000000000030  RSI: 0000564c1f39dc13  RDI: 0000564c1f3b7223
    RBP: 00007ffca20e0ab0   R8: 0000564c1f3b7223   R9: 00007fdaa3eba99a
    R10: 0000564c21226180  R11: 0000000000000001  R12: 0000564c1f39dc13
    R13: 0000000000000000  R14: 0000000000000000  R15: 00007ffca20e0c38
    ORIG_RAX: ffffffffffffffff  CS: e033  SS: e02b

[-- Attachment #3: blk_sq_make_request.s --]
[-- Type: text/plain, Size: 26172 bytes --]

/*
 * Single hardware queue variant. This will attempt to use any per-process
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
<blk_sq_make_request> callq  0000000000002fe5 <blk_sq_make_request+0x5>
<blk_sq_make_request+0x5> push   %r15
<blk_sq_make_request+0x7> push   %r14
<blk_sq_make_request+0x9> push   %r13
<blk_sq_make_request+0xb> push   %r12
<blk_sq_make_request+0xd> mov    %rdi,%r13
<blk_sq_make_request+0x10> push   %rbp
<blk_sq_make_request+0x11> push   %rbx
<blk_sq_make_request+0x12> mov    $0x1,%ebx
<blk_sq_make_request+0x17> sub    $0x48,%rsp
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
<blk_sq_make_request+0x1b> mov    0x14(%rsi),%eax
/*
 * Single hardware queue variant. This will attempt to use any per-process
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
<blk_sq_make_request+0x1e> mov    %gs:0x28,%rcx
<blk_sq_make_request+0x27> mov    %rcx,0x40(%rsp)
<blk_sq_make_request+0x2c> xor    %ecx,%ecx
<blk_sq_make_request+0x2e> mov    %rsi,0x20(%rsp)
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
<blk_sq_make_request+0x33> mov    %eax,%edx
<blk_sq_make_request+0x35> shr    $0x1d,%edx
<blk_sq_make_request+0x38> test   %edx,%edx
<blk_sq_make_request+0x3a> je     0000000000003024 <blk_sq_make_request+0x44>
<blk_sq_make_request+0x3c> mov    %eax,%ebx
<blk_sq_make_request+0x3e> shr    $0x3,%ebx
<blk_sq_make_request+0x41> and    $0x1,%ebx
	unsigned int request_count = 0;
	struct blk_map_ctx data;
	struct request *rq;
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);
<blk_sq_make_request+0x44> lea    0x20(%rsp),%rsi
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
	const int is_flush_fua = bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
<blk_sq_make_request+0x49> and    $0x300,%eax
	unsigned int request_count = 0;
	struct blk_map_ctx data;
	struct request *rq;
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);
<blk_sq_make_request+0x4e> mov    %r13,%rdi
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
	const int is_flush_fua = bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
<blk_sq_make_request+0x51> mov    %eax,%ebp
	struct blk_plug *plug;
	unsigned int request_count = 0;
<blk_sq_make_request+0x53> movl   $0x0,0x2c(%rsp)
	struct blk_map_ctx data;
	struct request *rq;
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);
<blk_sq_make_request+0x5b> callq  0000000000003040 <blk_sq_make_request+0x60>

	if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
<blk_sq_make_request+0x60> mov    0x20(%rsp),%rdi
<blk_sq_make_request+0x65> callq  000000000000304a <blk_sq_make_request+0x6a>
<blk_sq_make_request+0x6a> test   %al,%al
<blk_sq_make_request+0x6c> je     0000000000003060 <blk_sq_make_request+0x80>
<blk_sq_make_request+0x6e> mov    0x20(%rsp),%rdi
<blk_sq_make_request+0x73> callq  0000000000003058 <blk_sq_make_request+0x78>
<blk_sq_make_request+0x78> test   %eax,%eax
<blk_sq_make_request+0x7a> jne    0000000000003133 <blk_sq_make_request+0x153>
		bio_io_error(bio);
		return BLK_QC_T_NONE;
	}

	blk_queue_split(q, &bio, q->bio_split);
<blk_sq_make_request+0x80> mov    0x8b0(%r13),%rdx
<blk_sq_make_request+0x87> lea    0x20(%rsp),%rsi
<blk_sq_make_request+0x8c> mov    %r13,%rdi
<blk_sq_make_request+0x8f> callq  0000000000003074 <blk_sq_make_request+0x94>

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
<blk_sq_make_request+0x94> test   %ebp,%ebp
<blk_sq_make_request+0x96> jne    0000000000003170 <blk_sq_make_request+0x190>
}

static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
{
	return ((1UL << (nr & (BITS_PER_LONG-1))) &
		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
<blk_sq_make_request+0x9c> mov    0x510(%r13),%rax
<blk_sq_make_request+0xa3> test   $0x1,%ah
<blk_sq_make_request+0xa6> je     00000000000031f4 <blk_sq_make_request+0x214>
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);
<blk_sq_make_request+0xac> mov    %r13,%rdi
<blk_sq_make_request+0xaf> callq  0000000000003094 <blk_sq_make_request+0xb4>
<blk_sq_make_request+0xb4> mov    %eax,0x2c(%rsp)

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0xb8> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0xbd> lea    0x30(%rsp),%rdx
<blk_sq_make_request+0xc2> mov    %r13,%rdi
<blk_sq_make_request+0xc5> callq  0000000000000e70 <blk_mq_map_request>
	if (unlikely(!rq))
<blk_sq_make_request+0xca> test   %rax,%rax
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0xcd> mov    %rax,%rbp
	if (unlikely(!rq))
<blk_sq_make_request+0xd0> je     0000000000003210 <blk_sq_make_request+0x230>
		return BLK_QC_T_NONE;

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);
<blk_sq_make_request+0xd6> mov    0x30(%rsp),%rax
<blk_sq_make_request+0xdb> mov    %rax,(%rsp)
	return cookie != BLK_QC_T_NONE;
}

static inline blk_qc_t blk_tag_to_qc_t(unsigned int tag, unsigned int queue_num)
{
	return tag | (queue_num << BLK_QC_T_SHIFT);
<blk_sq_make_request+0xdf> mov    0x144(%rax),%eax
<blk_sq_make_request+0xe5> mov    %eax,0x8(%rsp)
<blk_sq_make_request+0xe9> mov    %eax,%r12d

DECLARE_PER_CPU(struct task_struct *, current_task);

static __always_inline struct task_struct *get_current(void)
{
	return this_cpu_read_stable(current_task);
<blk_sq_make_request+0xec> mov    %gs:0x0,%rax
	/*
	 * A task plug currently exists. Since this is completely lockless,
	 * utilize that to temporarily store requests until the task is
	 * either done or scheduled away.
	 */
	plug = current->plug;
<blk_sq_make_request+0xf5> mov    0x750(%rax),%r14
<blk_sq_make_request+0xfc> shl    $0x10,%r12d
<blk_sq_make_request+0x100> or     0x100(%rbp),%r12d
	if (plug) {
<blk_sq_make_request+0x107> test   %r14,%r14
<blk_sq_make_request+0x10a> je     000000000000327e <blk_sq_make_request+0x29e>
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x110> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x115> mov    %rbp,%rdi
<blk_sq_make_request+0x118> callq  00000000000030fd <blk_sq_make_request+0x11d>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x11d> mov    $0x1,%esi
<blk_sq_make_request+0x122> mov    %rbp,%rdi
<blk_sq_make_request+0x125> callq  000000000000310a <blk_sq_make_request+0x12a>
	 * either done or scheduled away.
	 */
	plug = current->plug;
	if (plug) {
		blk_mq_bio_to_request(rq, bio);
		if (!request_count)
<blk_sq_make_request+0x12a> mov    0x2c(%rsp),%eax
<blk_sq_make_request+0x12e> test   %eax,%eax
<blk_sq_make_request+0x130> je     000000000000322f <blk_sq_make_request+0x24f>
			trace_block_plug(q);

		blk_mq_put_ctx(data.ctx);

		if (request_count >= BLK_MAX_REQUEST_COUNT) {
<blk_sq_make_request+0x136> cmpl   $0xf,0x2c(%rsp)
<blk_sq_make_request+0x13b> ja     000000000000321b <blk_sq_make_request+0x23b>
<blk_sq_make_request+0x141> mov    0x18(%r14),%rsi
			blk_flush_plug_list(plug, false);
			trace_block_plug(q);
		}

		list_add_tail(&rq->queuelist, &plug->mq_list);
<blk_sq_make_request+0x145> lea    0x10(%r14),%rdx
<blk_sq_make_request+0x149> mov    %rbp,%rdi
<blk_sq_make_request+0x14c> callq  0000000000003131 <blk_sq_make_request+0x151>
		return cookie;
<blk_sq_make_request+0x151> jmp    000000000000314a <blk_sq_make_request+0x16a>
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);

	if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
		bio_io_error(bio);
<blk_sq_make_request+0x153> mov    0x20(%rsp),%rdi
		return BLK_QC_T_NONE;
<blk_sq_make_request+0x158> mov    $0xffffffff,%r12d

extern void bio_endio(struct bio *);

static inline void bio_io_error(struct bio *bio)
{
	bio->bi_error = -EIO;
<blk_sq_make_request+0x15e> movl   $0xfffffffb,0x10(%rdi)
	bio_endio(bio);
<blk_sq_make_request+0x165> callq  000000000000314a <blk_sq_make_request+0x16a>
		blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
	}

	blk_mq_put_ctx(data.ctx);
	return cookie;
}
<blk_sq_make_request+0x16a> mov    0x40(%rsp),%rcx
<blk_sq_make_request+0x16f> xor    %gs:0x28,%rcx
<blk_sq_make_request+0x178> mov    %r12d,%eax
<blk_sq_make_request+0x17b> jne    000000000000348e <blk_sq_make_request+0x4ae>
<blk_sq_make_request+0x181> add    $0x48,%rsp
<blk_sq_make_request+0x185> pop    %rbx
<blk_sq_make_request+0x186> pop    %rbp
<blk_sq_make_request+0x187> pop    %r12
<blk_sq_make_request+0x189> pop    %r13
<blk_sq_make_request+0x18b> pop    %r14
<blk_sq_make_request+0x18d> pop    %r15
<blk_sq_make_request+0x18f> retq   

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);
<blk_sq_make_request+0x190> mov    %r13,%rdi
<blk_sq_make_request+0x193> callq  0000000000003178 <blk_sq_make_request+0x198>

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0x198> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x19d> lea    0x30(%rsp),%rdx
<blk_sq_make_request+0x1a2> mov    %r13,%rdi

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);
<blk_sq_make_request+0x1a5> mov    %eax,0x2c(%rsp)

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0x1a9> callq  0000000000000e70 <blk_mq_map_request>
	if (unlikely(!rq))
<blk_sq_make_request+0x1ae> test   %rax,%rax
<blk_sq_make_request+0x1b1> je     0000000000003210 <blk_sq_make_request+0x230>
		return BLK_QC_T_NONE;

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);
<blk_sq_make_request+0x1b3> mov    0x30(%rsp),%rdx
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x1b8> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x1bd> mov    %rax,%rdi
<blk_sq_make_request+0x1c0> mov    %rax,(%rsp)
<blk_sq_make_request+0x1c4> mov    0x144(%rdx),%r12d
<blk_sq_make_request+0x1cb> shl    $0x10,%r12d
<blk_sq_make_request+0x1cf> or     0x100(%rax),%r12d
<blk_sq_make_request+0x1d6> callq  00000000000031bb <blk_sq_make_request+0x1db>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x1db> mov    (%rsp),%rax
<blk_sq_make_request+0x1df> mov    $0x1,%esi
<blk_sq_make_request+0x1e4> mov    %rax,%rdi
<blk_sq_make_request+0x1e7> callq  00000000000031cc <blk_sq_make_request+0x1ec>

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);

	if (unlikely(is_flush_fua)) {
		blk_mq_bio_to_request(rq, bio);
		blk_insert_flush(rq);
<blk_sq_make_request+0x1ec> mov    (%rsp),%rax
<blk_sq_make_request+0x1f0> mov    %rax,%rdi
<blk_sq_make_request+0x1f3> callq  00000000000031d8 <blk_sq_make_request+0x1f8>
	if (unlikely(!rq))
		return BLK_QC_T_NONE;

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);

	if (unlikely(is_flush_fua)) {
<blk_sq_make_request+0x1f8> mov    $0x1,%esi
		 * an ASYNC request, just ensure that we run it later on. The
		 * latter allows for merging opportunities and more efficient
		 * dispatching.
		 */
run_queue:
		blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
<blk_sq_make_request+0x1fd> xor    $0x1,%ebx
<blk_sq_make_request+0x200> mov    0x30(%rsp),%rdi
<blk_sq_make_request+0x205> or     %esi,%ebx
<blk_sq_make_request+0x207> movzbl %bl,%esi
<blk_sq_make_request+0x20a> callq  00000000000031ef <blk_sq_make_request+0x20f>
	}

	blk_mq_put_ctx(data.ctx);
	return cookie;
<blk_sq_make_request+0x20f> jmpq   000000000000314a <blk_sq_make_request+0x16a>
	}

	blk_queue_split(q, &bio, q->bio_split);

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
<blk_sq_make_request+0x214> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x219> lea    0x2c(%rsp),%rdx
<blk_sq_make_request+0x21e> xor    %ecx,%ecx
<blk_sq_make_request+0x220> mov    %r13,%rdi
<blk_sq_make_request+0x223> callq  0000000000003208 <blk_sq_make_request+0x228>
<blk_sq_make_request+0x228> test   %al,%al
<blk_sq_make_request+0x22a> je     0000000000003098 <blk_sq_make_request+0xb8>
			return BLK_QC_T_NONE;
<blk_sq_make_request+0x230> mov    $0xffffffff,%r12d
<blk_sq_make_request+0x236> jmpq   000000000000314a <blk_sq_make_request+0x16a>
			trace_block_plug(q);

		blk_mq_put_ctx(data.ctx);

		if (request_count >= BLK_MAX_REQUEST_COUNT) {
			blk_flush_plug_list(plug, false);
<blk_sq_make_request+0x23b> xor    %esi,%esi
<blk_sq_make_request+0x23d> mov    %r14,%rdi
<blk_sq_make_request+0x240> callq  0000000000003225 <blk_sq_make_request+0x245>
<blk_sq_make_request+0x245> nopl   0x0(%rax,%rax,1)
#include <linux/stringify.h>
#include <linux/types.h>

static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
{
	asm_volatile_goto("1:"
<blk_sq_make_request+0x24a> jmpq   0000000000003121 <blk_sq_make_request+0x141>
<blk_sq_make_request+0x24f> nopl   0x0(%rax,%rax,1)
<blk_sq_make_request+0x254> jmpq   0000000000003116 <blk_sq_make_request+0x136>
 *
 * Plug the request queue @q.  Do not allow block operation requests
 * to be sent to the device driver. Instead, accumulate requests in
 * the queue to improve throughput performance of the block device.
 */
TRACE_EVENT(block_plug,
<blk_sq_make_request+0x259> mov    %gs:0x0(%rip),%eax        # 0000000000003240 <blk_sq_make_request+0x260>
<blk_sq_make_request+0x260> mov    %eax,%eax

static __always_inline bool variable_test_bit(long nr, volatile const unsigned long *addr)
{
	bool oldbit;

	asm volatile("bt %2,%1\n\t"
<blk_sq_make_request+0x262> bt     %rax,0x0(%rip)        # 000000000000324a <blk_sq_make_request+0x26a>
<blk_sq_make_request+0x26a> setb   %al
<blk_sq_make_request+0x26d> test   %al,%al
<blk_sq_make_request+0x26f> je     0000000000003121 <blk_sq_make_request+0x141>
<blk_sq_make_request+0x275> mov    0x0(%rip),%rbx        # 000000000000325c <blk_sq_make_request+0x27c>
<blk_sq_make_request+0x27c> test   %rbx,%rbx
<blk_sq_make_request+0x27f> je     0000000000003279 <blk_sq_make_request+0x299>
<blk_sq_make_request+0x281> mov    (%rbx),%rax
<blk_sq_make_request+0x284> mov    0x8(%rbx),%rdi
<blk_sq_make_request+0x288> add    $0x18,%rbx
<blk_sq_make_request+0x28c> mov    %r13,%rsi
<blk_sq_make_request+0x28f> callq  *%rax
<blk_sq_make_request+0x291> mov    (%rbx),%rax
<blk_sq_make_request+0x294> test   %rax,%rax
<blk_sq_make_request+0x297> jne    0000000000003264 <blk_sq_make_request+0x284>
<blk_sq_make_request+0x299> jmpq   0000000000003121 <blk_sq_make_request+0x141>

		list_add_tail(&rq->queuelist, &plug->mq_list);
		return cookie;
	}

	if (!blk_mq_merge_queue_io(data.hctx, data.ctx, rq, bio)) {
<blk_sq_make_request+0x29e> mov    0x38(%rsp),%rax
<blk_sq_make_request+0x2a3> mov    0x20(%rsp),%r13
<blk_sq_make_request+0x2a8> mov    %rax,0x8(%rsp)
	blk_account_io_start(rq, 1);
}

static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
{
	return (hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
<blk_sq_make_request+0x2ad> mov    (%rsp),%rax
<blk_sq_make_request+0x2b1> testb  $0x1,0xa8(%rax)
<blk_sq_make_request+0x2b8> je     0000000000003326 <blk_sq_make_request+0x346>
		!blk_queue_nomerges(hctx->queue);
<blk_sq_make_request+0x2be> mov    0xb0(%rax),%rax
<blk_sq_make_request+0x2c5> mov    %rax,0x18(%rsp)
}

static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
{
	return ((1UL << (nr & (BITS_PER_LONG-1))) &
		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
<blk_sq_make_request+0x2ca> mov    0x510(%rax),%rax
	blk_account_io_start(rq, 1);
}

static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
{
	return (hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
<blk_sq_make_request+0x2d1> test   $0x1,%ah
<blk_sq_make_request+0x2d4> jne    0000000000003326 <blk_sq_make_request+0x346>
	return true;
}

static inline bool bio_mergeable(struct bio *bio)
{
	if (bio->bi_opf & REQ_NOMERGE_FLAGS)
<blk_sq_make_request+0x2d6> testl  $0x100e300,0x14(%r13)
<blk_sq_make_request+0x2de> jne    0000000000003326 <blk_sq_make_request+0x346>
<blk_sq_make_request+0x2e0> mov    0x8(%rsp),%r15
<blk_sq_make_request+0x2e5> mov    %r15,%rdi
<blk_sq_make_request+0x2e8> callq  00000000000032cd <blk_sq_make_request+0x2ed>
				 struct blk_mq_ctx *ctx, struct bio *bio)
{
	struct request *rq;
	int checked = 8;

	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
<blk_sq_make_request+0x2ed> mov    0x10(%r15),%r14
<blk_sq_make_request+0x2f1> mov    %r15,%rax
<blk_sq_make_request+0x2f4> mov    $0x8,%r15d
<blk_sq_make_request+0x2fa> add    $0x8,%rax
<blk_sq_make_request+0x2fe> mov    %rax,0x10(%rsp)
<blk_sq_make_request+0x303> cmp    %rax,%r14
<blk_sq_make_request+0x306> je     000000000000330c <blk_sq_make_request+0x32c>
		int el_ret;

		if (!checked--)
			break;

		if (!blk_rq_merge_ok(rq, bio))
<blk_sq_make_request+0x308> mov    %r13,%rsi
<blk_sq_make_request+0x30b> mov    %r14,%rdi
<blk_sq_make_request+0x30e> callq  00000000000032f3 <blk_sq_make_request+0x313>
<blk_sq_make_request+0x313> test   %al,%al
<blk_sq_make_request+0x315> jne    0000000000003419 <blk_sq_make_request+0x439>
				 struct blk_mq_ctx *ctx, struct bio *bio)
{
	struct request *rq;
	int checked = 8;

	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
<blk_sq_make_request+0x31b> mov    0x8(%r14),%r14
<blk_sq_make_request+0x31f> cmp    0x10(%rsp),%r14
<blk_sq_make_request+0x324> je     000000000000330c <blk_sq_make_request+0x32c>
		int el_ret;

		if (!checked--)
<blk_sq_make_request+0x326> sub    $0x1,%r15d
<blk_sq_make_request+0x32a> jne    00000000000032e8 <blk_sq_make_request+0x308>
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x32c> mov    %r13,%rsi
<blk_sq_make_request+0x32f> mov    %rbp,%rdi
<blk_sq_make_request+0x332> callq  0000000000003317 <blk_sq_make_request+0x337>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x337> mov    $0x1,%esi
<blk_sq_make_request+0x33c> mov    %rbp,%rdi
<blk_sq_make_request+0x33f> callq  0000000000003324 <blk_sq_make_request+0x344>
<blk_sq_make_request+0x344> jmp    0000000000003348 <blk_sq_make_request+0x368>
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x346> mov    %r13,%rsi
<blk_sq_make_request+0x349> mov    %rbp,%rdi
<blk_sq_make_request+0x34c> callq  0000000000003331 <blk_sq_make_request+0x351>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x351> mov    %rbp,%rdi
<blk_sq_make_request+0x354> mov    $0x1,%esi
<blk_sq_make_request+0x359> callq  000000000000333e <blk_sq_make_request+0x35e>
<blk_sq_make_request+0x35e> mov    0x8(%rsp),%rdi
<blk_sq_make_request+0x363> callq  0000000000003348 <blk_sq_make_request+0x368>
					    struct request *rq,
					    bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;

	trace_block_rq_insert(hctx->queue, rq);
<blk_sq_make_request+0x368> mov    (%rsp),%rax
}

static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
				    struct request *rq, bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;
<blk_sq_make_request+0x36c> mov    0x38(%rbp),%r13
					    struct request *rq,
					    bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;

	trace_block_rq_insert(hctx->queue, rq);
<blk_sq_make_request+0x370> mov    0xb0(%rax),%r14
<blk_sq_make_request+0x377> nopl   0x0(%rax,%rax,1)
<blk_sq_make_request+0x37c> mov    0x10(%r13),%rsi

	if (at_head)
		list_add(&rq->queuelist, &ctx->rq_list);
	else
		list_add_tail(&rq->queuelist, &ctx->rq_list);
<blk_sq_make_request+0x380> lea    0x8(%r13),%rdx
<blk_sq_make_request+0x384> mov    %rbp,%rdi
<blk_sq_make_request+0x387> callq  000000000000336c <blk_sq_make_request+0x38c>
				    struct request *rq, bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;

	__blk_mq_insert_req_list(hctx, rq, at_head);
	blk_mq_hctx_mark_pending(hctx, ctx);
<blk_sq_make_request+0x38c> mov    (%rsp),%rdi
<blk_sq_make_request+0x390> lea    0x44(%r13),%rsi
<blk_sq_make_request+0x394> callq  0000000000000ab0 <blk_mq_hctx_mark_pending.isra.25>
<blk_sq_make_request+0x399> mov    0x8(%rsp),%rdi
<blk_sq_make_request+0x39e> callq  *0x0

static inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
	spin_release(&lock->dep_map, 1, _RET_IP_);
	do_raw_spin_unlock(lock);
	preempt_enable();
<blk_sq_make_request+0x3a5> xor    %esi,%esi
<blk_sq_make_request+0x3a7> jmpq   00000000000031dd <blk_sq_make_request+0x1fd>
<blk_sq_make_request+0x3ac> mov    %gs:0x0(%rip),%eax        # 0000000000003393 <blk_sq_make_request+0x3b3>
<blk_sq_make_request+0x3b3> mov    %eax,%eax

static __always_inline bool variable_test_bit(long nr, volatile const unsigned long *addr)
{
	bool oldbit;

	asm volatile("bt %2,%1\n\t"
<blk_sq_make_request+0x3b5> bt     %rax,0x0(%rip)        # 000000000000339d <blk_sq_make_request+0x3bd>
<blk_sq_make_request+0x3bd> setb   %al
<blk_sq_make_request+0x3c0> test   %al,%al
<blk_sq_make_request+0x3c2> je     0000000000003116 <blk_sq_make_request+0x136>
<blk_sq_make_request+0x3c8> mov    0x0(%rip),%rbx        # 00000000000033af <blk_sq_make_request+0x3cf>
<blk_sq_make_request+0x3cf> test   %rbx,%rbx
<blk_sq_make_request+0x3d2> je     00000000000033cc <blk_sq_make_request+0x3ec>
<blk_sq_make_request+0x3d4> mov    (%rbx),%rax
<blk_sq_make_request+0x3d7> mov    0x8(%rbx),%rdi
<blk_sq_make_request+0x3db> add    $0x18,%rbx
<blk_sq_make_request+0x3df> mov    %r13,%rsi
<blk_sq_make_request+0x3e2> callq  *%rax
<blk_sq_make_request+0x3e4> mov    (%rbx),%rax
<blk_sq_make_request+0x3e7> test   %rax,%rax
<blk_sq_make_request+0x3ea> jne    00000000000033b7 <blk_sq_make_request+0x3d7>
<blk_sq_make_request+0x3ec> jmpq   0000000000003116 <blk_sq_make_request+0x136>
 * Called immediately before block operation request @rq is inserted
 * into queue @q.  The fields in the operation request @rq struct can
 * be examined to determine which device and sectors the pending
 * operation would access.
 */
DEFINE_EVENT(block_rq, block_rq_insert,
<blk_sq_make_request+0x3f1> mov    %gs:0x0(%rip),%eax        # 00000000000033d8 <blk_sq_make_request+0x3f8>
<blk_sq_make_request+0x3f8> mov    %eax,%eax
<blk_sq_make_request+0x3fa> bt     %rax,0x0(%rip)        # 00000000000033e2 <blk_sq_make_request+0x402>
<blk_sq_make_request+0x402> setb   %al
<blk_sq_make_request+0x405> test   %al,%al
<blk_sq_make_request+0x407> je     000000000000335c <blk_sq_make_request+0x37c>
<blk_sq_make_request+0x40d> mov    0x0(%rip),%r15        # 00000000000033f4 <blk_sq_make_request+0x414>
<blk_sq_make_request+0x414> test   %r15,%r15
<blk_sq_make_request+0x417> je     0000000000003414 <blk_sq_make_request+0x434>
<blk_sq_make_request+0x419> mov    (%r15),%rcx
<blk_sq_make_request+0x41c> mov    0x8(%r15),%rdi
<blk_sq_make_request+0x420> add    $0x18,%r15
<blk_sq_make_request+0x424> mov    %rbp,%rdx
<blk_sq_make_request+0x427> mov    %r14,%rsi
<blk_sq_make_request+0x42a> callq  *%rcx
<blk_sq_make_request+0x42c> mov    (%r15),%rcx
<blk_sq_make_request+0x42f> test   %rcx,%rcx
<blk_sq_make_request+0x432> jne    00000000000033fc <blk_sq_make_request+0x41c>
<blk_sq_make_request+0x434> jmpq   000000000000335c <blk_sq_make_request+0x37c>
			break;

		if (!blk_rq_merge_ok(rq, bio))
			continue;

		el_ret = blk_try_merge(rq, bio);
<blk_sq_make_request+0x439> mov    %r13,%rsi
<blk_sq_make_request+0x43c> mov    %r14,%rdi
<blk_sq_make_request+0x43f> callq  0000000000003424 <blk_sq_make_request+0x444>
		if (el_ret == ELEVATOR_BACK_MERGE) {
<blk_sq_make_request+0x444> cmp    $0x2,%eax
<blk_sq_make_request+0x447> je     0000000000003474 <blk_sq_make_request+0x494>
			if (bio_attempt_back_merge(q, rq, bio)) {
				ctx->rq_merged++;
				return true;
			}
			break;
		} else if (el_ret == ELEVATOR_FRONT_MERGE) {
<blk_sq_make_request+0x449> cmp    $0x1,%eax
<blk_sq_make_request+0x44c> jne    00000000000032fb <blk_sq_make_request+0x31b>
			if (bio_attempt_front_merge(q, rq, bio)) {
<blk_sq_make_request+0x452> mov    0x18(%rsp),%rdi
<blk_sq_make_request+0x457> mov    %r13,%rdx
<blk_sq_make_request+0x45a> mov    %r14,%rsi
<blk_sq_make_request+0x45d> callq  0000000000003442 <blk_sq_make_request+0x462>
<blk_sq_make_request+0x462> test   %al,%al
<blk_sq_make_request+0x464> je     000000000000330c <blk_sq_make_request+0x32c>
				ctx->rq_merged++;
<blk_sq_make_request+0x46a> mov    0x8(%rsp),%rax
<blk_sq_make_request+0x46f> mov    0x8(%rsp),%rdi
<blk_sq_make_request+0x474> addq   $0x1,0x58(%rax)
<blk_sq_make_request+0x479> callq  *0x0
			blk_mq_bio_to_request(rq, bio);
			goto insert_rq;
		}

		spin_unlock(&ctx->lock);
		__blk_mq_free_request(hctx, ctx, rq);
<blk_sq_make_request+0x480> mov    %rdi,%rsi
<blk_sq_make_request+0x483> mov    (%rsp),%rdi
<blk_sq_make_request+0x487> mov    %rbp,%rdx
<blk_sq_make_request+0x48a> callq  00000000000002f0 <__blk_mq_free_request>
	return __blk_mq_get_ctx(q, get_cpu());
}

static inline void blk_mq_put_ctx(struct blk_mq_ctx *ctx)
{
	put_cpu();
<blk_sq_make_request+0x48f> jmpq   000000000000314a <blk_sq_make_request+0x16a>
		if (!blk_rq_merge_ok(rq, bio))
			continue;

		el_ret = blk_try_merge(rq, bio);
		if (el_ret == ELEVATOR_BACK_MERGE) {
			if (bio_attempt_back_merge(q, rq, bio)) {
<blk_sq_make_request+0x494> mov    0x18(%rsp),%rdi
<blk_sq_make_request+0x499> mov    %r13,%rdx
<blk_sq_make_request+0x49c> mov    %r14,%rsi
<blk_sq_make_request+0x49f> callq  0000000000003484 <blk_sq_make_request+0x4a4>
<blk_sq_make_request+0x4a4> test   %al,%al
<blk_sq_make_request+0x4a6> je     000000000000330c <blk_sq_make_request+0x32c>
<blk_sq_make_request+0x4ac> jmp    000000000000344a <blk_sq_make_request+0x46a>
		blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
	}

	blk_mq_put_ctx(data.ctx);
	return cookie;
}
<blk_sq_make_request+0x4ae> callq  0000000000003493 <blk_sq_make_request+0x4b3>
<blk_sq_make_request+0x4b3> data16 data16 data16 nopw %cs:0x0(%rax,%rax,1)


* Re: 4.9.52: INFO: task X blocked for more than 300 seconds.
  2017-10-05 17:59   ` Ankur Arora
@ 2017-10-19 14:34     ` Philipp Hahn
  2017-10-19 14:34     ` [Xen-devel] " Philipp Hahn
  1 sibling, 0 replies; 8+ messages in thread
From: Philipp Hahn @ 2017-10-19 14:34 UTC (permalink / raw)
  To: Ankur Arora, Konrad Rzeszutek Wilk; +Cc: xen-devel, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 5514 bytes --]

Hello Ankur,

On 05.10.2017 at 19:59, Ankur Arora wrote:
> On 2017-10-05 06:20 AM, Konrad Rzeszutek Wilk wrote:
>> On Wed, Oct 04, 2017 at 08:26:27PM +0200, Philipp Hahn wrote:
...
>> Adding Ankur to this as I think he saw something similar.
>>
>> But in the meantime - do you see this with the latest version of Linux?
>>> with linux-4.9.52 running on Debian-Wheezy with Xen-4.1 I observed
>>> several stuck processes: Here is one (truncated) dump of the Linux
>>> kernel messages:
>>>
>>>>   [<ffffffff8160db6d>] ? __schedule+0x23d/0x6d0
>>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>>   [<ffffffff8160e032>] ? schedule+0x32/0x80
>>>>   [<ffffffff8161150c>] ? schedule_timeout+0x1ec/0x360
>>>>   [<ffffffff8130f277>] ? __blk_mq_run_hw_queue+0x327/0x3e0* see below
>>>>   [<ffffffff8101b7f1>] ? xen_clocksource_get_cycles+0x11/0x20
>>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>>   [<ffffffff8160d8b4>] ? io_schedule_timeout+0xb4/0x130
>>>>   [<ffffffff810bb6f7>] ? prepare_to_wait+0x57/0x80
>>>>   [<ffffffff8160e8b7>] ? bit_wait_io+0x17/0x60
>>>>   [<ffffffff8160e3ac>] ? __wait_on_bit+0x5c/0x90
>>>>   [<ffffffff8160e8a0>] ? bit_wait_timeout+0x90/0x90
>>>>   [<ffffffff8160e50e>] ? out_of_line_wait_on_bit+0x7e/0xa0
>>>>   [<ffffffff810bba20>] ? autoremove_wake_function+0x40/0x40
>>>>   [<ffffffffc00abd18>] ?
>>>> jbd2_journal_commit_transaction+0xd48/0x17e0 [jbd2]
>>>>   [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>>>>   [<ffffffff810e67fd>] ? try_to_del_timer_sync+0x4d/0x80
>>>>   [<ffffffffc00b099d>] ? kjournald2+0xdd/0x280 [jbd2]
>>>>   [<ffffffff810bb9e0>] ? wake_up_atomic_t+0x30/0x30
>>>>   [<ffffffffc00b08c0>] ? commit_timeout+0x10/0x10 [jbd2]
>>>>   [<ffffffff81097660>] ? kthread+0xf0/0x110
>>>>   [<ffffffff810247d9>] ? __switch_to+0x2c9/0x720
>>>>   [<ffffffff81097570>] ? kthread_park+0x60/0x60
>>>>   [<ffffffff81612bb5>] ? ret_from_fork+0x25/0x30
> This looks like this race: https://patchwork.kernel.org/patch/9853443/

I built a new kernel, for which I picked that patch on top of 4.9.56. We
are currently testing that, but it crashed again yesterday evening. Here
is the dmesg output:

> INFO: task systemd:1 blocked for more than 120 seconds.
>       Not tainted 4.9.0-ucs105-amd64 #1 Debian 4.9.30-2A~4.2.0.201710161640
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> systemd         D    0     1      0 0x00000000
>  ffff8801f1797c00 0000000000000000 ffffffff81c0e540 ffff8801f4956040
>  ffff8801f5a187c0 ffffc90040c4f880 ffffffff8160ebbd ffffffff81186295
>  ffffffff81350b39 0000000002cebf80 ffffffff811863c8 ffff8801f4956040
> Call Trace:
>  [<ffffffff8160ebbd>] ? __schedule+0x23d/0x6d0
>  [<ffffffff81186295>] ? move_freepages+0x95/0xd0
>  [<ffffffff81350b39>] ? list_del+0x9/0x20
>  [<ffffffff811863c8>] ? __rmqueue+0x88/0x3e0
>  [<ffffffff8160f082>] ? schedule+0x32/0x80
>  [<ffffffff8161255c>] ? schedule_timeout+0x1ec/0x360
>  [<ffffffff81187530>] ? get_page_from_freelist+0x350/0xad0
>  [<ffffffff8160e904>] ? io_schedule_timeout+0xb4/0x130
>  [<ffffffff81364fd4>] ? __sbitmap_queue_get+0x24/0x90
>  [<ffffffff813114f9>] ? bt_get.isra.6+0x129/0x1c0
>  [<ffffffff81350b39>] ? list_del+0x9/0x20
>  [<ffffffff810bba00>] ? wake_up_atomic_t+0x30/0x30
>  [<ffffffff81311843>] ? blk_mq_get_tag+0x23/0x90
>  [<ffffffff8130d1aa>] ? __blk_mq_alloc_request+0x1a/0x220
>  [<ffffffff8130dffd>] ? blk_mq_map_request+0xcd/0x170
>  [<ffffffff8131016a>] ? blk_sq_make_request+0xca/0x4c0
>  [<ffffffff81303e1a>] ? generic_make_request_checks+0x22a/0x4f0
>  [<ffffffff81304761>] ? generic_make_request+0x121/0x2c0
>  [<ffffffff8117e823>] ? __add_to_page_cache_locked+0x183/0x230
>  [<ffffffff81304976>] ? submit_bio+0x76/0x150
>  [<ffffffff8117e964>] ? add_to_page_cache_lru+0x84/0xe0
>  [<ffffffffc017b9d9>] ? ext4_mpage_readpages+0x2b9/0x8b0 [ext4]
>  [<ffffffff811d94fa>] ? alloc_pages_current+0x8a/0x110
>  [<ffffffff8118e745>] ? __do_page_cache_readahead+0x195/0x240
>  [<ffffffff8117f627>] ? pagecache_get_page+0x27/0x2b0
>  [<ffffffff811818e6>] ? filemap_fault+0x276/0x590
>  [<ffffffffc0138491>] ? ext4_filemap_fault+0x31/0x50 [ext4]
>  [<ffffffff811b23a4>] ? __do_fault+0x84/0x190
>  [<ffffffff811b783e>] ? handle_mm_fault+0xede/0x1680
>  [<ffffffff8124d25e>] ? ep_poll+0x13e/0x360
>  [<ffffffff8105feda>] ? __do_page_fault+0x26a/0x500
>  [<ffffffff81205ba2>] ? SyS_read+0x52/0xc0
>  [<ffffffff81614d58>] ? page_fault+0x28/0x30

I have not yet been able to determine the address of the queue object in
order to inspect its state. (I ran out of time for today.)

> Can you dump the output of: cat /sys/block/$xen-frontend-device/mq/*/tags
> 
> If you've hit this bug, one or more of the MQs would be wedged and
> the nr_free in one or more of the queues would be 0 and will not
> change.

As soon as the bug occurs, we can no longer access the VM via ssh or the
Xen (serial) console: the connection stalls after entering 2-3 characters.

I have a Xen crash-dump file from one such hang, but manually following
/sys/block/xvd?/mq/*/tags to the kobject address using
"crash" (gdb) is very time consuming. So far I have only done it once,
for xvda, but I would have to do the same for the other 15 block devices
to find the culprit.
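For a system that is still reachable, Ankur's check above (nr_free stuck
at 0 in one of the mq/*/tags files) could be automated with a small
script like the following sketch. It is only an illustration: the
`nr_free=N` text format is an assumption based on the sysfs output of
block/blk-mq-tag.c in v4.9, and the `xvd*` device glob assumes Xen
frontend block devices.

```python
import glob
import re
import time

def parse_nr_free(text):
    """Extract all nr_free counters from the text of a blk-mq 'tags'
    sysfs file. Assumes the v4.9 'nr_free=N, nr_reserved=M' format."""
    return [int(m) for m in re.findall(r'nr_free=(\d+)', text)]

def find_wedged(paths, interval=1.0):
    """Return the tags files whose counters include nr_free=0 and do
    not change between two samples taken `interval` seconds apart."""
    def snapshot():
        counts = {}
        for p in paths:
            with open(p) as f:
                counts[p] = parse_nr_free(f.read())
        return counts
    first = snapshot()
    time.sleep(interval)
    second = snapshot()
    return [p for p in paths
            if 0 in first[p] and first[p] == second[p]]

if __name__ == '__main__':
    # Hypothetical device glob for Xen virtual block devices.
    tags = glob.glob('/sys/block/xvd*/mq/*/tags')
    for p in find_wedged(tags):
        print('possibly wedged queue:', p)
```

Run periodically from cron or a loop before the hang, since ssh access
is lost once the bug triggers.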

Philipp Hahn
-- 
Philipp Hahn
Open Source Software Engineer

Univention GmbH
be open.
Mary-Somerville-Str. 1
D-28359 Bremen
Tel.: +49 421 22232-0
Fax : +49 421 22232-99
hahn@univention.de

http://www.univention.de/
Geschäftsführer: Peter H. Ganten
HRB 20755 Amtsgericht Bremen
Steuer-Nr.: 71-597-0287

[-- Attachment #2: blk_sq_make_request.txt --]
[-- Type: text/plain, Size: 9721 bytes --]

PID: 1      TASK: ffff8801f4956040  CPU: 0   COMMAND: "systemd"
 #0 [ffffc90040c4f800] __schedule at ffffffff8160ebbd
    kernel/sched/core.c: 2902
    ffffc90040c4f808: 0000000000000000 init_task
    ffffc90040c4f818: [ffff8801f4956040:task_struct] ffff8801f5a187c0
    ffffc90040c4f828: ffffc90040c4f880 __schedule+573
    ffffc90040c4f838: move_freepages+149 list_del+9
    ffffc90040c4f848: 0000000002cebf80 __rmqueue+136
    ffffc90040c4f858: [ffff8801f4956040:task_struct] 7fffffffffffffff
    ffffc90040c4f868: [ffff8801f129a850:kmalloc-192] 7fffffffffffffff
    ffffc90040c4f878: [ffff8801f0a9c040:request_queue] ffffc90040c4f918
    ffffc90040c4f888: schedule+50
 #1 [ffffc90040c4f888] schedule at ffffffff8160f082
    arch/x86/include/asm/bitops.h: 311
    ffffc90040c4f890: 0000000000000000 schedule_timeout+492
 #2 [ffffc90040c4f898] schedule_timeout at ffffffff8161255c
    kernel/time/timer.c: 1771
    ffffc90040c4f8a0: 0000000000000001 ffffea0004f18880
    ffffc90040c4f8b0: ffffea0004f18840 ffff8801ff01bb80
    ffffc90040c4f8c0: get_page_from_freelist+848 ffffc90040c4f8c8
    ffffc90040c4f8d0: ffffc90040c4f8c8 000059bd97399f43
    ffffc90040c4f8e0: 0000000000000000 78bd7181eeb35a00
    ffffc90040c4f8f0: 0000000000000000 7fffffffffffffff
    ffffc90040c4f900: [ffff8801f129a850:kmalloc-192] [ffff8801f16e4300:kmalloc-256]
    ffffc90040c4f910: [ffff8801f0a9c040:request_queue] ffff8801f5a187c0
    ffffc90040c4f920: io_schedule_timeout+180
 #3 [ffffc90040c4f920] io_schedule_timeout at ffffffff8160e904
    arch/x86/include/asm/current.h: 14
    ffffc90040c4f928: __sbitmap_queue_get+36 [ffff8801f120e988:kmalloc-512]
    ffffc90040c4f938: [ffff8801f1037000:kmalloc-1024] ffffc90040c4fa00
    ffffc90040c4f948: bt_get+297
 #4 [ffffc90040c4f948] bt_get at ffffffff813114f9
    block/blk-mq-tag.c: 143
    ffffc90040c4f950: list_del+9       0000000000000000
    ffffc90040c4f960: [ffff8801f4956040:task_struct] autoremove_wake_function
    ffffc90040c4f970: [ffff8801f120e990:kmalloc-512] [ffff8801f120e990:kmalloc-512]
    ffffc90040c4f980: 78bd7181eeb35a00 ffffc90040c4fa00
    ffffc90040c4f990: 0000000000000008 0000000000000000
    ffffc90040c4f9a0: ffffc90040c4fa00 blk_mq_get_tag+35
 #5 [ffffc90040c4f9a8] blk_mq_get_tag at ffffffff81311843
    block/blk-mq-tag.c: 165
    ffffc90040c4f9b0: 0000000000000001 __blk_mq_alloc_request+26
 #6 [ffffc90040c4f9b8] __blk_mq_alloc_request at ffffffff8130d1aa
    block/blk-mq.c: 197
    ffffc90040c4f9c0: 0000000000000001 ffffe8ffffc0a140
    ffffc90040c4f9d0: 0000000000000000 0000000000000008
    ffffc90040c4f9e0: [ffff8801f16e4300:kmalloc-256] blk_mq_map_request+205
 #7 [ffffc90040c4f9e8] blk_mq_map_request at ffffffff8130dffd
    block/blk-mq.c: 1209
    ffffc90040c4f9f0: [ffff8801f1037000:kmalloc-1024] ffffc90040c4fa90
    ffffc90040c4fa00: [ffff8801f0a9c040:request_queue] 0000000000000000
    ffffc90040c4fa10: ffffe8ffffc0a140 [ffff8801f1037000:kmalloc-1024]
    ffffc90040c4fa20: 78bd7181eeb35a00 0000000000000001
    ffffc90040c4fa30: 0000000000000000 00000000ffffffff
    ffffc90040c4fa40: [ffff8801f0a9c040:request_queue] 0000000000000001
    ffffc90040c4fa50: 0000000000000001 blk_sq_make_request+202
 #8 [ffffc90040c4fa58] blk_sq_make_request at ffffffff8131016a
    block/blk-mq.c: 1375
    ffffc90040c4fa60: [ffff8801f10379e0:kmalloc-1024] [ffff8801f0a9c040:request_queue]
    ffffc90040c4fa70: generic_make_request_checks+554 ffffc90040c4fad0
    ffffc90040c4fa80: [ffff8801f16e4300:kmalloc-256] 0000000088000102
    ffffc90040c4fa90: 78bd7181eeb35a00 [ffff8801f3fa6890:ext4_inode_cache]
    ffffc90040c4faa0: 78bd7181eeb35a00 [ffff8801f16e4300:kmalloc-256]
    ffffc90040c4fab0: ffffc90040c4fb30 00000000ffffffff
    ffffc90040c4fac0: [ffff8801f0a9c040:request_queue] 0000000000000001
    ffffc90040c4fad0: 0000000000000001 generic_make_request+289
 #9 [ffffc90040c4fad8] generic_make_request at ffffffff81304761
    block/blk-core.c: 2051
    ffffc90040c4fae0: 0000000000000000 0000000000000000
    ffffc90040c4faf0: 0000000000000000 0000000000000000
    ffffc90040c4fb00: __add_to_page_cache_locked+387 78bd7181eeb35a00
    ffffc90040c4fb10: 00000000028e2388 [ffff8801f16e4300:kmalloc-256]
    ffffc90040c4fb20: ffffea0004a376c0 0000000000000001
    ffffc90040c4fb30: 0000000000000020 submit_bio+118
#10 [ffffc90040c4fb38] submit_bio at ffffffff81304976
    block/blk-core.c: 2123
    ffffc90040c4fb40: ffffea0004a376c0 0000000000000000
    ffffc90040c4fb50: add_to_page_cache_lru+132 00028e2388000102
    ffffc90040c4fb60: 78bd7181eeb35a00 78bd7181eeb35a00
    ffffc90040c4fb70: 00000000000000f0 0000000000000013
    ffffc90040c4fb80: ext4_mpage_readpages+697
#11 [ffffc90040c4fb80] ext4_mpage_readpages at ffffffffc017b9d9 [ext4]
    ffffc90040c4fb88: 024200caffffffff [ffff8801f3fa6718:ext4_inode_cache]
    ffffc90040c4fb98: ffffea0006bd4400 ffffc90040c4fcd0
    ffffc90040c4fba8: [ffff8801f16e4300:kmalloc-256] 0000001500000015
    ffffc90040c4fbb8: [ffff8801f3fa6890:ext4_inode_cache] 000010000000000c
    ffffc90040c4fbc8: 0000000001652086 [ffff8801f4956040:task_struct]
    ffffc90040c4fbd8: [ffff8801f3eaaa40:bdev_cache] [ffff8801f4956040:task_struct]
    ffffc90040c4fbe8: [ffff8801f4956040:task_struct] [ffff8801f4956040:task_struct]
    ffffc90040c4fbf8: 0000000001652083 00000019000000e9
    ffffc90040c4fc08: 0000000000000020 0000000001652089
    ffffc90040c4fc18: preferred_node_policy 00000000024213ca
    ffffc90040c4fc28: 0000000000000000 0000000000000000
    ffffc90040c4fc38: [ffff8801f4956040:task_struct] [ffff8801f4956040:task_struct]
    ffffc90040c4fc48: alloc_pages_current+138 78bd7181eeb35a00
    ffffc90040c4fc58: 0000000000000020 ffffc90040c4fd50
    ffffc90040c4fc68: 0000000000000108 0000000000000020
    ffffc90040c4fc78: 0000000000000019 00000000000000e9
    ffffc90040c4fc88: __do_page_cache_readahead+405
#12 [ffffc90040c4fc88] __do_page_cache_readahead at ffffffff8118e745
    mm/readahead.c: 123
    ffffc90040c4fc90: [ffff8801f179d300:kmalloc-256] [ffff8801f3fa6890:ext4_inode_cache]
    ffffc90040c4fca0: 0000000000000018 024213ca81337a06
    ffffc90040c4fcb0: 0000000000000140 ffffea0006b34c80
    ffffc90040c4fcc0: [ffff8801f3fa6898:ext4_inode_cache] 0000000000000020
    ffffc90040c4fcd0: ffffea0006b34ca0 ffffea00069b9a60
    ffffc90040c4fce0: ffffc90040c4fce0 ffffc90040c4fce0
    ffffc90040c4fcf0: ffffc90040c4fcf0 ffffc90040c4fcf0
    ffffc90040c4fd00: ffffc90040c4fd00 ffffc90040c4fd00
    ffffc90040c4fd10: 0000000000000000 78bd7181eeb35a00
    ffffc90040c4fd20: pagecache_get_page+39 00000000000000f9
    ffffc90040c4fd30: [ffff8801f179d300:kmalloc-256] 0000000000000000
    ffffc90040c4fd40: ffffc90040c4fde8 [ffff8801f3fa6718:ext4_inode_cache]
    ffffc90040c4fd50: [ffff8801f179d3a0:kmalloc-256] filemap_fault+630
#13 [ffffc90040c4fd58] filemap_fault at ffffffff811818e6
    mm/filemap.c: 2133
    ffffc90040c4fd60: 0000000000000101 [ffff8801f3fa6890:ext4_inode_cache]
    ffffc90040c4fd70: ffffc90040c4fde8 [ffff8801f1794af0:vm_area_struct]
    ffffc90040c4fd80: [ffff8801f1794af0:vm_area_struct] [ffff8801f3fa66f0:ext4_inode_cache]
    ffffc90040c4fd90: [ffff8801f1794af0:vm_area_struct] ffffc90040c4fde8
    ffffc90040c4fda0: 0000564c1f3b7223 0000000000000000
    ffffc90040c4fdb0: 0000000000000000 ext4_filemap_fault+49
#14 [ffffc90040c4fdb8] ext4_filemap_fault at ffffffffc0138491 [ext4]
    ffffc90040c4fdc0: [ffff8801f1794af0:vm_area_struct] ffffc90040c4fe68
    ffffc90040c4fdd0: 0000000000000000 __do_fault+132
#15 [ffffc90040c4fdd8] __do_fault at ffffffff811b23a4
    mm/memory.c: 2828
    ffffc90040c4fde0: 0000000000000040 024200ca00000054
    ffffc90040c4fdf0: 00000000000000f9 0000564c1f3b7000
    ffffc90040c4fe00: 0000000000000000 0000000000000000
    ffffc90040c4fe10: ffff880000000d88 78bd7181eeb35a00
    ffffc90040c4fe20: [ffff8801f1794af0:vm_area_struct] 0000000000000040
    ffffc90040c4fe30: 00000000000000f9 handle_mm_fault+3806
#16 [ffffc90040c4fe38] handle_mm_fault at ffffffff811b783e
    mm/memory.c: 3155
    ffffc90040c4fe40: ffffc90040c4fe70 0000000000000000
    ffffc90040c4fe50: [ffff8801f4956040:task_struct] 00007ffca20e0e00
    ffffc90040c4fe60: ep_poll+318      [ffff8801f3d09db0:inode_cache]
    ffffc90040c4fe70: [ffff8801f1794af0:vm_area_struct] 0000564c1f3b7223
    ffffc90040c4fe80: 0000000000000054 ffff8801f17627c8
    ffffc90040c4fe90: 0000000000000000 ffffea0007cd03b0
    ffffc90040c4fea0: 0000000000000000 78bd7181eeb35a00
    ffffc90040c4feb0: 0000000000000004 0000564c1f3b7223
    ffffc90040c4fec0: [ffff8801f1797c00:mm_struct] ffffc90040c4ff58
    ffffc90040c4fed0: 0000000000000054 [ffff8801f4956040:task_struct]
    ffffc90040c4fee0: __do_page_fault+618
#17 [ffffc90040c4fee0] __do_page_fault at ffffffff8105feda
    arch/x86/mm/fault.c: 1398
    ffffc90040c4fee8: [ffff8801f1797c68:mm_struct] 00000000f1275c00
    ffffc90040c4fef8: [ffff8801f1794af0:vm_area_struct] 0000000000000000
    ffffc90040c4ff08: 0000000000000000 sys_read+82
    ffffc90040c4ff18: 78bd7181eeb35a00 0000000000000000
    ffffc90040c4ff28: 00007ffca20e0ab0 0000564c1f39dc13
    ffffc90040c4ff38: 0000000000000000 0000000000000000
    ffffc90040c4ff48: 00007ffca20e0c38 page_fault+40
#18 [ffffc90040c4ff50] page_fault at ffffffff81614d58
    arch/x86/entry/entry_64.S: 1026
    RIP: 00007fdaa3eb9dcc  RSP: 00007ffca20e04e0  RFLAGS: 00010286
    RAX: 0000000000000000  RBX: 00007ffca20e0ac0  RCX: ffffffffffffffff
    RDX: 0000000000000030  RSI: 0000564c1f39dc13  RDI: 0000564c1f3b7223
    RBP: 00007ffca20e0ab0   R8: 0000564c1f3b7223   R9: 00007fdaa3eba99a
    R10: 0000564c21226180  R11: 0000000000000001  R12: 0000564c1f39dc13
    R13: 0000000000000000  R14: 0000000000000000  R15: 00007ffca20e0c38
    ORIG_RAX: ffffffffffffffff  CS: e033  SS: e02b

[-- Attachment #3: blk_sq_make_request.s --]
[-- Type: text/plain, Size: 26172 bytes --]

/*
 * Single hardware queue variant. This will attempt to use any per-process
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
<blk_sq_make_request> callq  0000000000002fe5 <blk_sq_make_request+0x5>
<blk_sq_make_request+0x5> push   %r15
<blk_sq_make_request+0x7> push   %r14
<blk_sq_make_request+0x9> push   %r13
<blk_sq_make_request+0xb> push   %r12
<blk_sq_make_request+0xd> mov    %rdi,%r13
<blk_sq_make_request+0x10> push   %rbp
<blk_sq_make_request+0x11> push   %rbx
<blk_sq_make_request+0x12> mov    $0x1,%ebx
<blk_sq_make_request+0x17> sub    $0x48,%rsp
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
<blk_sq_make_request+0x1b> mov    0x14(%rsi),%eax
/*
 * Single hardware queue variant. This will attempt to use any per-process
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
<blk_sq_make_request+0x1e> mov    %gs:0x28,%rcx
<blk_sq_make_request+0x27> mov    %rcx,0x40(%rsp)
<blk_sq_make_request+0x2c> xor    %ecx,%ecx
<blk_sq_make_request+0x2e> mov    %rsi,0x20(%rsp)
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
<blk_sq_make_request+0x33> mov    %eax,%edx
<blk_sq_make_request+0x35> shr    $0x1d,%edx
<blk_sq_make_request+0x38> test   %edx,%edx
<blk_sq_make_request+0x3a> je     0000000000003024 <blk_sq_make_request+0x44>
<blk_sq_make_request+0x3c> mov    %eax,%ebx
<blk_sq_make_request+0x3e> shr    $0x3,%ebx
<blk_sq_make_request+0x41> and    $0x1,%ebx
	unsigned int request_count = 0;
	struct blk_map_ctx data;
	struct request *rq;
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);
<blk_sq_make_request+0x44> lea    0x20(%rsp),%rsi
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
	const int is_flush_fua = bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
<blk_sq_make_request+0x49> and    $0x300,%eax
	unsigned int request_count = 0;
	struct blk_map_ctx data;
	struct request *rq;
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);
<blk_sq_make_request+0x4e> mov    %r13,%rdi
 * plug for merging and IO deferral.
 */
static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
{
	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
	const int is_flush_fua = bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
<blk_sq_make_request+0x51> mov    %eax,%ebp
	struct blk_plug *plug;
	unsigned int request_count = 0;
<blk_sq_make_request+0x53> movl   $0x0,0x2c(%rsp)
	struct blk_map_ctx data;
	struct request *rq;
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);
<blk_sq_make_request+0x5b> callq  0000000000003040 <blk_sq_make_request+0x60>

	if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
<blk_sq_make_request+0x60> mov    0x20(%rsp),%rdi
<blk_sq_make_request+0x65> callq  000000000000304a <blk_sq_make_request+0x6a>
<blk_sq_make_request+0x6a> test   %al,%al
<blk_sq_make_request+0x6c> je     0000000000003060 <blk_sq_make_request+0x80>
<blk_sq_make_request+0x6e> mov    0x20(%rsp),%rdi
<blk_sq_make_request+0x73> callq  0000000000003058 <blk_sq_make_request+0x78>
<blk_sq_make_request+0x78> test   %eax,%eax
<blk_sq_make_request+0x7a> jne    0000000000003133 <blk_sq_make_request+0x153>
		bio_io_error(bio);
		return BLK_QC_T_NONE;
	}

	blk_queue_split(q, &bio, q->bio_split);
<blk_sq_make_request+0x80> mov    0x8b0(%r13),%rdx
<blk_sq_make_request+0x87> lea    0x20(%rsp),%rsi
<blk_sq_make_request+0x8c> mov    %r13,%rdi
<blk_sq_make_request+0x8f> callq  0000000000003074 <blk_sq_make_request+0x94>

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
<blk_sq_make_request+0x94> test   %ebp,%ebp
<blk_sq_make_request+0x96> jne    0000000000003170 <blk_sq_make_request+0x190>
}

static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
{
	return ((1UL << (nr & (BITS_PER_LONG-1))) &
		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
<blk_sq_make_request+0x9c> mov    0x510(%r13),%rax
<blk_sq_make_request+0xa3> test   $0x1,%ah
<blk_sq_make_request+0xa6> je     00000000000031f4 <blk_sq_make_request+0x214>
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);
<blk_sq_make_request+0xac> mov    %r13,%rdi
<blk_sq_make_request+0xaf> callq  0000000000003094 <blk_sq_make_request+0xb4>
<blk_sq_make_request+0xb4> mov    %eax,0x2c(%rsp)

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0xb8> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0xbd> lea    0x30(%rsp),%rdx
<blk_sq_make_request+0xc2> mov    %r13,%rdi
<blk_sq_make_request+0xc5> callq  0000000000000e70 <blk_mq_map_request>
	if (unlikely(!rq))
<blk_sq_make_request+0xca> test   %rax,%rax
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0xcd> mov    %rax,%rbp
	if (unlikely(!rq))
<blk_sq_make_request+0xd0> je     0000000000003210 <blk_sq_make_request+0x230>
		return BLK_QC_T_NONE;

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);
<blk_sq_make_request+0xd6> mov    0x30(%rsp),%rax
<blk_sq_make_request+0xdb> mov    %rax,(%rsp)
	return cookie != BLK_QC_T_NONE;
}

static inline blk_qc_t blk_tag_to_qc_t(unsigned int tag, unsigned int queue_num)
{
	return tag | (queue_num << BLK_QC_T_SHIFT);
<blk_sq_make_request+0xdf> mov    0x144(%rax),%eax
<blk_sq_make_request+0xe5> mov    %eax,0x8(%rsp)
<blk_sq_make_request+0xe9> mov    %eax,%r12d

DECLARE_PER_CPU(struct task_struct *, current_task);

static __always_inline struct task_struct *get_current(void)
{
	return this_cpu_read_stable(current_task);
<blk_sq_make_request+0xec> mov    %gs:0x0,%rax
	/*
	 * A task plug currently exists. Since this is completely lockless,
	 * utilize that to temporarily store requests until the task is
	 * either done or scheduled away.
	 */
	plug = current->plug;
<blk_sq_make_request+0xf5> mov    0x750(%rax),%r14
<blk_sq_make_request+0xfc> shl    $0x10,%r12d
<blk_sq_make_request+0x100> or     0x100(%rbp),%r12d
	if (plug) {
<blk_sq_make_request+0x107> test   %r14,%r14
<blk_sq_make_request+0x10a> je     000000000000327e <blk_sq_make_request+0x29e>
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x110> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x115> mov    %rbp,%rdi
<blk_sq_make_request+0x118> callq  00000000000030fd <blk_sq_make_request+0x11d>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x11d> mov    $0x1,%esi
<blk_sq_make_request+0x122> mov    %rbp,%rdi
<blk_sq_make_request+0x125> callq  000000000000310a <blk_sq_make_request+0x12a>
	 * either done or scheduled away.
	 */
	plug = current->plug;
	if (plug) {
		blk_mq_bio_to_request(rq, bio);
		if (!request_count)
<blk_sq_make_request+0x12a> mov    0x2c(%rsp),%eax
<blk_sq_make_request+0x12e> test   %eax,%eax
<blk_sq_make_request+0x130> je     000000000000322f <blk_sq_make_request+0x24f>
			trace_block_plug(q);

		blk_mq_put_ctx(data.ctx);

		if (request_count >= BLK_MAX_REQUEST_COUNT) {
<blk_sq_make_request+0x136> cmpl   $0xf,0x2c(%rsp)
<blk_sq_make_request+0x13b> ja     000000000000321b <blk_sq_make_request+0x23b>
<blk_sq_make_request+0x141> mov    0x18(%r14),%rsi
			blk_flush_plug_list(plug, false);
			trace_block_plug(q);
		}

		list_add_tail(&rq->queuelist, &plug->mq_list);
<blk_sq_make_request+0x145> lea    0x10(%r14),%rdx
<blk_sq_make_request+0x149> mov    %rbp,%rdi
<blk_sq_make_request+0x14c> callq  0000000000003131 <blk_sq_make_request+0x151>
		return cookie;
<blk_sq_make_request+0x151> jmp    000000000000314a <blk_sq_make_request+0x16a>
	blk_qc_t cookie;

	blk_queue_bounce(q, &bio);

	if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
		bio_io_error(bio);
<blk_sq_make_request+0x153> mov    0x20(%rsp),%rdi
		return BLK_QC_T_NONE;
<blk_sq_make_request+0x158> mov    $0xffffffff,%r12d

extern void bio_endio(struct bio *);

static inline void bio_io_error(struct bio *bio)
{
	bio->bi_error = -EIO;
<blk_sq_make_request+0x15e> movl   $0xfffffffb,0x10(%rdi)
	bio_endio(bio);
<blk_sq_make_request+0x165> callq  000000000000314a <blk_sq_make_request+0x16a>
		blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
	}

	blk_mq_put_ctx(data.ctx);
	return cookie;
}
<blk_sq_make_request+0x16a> mov    0x40(%rsp),%rcx
<blk_sq_make_request+0x16f> xor    %gs:0x28,%rcx
<blk_sq_make_request+0x178> mov    %r12d,%eax
<blk_sq_make_request+0x17b> jne    000000000000348e <blk_sq_make_request+0x4ae>
<blk_sq_make_request+0x181> add    $0x48,%rsp
<blk_sq_make_request+0x185> pop    %rbx
<blk_sq_make_request+0x186> pop    %rbp
<blk_sq_make_request+0x187> pop    %r12
<blk_sq_make_request+0x189> pop    %r13
<blk_sq_make_request+0x18b> pop    %r14
<blk_sq_make_request+0x18d> pop    %r15
<blk_sq_make_request+0x18f> retq   

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);
<blk_sq_make_request+0x190> mov    %r13,%rdi
<blk_sq_make_request+0x193> callq  0000000000003178 <blk_sq_make_request+0x198>

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0x198> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x19d> lea    0x30(%rsp),%rdx
<blk_sq_make_request+0x1a2> mov    %r13,%rdi

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
			return BLK_QC_T_NONE;
	} else
		request_count = blk_plug_queued_count(q);
<blk_sq_make_request+0x1a5> mov    %eax,0x2c(%rsp)

	rq = blk_mq_map_request(q, bio, &data);
<blk_sq_make_request+0x1a9> callq  0000000000000e70 <blk_mq_map_request>
	if (unlikely(!rq))
<blk_sq_make_request+0x1ae> test   %rax,%rax
<blk_sq_make_request+0x1b1> je     0000000000003210 <blk_sq_make_request+0x230>
		return BLK_QC_T_NONE;

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);
<blk_sq_make_request+0x1b3> mov    0x30(%rsp),%rdx
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x1b8> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x1bd> mov    %rax,%rdi
<blk_sq_make_request+0x1c0> mov    %rax,(%rsp)
<blk_sq_make_request+0x1c4> mov    0x144(%rdx),%r12d
<blk_sq_make_request+0x1cb> shl    $0x10,%r12d
<blk_sq_make_request+0x1cf> or     0x100(%rax),%r12d
<blk_sq_make_request+0x1d6> callq  00000000000031bb <blk_sq_make_request+0x1db>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x1db> mov    (%rsp),%rax
<blk_sq_make_request+0x1df> mov    $0x1,%esi
<blk_sq_make_request+0x1e4> mov    %rax,%rdi
<blk_sq_make_request+0x1e7> callq  00000000000031cc <blk_sq_make_request+0x1ec>

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);

	if (unlikely(is_flush_fua)) {
		blk_mq_bio_to_request(rq, bio);
		blk_insert_flush(rq);
<blk_sq_make_request+0x1ec> mov    (%rsp),%rax
<blk_sq_make_request+0x1f0> mov    %rax,%rdi
<blk_sq_make_request+0x1f3> callq  00000000000031d8 <blk_sq_make_request+0x1f8>
	if (unlikely(!rq))
		return BLK_QC_T_NONE;

	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);

	if (unlikely(is_flush_fua)) {
<blk_sq_make_request+0x1f8> mov    $0x1,%esi
		 * an ASYNC request, just ensure that we run it later on. The
		 * latter allows for merging opportunities and more efficient
		 * dispatching.
		 */
run_queue:
		blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
<blk_sq_make_request+0x1fd> xor    $0x1,%ebx
<blk_sq_make_request+0x200> mov    0x30(%rsp),%rdi
<blk_sq_make_request+0x205> or     %esi,%ebx
<blk_sq_make_request+0x207> movzbl %bl,%esi
<blk_sq_make_request+0x20a> callq  00000000000031ef <blk_sq_make_request+0x20f>
	}

	blk_mq_put_ctx(data.ctx);
	return cookie;
<blk_sq_make_request+0x20f> jmpq   000000000000314a <blk_sq_make_request+0x16a>
	}

	blk_queue_split(q, &bio, q->bio_split);

	if (!is_flush_fua && !blk_queue_nomerges(q)) {
		if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
<blk_sq_make_request+0x214> mov    0x20(%rsp),%rsi
<blk_sq_make_request+0x219> lea    0x2c(%rsp),%rdx
<blk_sq_make_request+0x21e> xor    %ecx,%ecx
<blk_sq_make_request+0x220> mov    %r13,%rdi
<blk_sq_make_request+0x223> callq  0000000000003208 <blk_sq_make_request+0x228>
<blk_sq_make_request+0x228> test   %al,%al
<blk_sq_make_request+0x22a> je     0000000000003098 <blk_sq_make_request+0xb8>
			return BLK_QC_T_NONE;
<blk_sq_make_request+0x230> mov    $0xffffffff,%r12d
<blk_sq_make_request+0x236> jmpq   000000000000314a <blk_sq_make_request+0x16a>
			trace_block_plug(q);

		blk_mq_put_ctx(data.ctx);

		if (request_count >= BLK_MAX_REQUEST_COUNT) {
			blk_flush_plug_list(plug, false);
<blk_sq_make_request+0x23b> xor    %esi,%esi
<blk_sq_make_request+0x23d> mov    %r14,%rdi
<blk_sq_make_request+0x240> callq  0000000000003225 <blk_sq_make_request+0x245>
<blk_sq_make_request+0x245> nopl   0x0(%rax,%rax,1)
#include <linux/stringify.h>
#include <linux/types.h>

static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
{
	asm_volatile_goto("1:"
<blk_sq_make_request+0x24a> jmpq   0000000000003121 <blk_sq_make_request+0x141>
<blk_sq_make_request+0x24f> nopl   0x0(%rax,%rax,1)
<blk_sq_make_request+0x254> jmpq   0000000000003116 <blk_sq_make_request+0x136>
 *
 * Plug the request queue @q.  Do not allow block operation requests
 * to be sent to the device driver. Instead, accumulate requests in
 * the queue to improve throughput performance of the block device.
 */
TRACE_EVENT(block_plug,
<blk_sq_make_request+0x259> mov    %gs:0x0(%rip),%eax        # 0000000000003240 <blk_sq_make_request+0x260>
<blk_sq_make_request+0x260> mov    %eax,%eax

static __always_inline bool variable_test_bit(long nr, volatile const unsigned long *addr)
{
	bool oldbit;

	asm volatile("bt %2,%1\n\t"
<blk_sq_make_request+0x262> bt     %rax,0x0(%rip)        # 000000000000324a <blk_sq_make_request+0x26a>
<blk_sq_make_request+0x26a> setb   %al
<blk_sq_make_request+0x26d> test   %al,%al
<blk_sq_make_request+0x26f> je     0000000000003121 <blk_sq_make_request+0x141>
<blk_sq_make_request+0x275> mov    0x0(%rip),%rbx        # 000000000000325c <blk_sq_make_request+0x27c>
<blk_sq_make_request+0x27c> test   %rbx,%rbx
<blk_sq_make_request+0x27f> je     0000000000003279 <blk_sq_make_request+0x299>
<blk_sq_make_request+0x281> mov    (%rbx),%rax
<blk_sq_make_request+0x284> mov    0x8(%rbx),%rdi
<blk_sq_make_request+0x288> add    $0x18,%rbx
<blk_sq_make_request+0x28c> mov    %r13,%rsi
<blk_sq_make_request+0x28f> callq  *%rax
<blk_sq_make_request+0x291> mov    (%rbx),%rax
<blk_sq_make_request+0x294> test   %rax,%rax
<blk_sq_make_request+0x297> jne    0000000000003264 <blk_sq_make_request+0x284>
<blk_sq_make_request+0x299> jmpq   0000000000003121 <blk_sq_make_request+0x141>

		list_add_tail(&rq->queuelist, &plug->mq_list);
		return cookie;
	}

	if (!blk_mq_merge_queue_io(data.hctx, data.ctx, rq, bio)) {
<blk_sq_make_request+0x29e> mov    0x38(%rsp),%rax
<blk_sq_make_request+0x2a3> mov    0x20(%rsp),%r13
<blk_sq_make_request+0x2a8> mov    %rax,0x8(%rsp)
	blk_account_io_start(rq, 1);
}

static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
{
	return (hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
<blk_sq_make_request+0x2ad> mov    (%rsp),%rax
<blk_sq_make_request+0x2b1> testb  $0x1,0xa8(%rax)
<blk_sq_make_request+0x2b8> je     0000000000003326 <blk_sq_make_request+0x346>
		!blk_queue_nomerges(hctx->queue);
<blk_sq_make_request+0x2be> mov    0xb0(%rax),%rax
<blk_sq_make_request+0x2c5> mov    %rax,0x18(%rsp)
}

static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
{
	return ((1UL << (nr & (BITS_PER_LONG-1))) &
		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
<blk_sq_make_request+0x2ca> mov    0x510(%rax),%rax
	blk_account_io_start(rq, 1);
}

static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
{
	return (hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
<blk_sq_make_request+0x2d1> test   $0x1,%ah
<blk_sq_make_request+0x2d4> jne    0000000000003326 <blk_sq_make_request+0x346>
	return true;
}

static inline bool bio_mergeable(struct bio *bio)
{
	if (bio->bi_opf & REQ_NOMERGE_FLAGS)
<blk_sq_make_request+0x2d6> testl  $0x100e300,0x14(%r13)
<blk_sq_make_request+0x2de> jne    0000000000003326 <blk_sq_make_request+0x346>
<blk_sq_make_request+0x2e0> mov    0x8(%rsp),%r15
<blk_sq_make_request+0x2e5> mov    %r15,%rdi
<blk_sq_make_request+0x2e8> callq  00000000000032cd <blk_sq_make_request+0x2ed>
				 struct blk_mq_ctx *ctx, struct bio *bio)
{
	struct request *rq;
	int checked = 8;

	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
<blk_sq_make_request+0x2ed> mov    0x10(%r15),%r14
<blk_sq_make_request+0x2f1> mov    %r15,%rax
<blk_sq_make_request+0x2f4> mov    $0x8,%r15d
<blk_sq_make_request+0x2fa> add    $0x8,%rax
<blk_sq_make_request+0x2fe> mov    %rax,0x10(%rsp)
<blk_sq_make_request+0x303> cmp    %rax,%r14
<blk_sq_make_request+0x306> je     000000000000330c <blk_sq_make_request+0x32c>
		int el_ret;

		if (!checked--)
			break;

		if (!blk_rq_merge_ok(rq, bio))
<blk_sq_make_request+0x308> mov    %r13,%rsi
<blk_sq_make_request+0x30b> mov    %r14,%rdi
<blk_sq_make_request+0x30e> callq  00000000000032f3 <blk_sq_make_request+0x313>
<blk_sq_make_request+0x313> test   %al,%al
<blk_sq_make_request+0x315> jne    0000000000003419 <blk_sq_make_request+0x439>
				 struct blk_mq_ctx *ctx, struct bio *bio)
{
	struct request *rq;
	int checked = 8;

	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
<blk_sq_make_request+0x31b> mov    0x8(%r14),%r14
<blk_sq_make_request+0x31f> cmp    0x10(%rsp),%r14
<blk_sq_make_request+0x324> je     000000000000330c <blk_sq_make_request+0x32c>
		int el_ret;

		if (!checked--)
<blk_sq_make_request+0x326> sub    $0x1,%r15d
<blk_sq_make_request+0x32a> jne    00000000000032e8 <blk_sq_make_request+0x308>
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x32c> mov    %r13,%rsi
<blk_sq_make_request+0x32f> mov    %rbp,%rdi
<blk_sq_make_request+0x332> callq  0000000000003317 <blk_sq_make_request+0x337>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x337> mov    $0x1,%esi
<blk_sq_make_request+0x33c> mov    %rbp,%rdi
<blk_sq_make_request+0x33f> callq  0000000000003324 <blk_sq_make_request+0x344>
<blk_sq_make_request+0x344> jmp    0000000000003348 <blk_sq_make_request+0x368>
	}
}

static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
	init_request_from_bio(rq, bio);
<blk_sq_make_request+0x346> mov    %r13,%rsi
<blk_sq_make_request+0x349> mov    %rbp,%rdi
<blk_sq_make_request+0x34c> callq  0000000000003331 <blk_sq_make_request+0x351>

	blk_account_io_start(rq, 1);
<blk_sq_make_request+0x351> mov    %rbp,%rdi
<blk_sq_make_request+0x354> mov    $0x1,%esi
<blk_sq_make_request+0x359> callq  000000000000333e <blk_sq_make_request+0x35e>
<blk_sq_make_request+0x35e> mov    0x8(%rsp),%rdi
<blk_sq_make_request+0x363> callq  0000000000003348 <blk_sq_make_request+0x368>
					    struct request *rq,
					    bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;

	trace_block_rq_insert(hctx->queue, rq);
<blk_sq_make_request+0x368> mov    (%rsp),%rax
}

static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
				    struct request *rq, bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;
<blk_sq_make_request+0x36c> mov    0x38(%rbp),%r13
					    struct request *rq,
					    bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;

	trace_block_rq_insert(hctx->queue, rq);
<blk_sq_make_request+0x370> mov    0xb0(%rax),%r14
<blk_sq_make_request+0x377> nopl   0x0(%rax,%rax,1)
<blk_sq_make_request+0x37c> mov    0x10(%r13),%rsi

	if (at_head)
		list_add(&rq->queuelist, &ctx->rq_list);
	else
		list_add_tail(&rq->queuelist, &ctx->rq_list);
<blk_sq_make_request+0x380> lea    0x8(%r13),%rdx
<blk_sq_make_request+0x384> mov    %rbp,%rdi
<blk_sq_make_request+0x387> callq  000000000000336c <blk_sq_make_request+0x38c>
				    struct request *rq, bool at_head)
{
	struct blk_mq_ctx *ctx = rq->mq_ctx;

	__blk_mq_insert_req_list(hctx, rq, at_head);
	blk_mq_hctx_mark_pending(hctx, ctx);
<blk_sq_make_request+0x38c> mov    (%rsp),%rdi
<blk_sq_make_request+0x390> lea    0x44(%r13),%rsi
<blk_sq_make_request+0x394> callq  0000000000000ab0 <blk_mq_hctx_mark_pending.isra.25>
<blk_sq_make_request+0x399> mov    0x8(%rsp),%rdi
<blk_sq_make_request+0x39e> callq  *0x0

static inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
	spin_release(&lock->dep_map, 1, _RET_IP_);
	do_raw_spin_unlock(lock);
	preempt_enable();
<blk_sq_make_request+0x3a5> xor    %esi,%esi
<blk_sq_make_request+0x3a7> jmpq   00000000000031dd <blk_sq_make_request+0x1fd>
<blk_sq_make_request+0x3ac> mov    %gs:0x0(%rip),%eax        # 0000000000003393 <blk_sq_make_request+0x3b3>
<blk_sq_make_request+0x3b3> mov    %eax,%eax

static __always_inline bool variable_test_bit(long nr, volatile const unsigned long *addr)
{
	bool oldbit;

	asm volatile("bt %2,%1\n\t"
<blk_sq_make_request+0x3b5> bt     %rax,0x0(%rip)        # 000000000000339d <blk_sq_make_request+0x3bd>
<blk_sq_make_request+0x3bd> setb   %al
<blk_sq_make_request+0x3c0> test   %al,%al
<blk_sq_make_request+0x3c2> je     0000000000003116 <blk_sq_make_request+0x136>
<blk_sq_make_request+0x3c8> mov    0x0(%rip),%rbx        # 00000000000033af <blk_sq_make_request+0x3cf>
<blk_sq_make_request+0x3cf> test   %rbx,%rbx
<blk_sq_make_request+0x3d2> je     00000000000033cc <blk_sq_make_request+0x3ec>
<blk_sq_make_request+0x3d4> mov    (%rbx),%rax
<blk_sq_make_request+0x3d7> mov    0x8(%rbx),%rdi
<blk_sq_make_request+0x3db> add    $0x18,%rbx
<blk_sq_make_request+0x3df> mov    %r13,%rsi
<blk_sq_make_request+0x3e2> callq  *%rax
<blk_sq_make_request+0x3e4> mov    (%rbx),%rax
<blk_sq_make_request+0x3e7> test   %rax,%rax
<blk_sq_make_request+0x3ea> jne    00000000000033b7 <blk_sq_make_request+0x3d7>
<blk_sq_make_request+0x3ec> jmpq   0000000000003116 <blk_sq_make_request+0x136>
 * Called immediately before block operation request @rq is inserted
 * into queue @q.  The fields in the operation request @rq struct can
 * be examined to determine which device and sectors the pending
 * operation would access.
 */
DEFINE_EVENT(block_rq, block_rq_insert,
<blk_sq_make_request+0x3f1> mov    %gs:0x0(%rip),%eax        # 00000000000033d8 <blk_sq_make_request+0x3f8>
<blk_sq_make_request+0x3f8> mov    %eax,%eax
<blk_sq_make_request+0x3fa> bt     %rax,0x0(%rip)        # 00000000000033e2 <blk_sq_make_request+0x402>
<blk_sq_make_request+0x402> setb   %al
<blk_sq_make_request+0x405> test   %al,%al
<blk_sq_make_request+0x407> je     000000000000335c <blk_sq_make_request+0x37c>
<blk_sq_make_request+0x40d> mov    0x0(%rip),%r15        # 00000000000033f4 <blk_sq_make_request+0x414>
<blk_sq_make_request+0x414> test   %r15,%r15
<blk_sq_make_request+0x417> je     0000000000003414 <blk_sq_make_request+0x434>
<blk_sq_make_request+0x419> mov    (%r15),%rcx
<blk_sq_make_request+0x41c> mov    0x8(%r15),%rdi
<blk_sq_make_request+0x420> add    $0x18,%r15
<blk_sq_make_request+0x424> mov    %rbp,%rdx
<blk_sq_make_request+0x427> mov    %r14,%rsi
<blk_sq_make_request+0x42a> callq  *%rcx
<blk_sq_make_request+0x42c> mov    (%r15),%rcx
<blk_sq_make_request+0x42f> test   %rcx,%rcx
<blk_sq_make_request+0x432> jne    00000000000033fc <blk_sq_make_request+0x41c>
<blk_sq_make_request+0x434> jmpq   000000000000335c <blk_sq_make_request+0x37c>
			break;

		if (!blk_rq_merge_ok(rq, bio))
			continue;

		el_ret = blk_try_merge(rq, bio);
<blk_sq_make_request+0x439> mov    %r13,%rsi
<blk_sq_make_request+0x43c> mov    %r14,%rdi
<blk_sq_make_request+0x43f> callq  0000000000003424 <blk_sq_make_request+0x444>
		if (el_ret == ELEVATOR_BACK_MERGE) {
<blk_sq_make_request+0x444> cmp    $0x2,%eax
<blk_sq_make_request+0x447> je     0000000000003474 <blk_sq_make_request+0x494>
			if (bio_attempt_back_merge(q, rq, bio)) {
				ctx->rq_merged++;
				return true;
			}
			break;
		} else if (el_ret == ELEVATOR_FRONT_MERGE) {
<blk_sq_make_request+0x449> cmp    $0x1,%eax
<blk_sq_make_request+0x44c> jne    00000000000032fb <blk_sq_make_request+0x31b>
			if (bio_attempt_front_merge(q, rq, bio)) {
<blk_sq_make_request+0x452> mov    0x18(%rsp),%rdi
<blk_sq_make_request+0x457> mov    %r13,%rdx
<blk_sq_make_request+0x45a> mov    %r14,%rsi
<blk_sq_make_request+0x45d> callq  0000000000003442 <blk_sq_make_request+0x462>
<blk_sq_make_request+0x462> test   %al,%al
<blk_sq_make_request+0x464> je     000000000000330c <blk_sq_make_request+0x32c>
				ctx->rq_merged++;
<blk_sq_make_request+0x46a> mov    0x8(%rsp),%rax
<blk_sq_make_request+0x46f> mov    0x8(%rsp),%rdi
<blk_sq_make_request+0x474> addq   $0x1,0x58(%rax)
<blk_sq_make_request+0x479> callq  *0x0
			blk_mq_bio_to_request(rq, bio);
			goto insert_rq;
		}

		spin_unlock(&ctx->lock);
		__blk_mq_free_request(hctx, ctx, rq);
<blk_sq_make_request+0x480> mov    %rdi,%rsi
<blk_sq_make_request+0x483> mov    (%rsp),%rdi
<blk_sq_make_request+0x487> mov    %rbp,%rdx
<blk_sq_make_request+0x48a> callq  00000000000002f0 <__blk_mq_free_request>
	return __blk_mq_get_ctx(q, get_cpu());
}

static inline void blk_mq_put_ctx(struct blk_mq_ctx *ctx)
{
	put_cpu();
<blk_sq_make_request+0x48f> jmpq   000000000000314a <blk_sq_make_request+0x16a>
		if (!blk_rq_merge_ok(rq, bio))
			continue;

		el_ret = blk_try_merge(rq, bio);
		if (el_ret == ELEVATOR_BACK_MERGE) {
			if (bio_attempt_back_merge(q, rq, bio)) {
<blk_sq_make_request+0x494> mov    0x18(%rsp),%rdi
<blk_sq_make_request+0x499> mov    %r13,%rdx
<blk_sq_make_request+0x49c> mov    %r14,%rsi
<blk_sq_make_request+0x49f> callq  0000000000003484 <blk_sq_make_request+0x4a4>
<blk_sq_make_request+0x4a4> test   %al,%al
<blk_sq_make_request+0x4a6> je     000000000000330c <blk_sq_make_request+0x32c>
<blk_sq_make_request+0x4ac> jmp    000000000000344a <blk_sq_make_request+0x46a>
		blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
	}

	blk_mq_put_ctx(data.ctx);
	return cookie;
}
<blk_sq_make_request+0x4ae> callq  0000000000003493 <blk_sq_make_request+0x4b3>
<blk_sq_make_request+0x4b3> data16 data16 data16 nopw %cs:0x0(%rax,%rax,1)
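For readers following the interleaved dump: the loop it keeps returning to is 4.9's blk_mq_merge_queue_io(), which walks the per-software-queue ctx->rq_list backwards, gives up after eight entries ("int checked = 8"), and tries a back or front merge before falling through to a plain insert. A minimal standalone sketch of that bounded back-merge scan, with illustrative names (fake_rq, try_merge_recent are not the kernel's API):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a request on ctx->rq_list; prev plays the role of the
 * reverse queuelist traversal (list_for_each_entry_reverse). */
struct fake_rq {
	struct fake_rq *prev;
	long start, len;	/* sector range this request covers */
};

/* Try to back-merge the bio [start, start+len) into one of the most
 * recent requests.  Returns the merged-into request, or NULL if the
 * caller should insert a new request instead. */
static struct fake_rq *try_merge_recent(struct fake_rq *tail,
					long start, long len)
{
	int checked = 8;	/* same bound as the kernel loop */
	struct fake_rq *rq;

	for (rq = tail; rq; rq = rq->prev) {
		if (!checked--)
			break;	/* cap the scan, as in the kernel */
		if (rq->start + rq->len == start) {
			rq->len += len;	/* back merge: bio starts where rq ends */
			return rq;
		}
	}
	return NULL;
}
```

The eight-entry cap is why the scan stays cheap even when the list is long; the real function additionally checks blk_rq_merge_ok() and handles ELEVATOR_FRONT_MERGE, which this sketch omits.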


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Thread overview: 8+ messages
2017-10-04 18:26 4.9.52: INFO: task XXX blocked for more than 300 seconds Philipp Hahn
2017-10-05 10:12 ` Jan Beulich
2017-10-05 12:04   ` Philipp Hahn
2017-10-05 10:38 ` Roger Pau Monné
2017-10-05 13:20 ` Konrad Rzeszutek Wilk
2017-10-05 17:59   ` Ankur Arora
2017-10-19 14:34     ` 4.9.52: INFO: task X " Philipp Hahn
2017-10-19 14:34     ` [Xen-devel] " Philipp Hahn
