* [bug report] blktests srp/002 hang
@ 2023-08-21  6:46 Shinichiro Kawasaki
  2023-08-22  1:46 ` Bob Pearson
  2023-09-22 11:06 ` Linux regression tracking #adding (Thorsten Leemhuis)
  0 siblings, 2 replies; 87+ messages in thread
From: Shinichiro Kawasaki @ 2023-08-21  6:46 UTC (permalink / raw)
  To: linux-rdma, linux-scsi; +Cc: Bob Pearson

I have occasionally observed a process hang in the blktests test case srp/002 with
kernel v6.5-rcX. The kernel reported that many kworkers stalled [1]. PID 2757 hung
in inode_sleep_on_writeback(); the other kworkers hung in __inode_wait_for_writeback().

The hang can be reproduced reliably by repeating the test case srp/002 (between
15 and 30 times).
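For reference, the repeat loop I use can be sketched as below (a rough helper, not
part of blktests itself; BLKTESTS_DIR is an assumption about the local checkout
location):

```shell
# repeat_srp002 N: run the blktests case srp/002 up to N times, stopping at
# the first failure. BLKTESTS_DIR points at the local blktests checkout.
repeat_srp002() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        echo "srp/002 iteration $i of $n"
        # run the case from the blktests directory; bail out on failure
        (cd "${BLKTESTS_DIR:-$HOME/blktests}" && ./check srp/002) || return 1
        i=$((i + 1))
    done
}
```

With the hang present, the loop typically stops well before 30 iterations.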

I bisected and found that commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
for rxe tasks") looks like the trigger. When I revert it from kernel v6.5-rc7,
the hang symptom disappears. I'm not sure how the commit relates to the hang.
Comments are welcome.

[1]

...
[ 1670.489181] scsi 4:0:0:1: alua: Detached
[ 1670.985461] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-38: queued zerolength write
[ 1670.985702] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-36: queued zerolength write
[ 1670.985716] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-38 wc->status 5
[ 1670.985821] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-38
[ 1670.985824] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-36 wc->status 5
[ 1670.985909] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-34: queued zerolength write
[ 1670.985924] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-36
[ 1670.986104] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-34 wc->status 5
[ 1670.986244] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-34
[ 1671.049223] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-40: queued zerolength write
[ 1671.049588] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-40 wc->status 5
[ 1671.049626] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-40
[ 1844.873748] INFO: task kworker/0:1:9 blocked for more than 122 seconds.
[ 1844.877893]       Not tainted 6.5.0-rc7 #106
[ 1844.878903] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1844.880255] task:kworker/0:1     state:D stack:0     pid:9     ppid:2      flags:0x00004000
[ 1844.881830] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1844.882999] Call Trace:
[ 1844.883900]  <TASK>
[ 1844.884703]  __schedule+0x10ac/0x5e80
[ 1844.885609]  ? do_raw_spin_unlock+0x54/0x1f0
[ 1844.886569]  ? __pfx___schedule+0x10/0x10
[ 1844.887596]  ? lock_release+0x378/0x650
[ 1844.888431]  ? schedule+0x92/0x220
[ 1844.889232]  ? mark_held_locks+0x96/0xe0
[ 1844.890117]  schedule+0x133/0x220
[ 1844.890874]  bit_wait+0x17/0xe0
[ 1844.891619]  __wait_on_bit+0x66/0x180
[ 1844.892409]  ? __pfx_bit_wait+0x10/0x10
[ 1844.893192]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1844.894245]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1844.895225]  ? __pfx_wake_bit_function+0x10/0x10
[ 1844.896138]  ? find_held_lock+0x2d/0x110
[ 1844.897085]  writeback_single_inode+0xf9/0x3f0
[ 1844.898186]  sync_inode_metadata+0x91/0xd0
[ 1844.899036]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1844.900106]  ? lock_release+0x378/0x650
[ 1844.900988]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1844.901978]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1844.902964]  ext4_sync_file+0x469/0xb60
[ 1844.903859]  iomap_dio_complete+0x5d1/0x860
[ 1844.904828]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1844.905841]  iomap_dio_complete_work+0x52/0x80
[ 1844.906774]  process_one_work+0x898/0x14a0
[ 1844.907673]  ? __pfx_lock_acquire+0x10/0x10
[ 1844.908644]  ? __pfx_process_one_work+0x10/0x10
[ 1844.909693]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1844.910676]  worker_thread+0x100/0x12c0
[ 1844.911612]  ? __kthread_parkme+0xc1/0x1f0
[ 1844.912542]  ? __pfx_worker_thread+0x10/0x10
[ 1844.913584]  kthread+0x2ea/0x3c0
[ 1844.914465]  ? __pfx_kthread+0x10/0x10
[ 1844.915335]  ret_from_fork+0x30/0x70
[ 1844.916269]  ? __pfx_kthread+0x10/0x10
[ 1844.917308]  ret_from_fork_asm+0x1b/0x30
[ 1844.918243]  </TASK>
[ 1844.918998] INFO: task kworker/1:0:25 blocked for more than 122 seconds.
[ 1844.920107]       Not tainted 6.5.0-rc7 #106
[ 1844.921041] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1844.922262] task:kworker/1:0     state:D stack:0     pid:25    ppid:2      flags:0x00004000
[ 1844.923550] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1844.924598] Call Trace:
[ 1844.925407]  <TASK>
[ 1844.926194]  __schedule+0x10ac/0x5e80
[ 1844.927097]  ? do_raw_spin_unlock+0x54/0x1f0
[ 1844.928032]  ? __pfx___schedule+0x10/0x10
[ 1844.928937]  ? lock_release+0x378/0x650
[ 1844.929823]  ? schedule+0x92/0x220
[ 1844.930682]  ? mark_held_locks+0x96/0xe0
[ 1844.931579]  schedule+0x133/0x220
[ 1844.932411]  bit_wait+0x17/0xe0
[ 1844.933238]  __wait_on_bit+0x66/0x180
[ 1844.934107]  ? __pfx_bit_wait+0x10/0x10
[ 1844.934996]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1844.935956]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1844.936969]  ? __pfx_wake_bit_function+0x10/0x10
[ 1844.937942]  ? find_held_lock+0x2d/0x110
[ 1844.938891]  writeback_single_inode+0xf9/0x3f0
[ 1844.939836]  sync_inode_metadata+0x91/0xd0
[ 1844.940758]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1844.941730]  ? lock_release+0x378/0x650
[ 1844.942640]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1844.943647]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1844.944652]  ext4_sync_file+0x469/0xb60
[ 1844.945561]  iomap_dio_complete+0x5d1/0x860
[ 1844.946469]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1844.947417]  iomap_dio_complete_work+0x52/0x80
[ 1844.948358]  process_one_work+0x898/0x14a0
[ 1844.949284]  ? __pfx_lock_acquire+0x10/0x10
[ 1844.950204]  ? __pfx_process_one_work+0x10/0x10
[ 1844.951152]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1844.952094]  worker_thread+0x100/0x12c0
[ 1844.952998]  ? __pfx_worker_thread+0x10/0x10
[ 1844.953919]  kthread+0x2ea/0x3c0
[ 1844.954760]  ? __pfx_kthread+0x10/0x10
[ 1844.955669]  ret_from_fork+0x30/0x70
[ 1844.956550]  ? __pfx_kthread+0x10/0x10
[ 1844.957418]  ret_from_fork_asm+0x1b/0x30
[ 1844.958321]  </TASK>
[ 1844.959085] INFO: task kworker/1:1:49 blocked for more than 122 seconds.
[ 1844.960193]       Not tainted 6.5.0-rc7 #106
[ 1844.961122] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1844.962340] task:kworker/1:1     state:D stack:0     pid:49    ppid:2      flags:0x00004000
[ 1844.963619] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1844.964667] Call Trace:
[ 1844.965503]  <TASK>
[ 1844.966289]  __schedule+0x10ac/0x5e80
[ 1844.967207]  ? lock_acquire+0x1a9/0x4e0
[ 1844.968122]  ? __pfx___schedule+0x10/0x10
[ 1844.969034]  ? lock_release+0x378/0x650
[ 1844.969922]  ? schedule+0x92/0x220
[ 1844.970778]  ? mark_held_locks+0x96/0xe0
[ 1844.971674]  schedule+0x133/0x220
[ 1844.972526]  bit_wait+0x17/0xe0
[ 1844.973336]  __wait_on_bit+0x66/0x180
[ 1844.974206]  ? __pfx_bit_wait+0x10/0x10
[ 1844.975086]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1844.976046]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1844.977056]  ? __pfx_wake_bit_function+0x10/0x10
[ 1844.978007]  ? find_held_lock+0x2d/0x110
[ 1844.978917]  writeback_single_inode+0xf9/0x3f0
[ 1844.979865]  sync_inode_metadata+0x91/0xd0
[ 1844.980786]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1844.981765]  ? lock_release+0x378/0x650
[ 1844.982677]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1844.983687]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1844.984696]  ext4_sync_file+0x469/0xb60
[ 1844.985608]  iomap_dio_complete+0x5d1/0x860
[ 1844.986548]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1844.987484]  iomap_dio_complete_work+0x52/0x80
[ 1844.988435]  process_one_work+0x898/0x14a0
[ 1844.989352]  ? __pfx_lock_acquire+0x10/0x10
[ 1844.990275]  ? __pfx_process_one_work+0x10/0x10
[ 1844.991220]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1844.992164]  worker_thread+0x100/0x12c0
[ 1844.993065]  ? __kthread_parkme+0xc1/0x1f0
[ 1844.993977]  ? __pfx_worker_thread+0x10/0x10
[ 1844.994934]  kthread+0x2ea/0x3c0
[ 1844.995783]  ? __pfx_kthread+0x10/0x10
[ 1844.996670]  ret_from_fork+0x30/0x70
[ 1844.997544]  ? __pfx_kthread+0x10/0x10
[ 1844.998409]  ret_from_fork_asm+0x1b/0x30
[ 1844.999308]  </TASK>
[ 1845.000094] INFO: task kworker/0:2:74 blocked for more than 123 seconds.
[ 1845.001315]       Not tainted 6.5.0-rc7 #106
[ 1845.002326] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1845.003630] task:kworker/0:2     state:D stack:0     pid:74    ppid:2      flags:0x00004000
[ 1845.004991] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1845.006108] Call Trace:
[ 1845.006975]  <TASK>
[ 1845.007805]  __schedule+0x10ac/0x5e80
[ 1845.008781]  ? do_raw_spin_unlock+0x54/0x1f0
[ 1845.009780]  ? __pfx___schedule+0x10/0x10
[ 1845.010736]  ? lock_release+0x378/0x650
[ 1845.011666]  ? schedule+0x92/0x220
[ 1845.012579]  ? mark_held_locks+0x96/0xe0
[ 1845.013531]  schedule+0x133/0x220
[ 1845.014414]  bit_wait+0x17/0xe0
[ 1845.015287]  __wait_on_bit+0x66/0x180
[ 1845.016219]  ? __pfx_bit_wait+0x10/0x10
[ 1845.017164]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1845.018185]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1845.019269]  ? __pfx_wake_bit_function+0x10/0x10
[ 1845.020282]  ? find_held_lock+0x2d/0x110
[ 1845.021246]  writeback_single_inode+0xf9/0x3f0
[ 1845.022248]  sync_inode_metadata+0x91/0xd0
[ 1845.023222]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1845.024255]  ? lock_release+0x378/0x650
[ 1845.025207]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1845.026281]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1845.027347]  ext4_sync_file+0x469/0xb60
[ 1845.028302]  iomap_dio_complete+0x5d1/0x860
[ 1845.029275]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1845.030276]  iomap_dio_complete_work+0x52/0x80
[ 1845.031281]  process_one_work+0x898/0x14a0
[ 1845.032248]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.033199]  ? __pfx_process_one_work+0x10/0x10
[ 1845.034182]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.035188]  worker_thread+0x100/0x12c0
[ 1845.036138]  ? __pfx_worker_thread+0x10/0x10
[ 1845.037104]  kthread+0x2ea/0x3c0
[ 1845.037996]  ? __pfx_kthread+0x10/0x10
[ 1845.038923]  ret_from_fork+0x30/0x70
[ 1845.039840]  ? __pfx_kthread+0x10/0x10
[ 1845.040763]  ret_from_fork_asm+0x1b/0x30
[ 1845.041729]  </TASK>
[ 1845.042531] INFO: task kworker/3:2:169 blocked for more than 123 seconds.
[ 1845.043703]       Not tainted 6.5.0-rc7 #106
[ 1845.044780] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1845.046068] task:kworker/3:2     state:D stack:0     pid:169   ppid:2      flags:0x00004000
[ 1845.047400] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1845.048518] Call Trace:
[ 1845.049392]  <TASK>
[ 1845.050214]  __schedule+0x10ac/0x5e80
[ 1845.051172]  ? lock_acquire+0x1a9/0x4e0
[ 1845.052141]  ? __pfx___schedule+0x10/0x10
[ 1845.053086]  ? lock_release+0x378/0x650
[ 1845.054017]  ? schedule+0x92/0x220
[ 1845.054920]  ? mark_held_locks+0x96/0xe0
[ 1845.055866]  schedule+0x133/0x220
[ 1845.056761]  bit_wait+0x17/0xe0
[ 1845.057645]  __wait_on_bit+0x66/0x180
[ 1845.058573]  ? __pfx_bit_wait+0x10/0x10
[ 1845.059502]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1845.060528]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1845.061603]  ? __pfx_wake_bit_function+0x10/0x10
[ 1845.062604]  ? find_held_lock+0x2d/0x110
[ 1845.063548]  writeback_single_inode+0xf9/0x3f0
[ 1845.064564]  sync_inode_metadata+0x91/0xd0
[ 1845.065534]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1845.066552]  ? lock_release+0x378/0x650
[ 1845.067504]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1845.068557]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1845.069609]  ext4_sync_file+0x469/0xb60
[ 1845.070563]  iomap_dio_complete+0x5d1/0x860
[ 1845.071550]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1845.072543]  iomap_dio_complete_work+0x52/0x80
[ 1845.073547]  process_one_work+0x898/0x14a0
[ 1845.074518]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.075468]  ? __pfx_process_one_work+0x10/0x10
[ 1845.076456]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.077436]  worker_thread+0x100/0x12c0
[ 1845.078382]  ? __pfx_worker_thread+0x10/0x10
[ 1845.079354]  kthread+0x2ea/0x3c0
[ 1845.080230]  ? __pfx_kthread+0x10/0x10
[ 1845.081163]  ret_from_fork+0x30/0x70
[ 1845.082075]  ? __pfx_kthread+0x10/0x10
[ 1845.083014]  ret_from_fork_asm+0x1b/0x30
[ 1845.083957]  </TASK>
[ 1845.084756] INFO: task kworker/0:3:221 blocked for more than 123 seconds.
[ 1845.085927]       Not tainted 6.5.0-rc7 #106
[ 1845.086911] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1845.088205] task:kworker/0:3     state:D stack:0     pid:221   ppid:2      flags:0x00004000
[ 1845.089566] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1845.090635] Call Trace:
[ 1845.091503]  <TASK>
[ 1845.092318]  __schedule+0x10ac/0x5e80
[ 1845.093282]  ? do_raw_spin_unlock+0x54/0x1f0
[ 1845.094265]  ? __pfx___schedule+0x10/0x10
[ 1845.095200]  ? lock_release+0x378/0x650
[ 1845.096132]  ? schedule+0x92/0x220
[ 1845.097018]  ? mark_held_locks+0x96/0xe0
[ 1845.097959]  schedule+0x133/0x220
[ 1845.098863]  bit_wait+0x17/0xe0
[ 1845.099736]  __wait_on_bit+0x66/0x180
[ 1845.100649]  ? __pfx_bit_wait+0x10/0x10
[ 1845.101600]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1845.102606]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1845.103673]  ? __pfx_wake_bit_function+0x10/0x10
[ 1845.104685]  ? find_held_lock+0x2d/0x110
[ 1845.105633]  writeback_single_inode+0xf9/0x3f0
[ 1845.106625]  sync_inode_metadata+0x91/0xd0
[ 1845.107612]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1845.108635]  ? lock_release+0x378/0x650
[ 1845.109591]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1845.110645]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1845.111698]  ext4_sync_file+0x469/0xb60
[ 1845.112657]  iomap_dio_complete+0x5d1/0x860
[ 1845.113639]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1845.114625]  iomap_dio_complete_work+0x52/0x80
[ 1845.115616]  process_one_work+0x898/0x14a0
[ 1845.116582]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.117575]  ? __pfx_process_one_work+0x10/0x10
[ 1845.118573]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.119557]  worker_thread+0x100/0x12c0
[ 1845.120480]  ? __pfx_worker_thread+0x10/0x10
[ 1845.121453]  kthread+0x2ea/0x3c0
[ 1845.122339]  ? __pfx_kthread+0x10/0x10
[ 1845.123277]  ret_from_fork+0x30/0x70
[ 1845.124192]  ? __pfx_kthread+0x10/0x10
[ 1845.125131]  ret_from_fork_asm+0x1b/0x30
[ 1845.126085]  </TASK>
[ 1845.127043] INFO: task kworker/1:2:230 blocked for more than 123 seconds.
[ 1845.128574]       Not tainted 6.5.0-rc7 #106
[ 1845.129789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1845.131441] task:kworker/1:2     state:D stack:0     pid:230   ppid:2      flags:0x00004000
[ 1845.133125] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1845.134546] Call Trace:
[ 1845.135547]  <TASK>
[ 1845.136475]  __schedule+0x10ac/0x5e80
[ 1845.137599]  ? lock_acquire+0x1a9/0x4e0
[ 1845.138703]  ? __pfx___schedule+0x10/0x10
[ 1845.139859]  ? lock_release+0x378/0x650
[ 1845.140980]  ? schedule+0x92/0x220
[ 1845.142026]  ? mark_held_locks+0x96/0xe0
[ 1845.143161]  schedule+0x133/0x220
[ 1845.144196]  bit_wait+0x17/0xe0
[ 1845.145233]  __wait_on_bit+0x66/0x180
[ 1845.146262]  ? __pfx_bit_wait+0x10/0x10
[ 1845.147380]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1845.148650]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1845.149950]  ? __pfx_wake_bit_function+0x10/0x10
[ 1845.151181]  ? find_held_lock+0x2d/0x110
[ 1845.152288]  writeback_single_inode+0xf9/0x3f0
[ 1845.153474]  sync_inode_metadata+0x91/0xd0
[ 1845.154608]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1845.155857]  ? lock_release+0x378/0x650
[ 1845.156997]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1845.158309]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1845.159569]  ext4_sync_file+0x469/0xb60
[ 1845.160709]  iomap_dio_complete+0x5d1/0x860
[ 1845.161881]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1845.163086]  iomap_dio_complete_work+0x52/0x80
[ 1845.164269]  process_one_work+0x898/0x14a0
[ 1845.165367]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.166541]  ? __pfx_process_one_work+0x10/0x10
[ 1845.167706]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.168880]  worker_thread+0x100/0x12c0
[ 1845.170006]  ? __kthread_parkme+0xc1/0x1f0
[ 1845.171083]  ? __pfx_worker_thread+0x10/0x10
[ 1845.172302]  kthread+0x2ea/0x3c0
[ 1845.173350]  ? __pfx_kthread+0x10/0x10
[ 1845.174465]  ret_from_fork+0x30/0x70
[ 1845.175522]  ? __pfx_kthread+0x10/0x10
[ 1845.176616]  ret_from_fork_asm+0x1b/0x30
[ 1845.177754]  </TASK>
[ 1845.178624] INFO: task kworker/2:3:291 blocked for more than 123 seconds.
[ 1845.180123]       Not tainted 6.5.0-rc7 #106
[ 1845.181306] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1845.182914] task:kworker/2:3     state:D stack:0     pid:291   ppid:2      flags:0x00004000
[ 1845.184626] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1845.186012] Call Trace:
[ 1845.187004]  <TASK>
[ 1845.187939]  __schedule+0x10ac/0x5e80
[ 1845.189072]  ? do_raw_spin_unlock+0x54/0x1f0
[ 1845.190177]  ? __pfx___schedule+0x10/0x10
[ 1845.191356]  ? lock_release+0x378/0x650
[ 1845.192421]  ? schedule+0x92/0x220
[ 1845.193501]  ? mark_held_locks+0x96/0xe0
[ 1845.194535]  schedule+0x133/0x220
[ 1845.195595]  bit_wait+0x17/0xe0
[ 1845.196603]  __wait_on_bit+0x66/0x180
[ 1845.197697]  ? __pfx_bit_wait+0x10/0x10
[ 1845.198820]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1845.200061]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1845.201315]  ? __pfx_wake_bit_function+0x10/0x10
[ 1845.202522]  ? find_held_lock+0x2d/0x110
[ 1845.203679]  writeback_single_inode+0xf9/0x3f0
[ 1845.204885]  sync_inode_metadata+0x91/0xd0
[ 1845.205943]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1845.207190]  ? lock_release+0x378/0x650
[ 1845.208325]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1845.209581]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1845.210883]  ext4_sync_file+0x469/0xb60
[ 1845.212022]  iomap_dio_complete+0x5d1/0x860
[ 1845.213177]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1845.214315]  iomap_dio_complete_work+0x52/0x80
[ 1845.215547]  process_one_work+0x898/0x14a0
[ 1845.216714]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.217887]  ? __pfx_process_one_work+0x10/0x10
[ 1845.219026]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.220280]  worker_thread+0x100/0x12c0
[ 1845.221386]  ? __kthread_parkme+0xc1/0x1f0
[ 1845.222569]  ? __pfx_worker_thread+0x10/0x10
[ 1845.223743]  kthread+0x2ea/0x3c0
[ 1845.224788]  ? __pfx_kthread+0x10/0x10
[ 1845.225908]  ret_from_fork+0x30/0x70
[ 1845.226996]  ? __pfx_kthread+0x10/0x10
[ 1845.228110]  ret_from_fork_asm+0x1b/0x30
[ 1845.229254]  </TASK>
[ 1845.230191] INFO: task kworker/1:3:322 blocked for more than 123 seconds.
[ 1845.231562]       Not tainted 6.5.0-rc7 #106
[ 1845.232622] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1845.233992] task:kworker/1:3     state:D stack:0     pid:322   ppid:2      flags:0x00004000
[ 1845.235439] Workqueue: dio/dm-1 iomap_dio_complete_work
[ 1845.236681] Call Trace:
[ 1845.237629]  <TASK>
[ 1845.238526]  __schedule+0x10ac/0x5e80
[ 1845.239559]  ? do_raw_spin_unlock+0x54/0x1f0
[ 1845.240622]  ? __pfx___schedule+0x10/0x10
[ 1845.241639]  ? lock_release+0x378/0x650
[ 1845.242650]  ? schedule+0x92/0x220
[ 1845.243654]  ? mark_held_locks+0x96/0xe0
[ 1845.244707]  schedule+0x133/0x220
[ 1845.245657]  bit_wait+0x17/0xe0
[ 1845.246631]  __wait_on_bit+0x66/0x180
[ 1845.247601]  ? __pfx_bit_wait+0x10/0x10
[ 1845.248630]  __inode_wait_for_writeback+0x12b/0x1b0
[ 1845.249743]  ? __pfx___inode_wait_for_writeback+0x10/0x10
[ 1845.250948]  ? __pfx_wake_bit_function+0x10/0x10
[ 1845.252021]  ? find_held_lock+0x2d/0x110
[ 1845.253043]  writeback_single_inode+0xf9/0x3f0
[ 1845.254123]  sync_inode_metadata+0x91/0xd0
[ 1845.255205]  ? __pfx_sync_inode_metadata+0x10/0x10
[ 1845.256294]  ? lock_release+0x378/0x650
[ 1845.257332]  ? file_check_and_advance_wb_err+0xb5/0x230
[ 1845.258542]  generic_buffers_fsync_noflush+0x1bf/0x270
[ 1845.259701]  ext4_sync_file+0x469/0xb60
[ 1845.260765]  iomap_dio_complete+0x5d1/0x860
[ 1845.261790]  ? __pfx_aio_complete_rw+0x10/0x10
[ 1845.262907]  iomap_dio_complete_work+0x52/0x80
[ 1845.263961]  process_one_work+0x898/0x14a0
[ 1845.265025]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.266074]  ? __pfx_process_one_work+0x10/0x10
[ 1845.267197]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.268305]  worker_thread+0x100/0x12c0
[ 1845.269328]  ? __kthread_parkme+0xc1/0x1f0
[ 1845.270368]  ? __pfx_worker_thread+0x10/0x10
[ 1845.271457]  kthread+0x2ea/0x3c0
[ 1845.272422]  ? __pfx_kthread+0x10/0x10
[ 1845.273443]  ret_from_fork+0x30/0x70
[ 1845.274438]  ? __pfx_kthread+0x10/0x10
[ 1845.275475]  ret_from_fork_asm+0x1b/0x30
[ 1845.276555]  </TASK>
[ 1845.277433] INFO: task kworker/u8:7:2757 blocked for more than 123 seconds.
[ 1845.278808]       Not tainted 6.5.0-rc7 #106
[ 1845.279897] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1845.281313] task:kworker/u8:7    state:D stack:0     pid:2757  ppid:2      flags:0x00004000
[ 1845.282753] Workqueue: writeback wb_workfn (flush-253:1)
[ 1845.283993] Call Trace:
[ 1845.284945]  <TASK>
[ 1845.285853]  __schedule+0x10ac/0x5e80
[ 1845.286872]  ? lock_acquire+0x1b9/0x4e0
[ 1845.287917]  ? __pfx___schedule+0x10/0x10
[ 1845.288934]  ? __blk_flush_plug+0x27a/0x450
[ 1845.289979]  ? inode_sleep_on_writeback+0xf4/0x160
[ 1845.291131]  schedule+0x133/0x220
[ 1845.292052]  inode_sleep_on_writeback+0x14e/0x160
[ 1845.293130]  ? __pfx_inode_sleep_on_writeback+0x10/0x10
[ 1845.294289]  ? __pfx_lock_release+0x10/0x10
[ 1845.295362]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 1845.296574]  ? __pfx___writeback_inodes_wb+0x10/0x10
[ 1845.297750]  wb_writeback+0x330/0x7a0
[ 1845.298800]  ? __pfx_wb_writeback+0x10/0x10
[ 1845.299876]  ? get_nr_dirty_inodes+0xc7/0x170
[ 1845.300988]  wb_workfn+0x7a1/0xcc0
[ 1845.302019]  ? __pfx_wb_workfn+0x10/0x10
[ 1845.303071]  ? lock_acquire+0x1b9/0x4e0
[ 1845.304127]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.305232]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.306341]  process_one_work+0x898/0x14a0
[ 1845.307377]  ? __pfx_lock_acquire+0x10/0x10
[ 1845.308410]  ? __pfx_process_one_work+0x10/0x10
[ 1845.309551]  ? __pfx_do_raw_spin_lock+0x10/0x10
[ 1845.310678]  worker_thread+0x100/0x12c0
[ 1845.311702]  ? __kthread_parkme+0xc1/0x1f0
[ 1845.312778]  ? __pfx_worker_thread+0x10/0x10
[ 1845.313864]  kthread+0x2ea/0x3c0
[ 1845.314848]  ? __pfx_kthread+0x10/0x10
[ 1845.315885]  ret_from_fork+0x30/0x70
[ 1845.316879]  ? __pfx_kthread+0x10/0x10
[ 1845.317885]  ret_from_fork_asm+0x1b/0x30
[ 1845.318896]  </TASK>
[ 1845.319767] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
[ 1845.321587] 
               Showing all locks held in the system:
[ 1845.323498] 2 locks held by kworker/0:1/9:
[ 1845.324569]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.326209]  #1: ffff888100877d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.327999] 1 lock held by rcu_tasks_kthre/13:
[ 1845.329153]  #0: ffffffffa8c7b010 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xde0
[ 1845.330838] 1 lock held by rcu_tasks_rude_/14:
[ 1845.332043]  #0: ffffffffa8c7ad70 (rcu_tasks_rude.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xde0
[ 1845.333713] 1 lock held by rcu_tasks_trace/15:
[ 1845.334939]  #0: ffffffffa8c7aa70 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xde0
[ 1845.336716] 2 locks held by kworker/1:0/25:
[ 1845.337890]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.339639]  #1: ffff888100977d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.341440] 1 lock held by khungtaskd/43:
[ 1845.342669]  #0: ffffffffa8c7bbe0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x340
[ 1845.344347] 2 locks held by kworker/1:1/49:
[ 1845.345577]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.347382]  #1: ffff88810164fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.349278] 2 locks held by kworker/0:2/74:
[ 1845.350547]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.352400]  #1: ffff88811c8ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.354301] 2 locks held by kworker/3:2/169:
[ 1845.355618]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.357472]  #1: ffff88811f0e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.359445] 2 locks held by kworker/0:3/221:
[ 1845.360862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.362800]  #1: ffff888126567d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.364804] 2 locks held by kworker/1:2/230:
[ 1845.366259]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.368270]  #1: ffff8881285f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.370338] 2 locks held by kworker/2:3/291:
[ 1845.371807]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.373789]  #1: ffff88812a1f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.375949] 2 locks held by kworker/1:3/322:
[ 1845.377464]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.379533]  #1: ffff888105a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.381731] 1 lock held by in:imjournal/663:
[ 1845.383335] 2 locks held by kworker/u8:7/2757:
[ 1845.384953]  #0: ffff888101191938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.387067]  #1: ffff88813542fd98 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.389320] 2 locks held by kworker/3:4/2759:
[ 1845.390985]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.393164]  #1: ffff888122ddfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.395410] 2 locks held by kworker/0:4/2760:
[ 1845.397073]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.399329]  #1: ffff888107dbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.401670] 2 locks held by kworker/1:5/2762:
[ 1845.403414]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.405626]  #1: ffff888105fbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.407962] 2 locks held by kworker/1:6/2764:
[ 1845.409693]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.411996]  #1: ffff888134647d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.414335] 2 locks held by kworker/3:5/2765:
[ 1845.416107]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.418376]  #1: ffff888128effd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.420758] 2 locks held by kworker/1:7/2767:
[ 1845.422532]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.424711]  #1: ffff88810fcefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.427082] 2 locks held by kworker/1:8/2768:
[ 1845.428790]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.431080]  #1: ffff88812a42fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.433495] 2 locks held by kworker/1:9/2770:
[ 1845.435192]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.437507]  #1: ffff888135477d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.439982] 2 locks held by kworker/3:6/2771:
[ 1845.441737]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.444015]  #1: ffff888127c6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.446448] 2 locks held by kworker/3:10/2776:
[ 1845.448255]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.450561]  #1: ffff888129fafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.452971] 2 locks held by kworker/3:11/2777:
[ 1845.454703]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.457029]  #1: ffff8881056b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.459377] 2 locks held by kworker/2:8/2779:
[ 1845.461157]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.463483]  #1: ffff88812e997d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.465906] 2 locks held by kworker/3:13/2780:
[ 1845.467678]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.469988]  #1: ffff888128d57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.472395] 2 locks held by kworker/3:14/2781:
[ 1845.474175]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.476468]  #1: ffff88812c9bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.478896] 2 locks held by kworker/3:15/2782:
[ 1845.480638]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.482919]  #1: ffff888104f27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.485299] 2 locks held by kworker/3:17/2784:
[ 1845.487097]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.489383]  #1: ffff88812224fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.491737] 2 locks held by kworker/3:18/2785:
[ 1845.493480]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.495790]  #1: ffff8881361afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.498159] 2 locks held by kworker/3:19/2786:
[ 1845.499941]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.502266]  #1: ffff888127e67d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.504618] 2 locks held by kworker/3:22/2790:
[ 1845.506418]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.508708]  #1: ffff888130d4fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.511121] 2 locks held by kworker/2:10/2791:
[ 1845.512938]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.515179]  #1: ffff888113127d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.517588] 2 locks held by kworker/3:23/2793:
[ 1845.519372]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.521683]  #1: ffff88812a89fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.524075] 2 locks held by kworker/3:24/2794:
[ 1845.525876]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.528115]  #1: ffff888129a1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.530515] 2 locks held by kworker/3:25/2795:
[ 1845.532283]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.534610]  #1: ffff88812ebb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.537020] 2 locks held by kworker/3:26/2796:
[ 1845.538809]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.541117]  #1: ffff888119577d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.543506] 2 locks held by kworker/1:11/2797:
[ 1845.545286]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.547624]  #1: ffff88813716fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.550018] 2 locks held by kworker/3:27/2798:
[ 1845.551827]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.554139]  #1: ffff888136747d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.556535] 2 locks held by kworker/1:13/2800:
[ 1845.558325]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.560657]  #1: ffff888131687d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.563055] 2 locks held by kworker/1:15/2802:
[ 1845.564867]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.567176]  #1: ffff8881342d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.569574] 2 locks held by kworker/1:17/2804:
[ 1845.571352]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.573643]  #1: ffff888132137d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.576005] 2 locks held by kworker/1:18/2805:
[ 1845.577768]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.580107]  #1: ffff888134a5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.582512] 2 locks held by kworker/1:19/2806:
[ 1845.584307]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.586598]  #1: ffff888135b87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.588971] 2 locks held by kworker/1:20/2807:
[ 1845.590771]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.593039]  #1: ffff88810513fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.595437] 2 locks held by kworker/1:22/2809:
[ 1845.597257]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.599584]  #1: ffff8881397bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.601975] 2 locks held by kworker/1:23/2810:
[ 1845.603756]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.606073]  #1: ffff888139807d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.608442] 2 locks held by kworker/3:30/2814:
[ 1845.610262]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.612547]  #1: ffff888101a27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.614937] 2 locks held by kworker/2:13/2815:
[ 1845.616711]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.618912]  #1: ffff888120087d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.621317] 2 locks held by kworker/2:15/2817:
[ 1845.623090]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.625381]  #1: ffff88812258fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.627743] 2 locks held by kworker/2:16/2818:
[ 1845.629551]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.631844]  #1: ffff888133d47d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.634251] 2 locks held by kworker/2:19/2821:
[ 1845.636011]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.638324]  #1: ffff88812ea37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.640711] 2 locks held by kworker/2:20/2822:
[ 1845.642514]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.644824]  #1: ffff88813abd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.647217] 2 locks held by kworker/2:21/2823:
[ 1845.649025]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.651351]  #1: ffff88813454fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.653690] 2 locks held by kworker/2:22/2824:
[ 1845.655501]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.657763]  #1: ffff888132e5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.660177] 2 locks held by kworker/3:31/2825:
[ 1845.661943]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.664289]  #1: ffff888138177d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.666651] 2 locks held by kworker/3:32/2826:
[ 1845.668418]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.670748]  #1: ffff88812a26fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.673018] 2 locks held by kworker/3:38/2832:
[ 1845.674821]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.677132]  #1: ffff8881319b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.679533] 2 locks held by kworker/2:24/2834:
[ 1845.681338]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.683668]  #1: ffff8881185efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.686081] 2 locks held by kworker/2:25/2835:
[ 1845.687877]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.690160]  #1: ffff8881299a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.692548] 2 locks held by kworker/2:27/2837:
[ 1845.694316]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.696589]  #1: ffff888105ae7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.698995] 2 locks held by kworker/2:28/2838:
[ 1845.700799]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.703139]  #1: ffff888133fd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.705549] 2 locks held by kworker/2:30/2840:
[ 1845.707341]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.709638]  #1: ffff888127627d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.712057] 2 locks held by kworker/2:31/2841:
[ 1845.713853]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.716160]  #1: ffff88810a8d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.718564] 2 locks held by kworker/2:34/2845:
[ 1845.720341]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.722653]  #1: ffff888134107d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.725061] 2 locks held by kworker/3:40/2847:
[ 1845.726873]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.729184]  #1: ffff88812f5cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.731588] 2 locks held by kworker/2:36/2848:
[ 1845.733384]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.735681]  #1: ffff8881184efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.738077] 2 locks held by kworker/2:37/2851:
[ 1845.739855]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.742191]  #1: ffff88813b89fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.744532] 2 locks held by kworker/1:24/2852:
[ 1845.746338]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.748635]  #1: ffff8881275c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.751036] 2 locks held by kworker/1:26/2854:
[ 1845.752810]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.755139]  #1: ffff88812238fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.757498] 2 locks held by kworker/1:28/2856:
[ 1845.759286]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.761628]  #1: ffff888122f2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.763996] 2 locks held by kworker/1:29/2857:
[ 1845.765766]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.768067]  #1: ffff88812215fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.770425] 2 locks held by kworker/1:30/2858:
[ 1845.772237]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.774564]  #1: ffff888137177d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.776959] 2 locks held by kworker/1:32/2860:
[ 1845.778767]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.781058]  #1: ffff88812a6bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.783435] 2 locks held by kworker/1:34/2862:
[ 1845.785261]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.787605]  #1: ffff888119487d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.790019] 2 locks held by kworker/1:35/2863:
[ 1845.791759]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.794093]  #1: ffff888135497d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.796540] 2 locks held by kworker/1:37/2865:
[ 1845.798278]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.800636]  #1: ffff8881053b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.803035] 2 locks held by kworker/2:38/2866:
[ 1845.804808]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.807150]  #1: ffff88810533fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.809571] 2 locks held by kworker/2:39/2867:
[ 1845.811371]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.813698]  #1: ffff888119d57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.816104] 2 locks held by kworker/2:41/2869:
[ 1845.817858]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.820217]  #1: ffff888119d7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.822579] 2 locks held by kworker/2:46/2874:
[ 1845.824384]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.826691]  #1: ffff888106be7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.829051] 2 locks held by kworker/2:49/2878:
[ 1845.830865]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.833194]  #1: ffff88813af5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.835616] 2 locks held by kworker/2:51/2881:
[ 1845.837390]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.839737]  #1: ffff888122957d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.842116] 2 locks held by kworker/2:52/2882:
[ 1845.843933]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.846254]  #1: ffff888123fe7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.848710] 2 locks held by kworker/2:53/2883:
[ 1845.850464]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.852749]  #1: ffff88812282fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.855191] 2 locks held by kworker/2:54/2884:
[ 1845.856982]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.859288]  #1: ffff88813baffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.861684] 2 locks held by kworker/2:55/2885:
[ 1845.863494]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.865779]  #1: ffff888111c97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.868184] 2 locks held by kworker/2:56/2886:
[ 1845.869955]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.872223]  #1: ffff888111c8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.874666] 2 locks held by kworker/1:40/2888:
[ 1845.876443]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.878794]  #1: ffff88811b197d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.881130] 2 locks held by kworker/0:5/2889:
[ 1845.882854]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.885148]  #1: ffff888118247d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.887535] 2 locks held by kworker/2:58/2890:
[ 1845.889341]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.891495]  #1: ffff88810cf57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.893905] 2 locks held by kworker/1:41/2897:
[ 1845.895655]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.897934]  #1: ffff888137987d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.900296] 2 locks held by kworker/2:61/2898:
[ 1845.902071]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.904422]  #1: ffff88811008fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.906816] 2 locks held by kworker/0:7/2899:
[ 1845.908574]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.910857]  #1: ffff88810530fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.913250] 2 locks held by kworker/2:62/2900:
[ 1845.915027]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.917326]  #1: ffff88812eccfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.919696] 2 locks held by kworker/0:8/2901:
[ 1845.921496]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.923773]  #1: ffff888139277d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.926133] 2 locks held by kworker/0:9/2903:
[ 1845.927908]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.930231]  #1: ffff888105f27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.932617] 2 locks held by kworker/1:43/2905:
[ 1845.934393]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.936659]  #1: ffff88810629fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.939044] 2 locks held by kworker/1:44/2907:
[ 1845.940855]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.943143]  #1: ffff88811d127d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.945543] 2 locks held by kworker/0:10/2908:
[ 1845.947309]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.949590]  #1: ffff8881361b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.952001] 2 locks held by kworker/1:45/2909:
[ 1845.953773]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.956004]  #1: ffff888121147d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.958426] 2 locks held by kworker/2:65/2910:
[ 1845.960240]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.962547]  #1: ffff88810c597d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.964935] 2 locks held by kworker/1:46/2911:
[ 1845.966701]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.968990]  #1: ffff88812b2ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.971313] 2 locks held by kworker/1:47/2913:
[ 1845.973100]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.975451]  #1: ffff88813f79fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.977880] 2 locks held by kworker/0:11/2916:
[ 1845.979682]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.981949]  #1: ffff88811d7e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.984317] 2 locks held by kworker/2:68/2917:
[ 1845.986087]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.988369]  #1: ffff88812c017d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.990715] 2 locks held by kworker/1:50/2920:
[ 1845.992496]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1845.994769]  #1: ffff888123fc7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1845.997095] 2 locks held by kworker/0:12/2921:
[ 1845.998885]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.001218]  #1: ffff8881202f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.003603] 2 locks held by kworker/1:51/2923:
[ 1846.005405]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.007715]  #1: ffff8881114ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.010124] 2 locks held by kworker/2:71/2924:
[ 1846.011907]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.014223]  #1: ffff88812ef5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.016615] 2 locks held by kworker/2:73/2928:
[ 1846.018367]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.020712]  #1: ffff888117667d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.023000] 2 locks held by kworker/2:74/2931:
[ 1846.024774]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.027108]  #1: ffff88811322fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.029466] 2 locks held by kworker/0:14/2932:
[ 1846.031284]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.033576]  #1: ffff88810fd5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.035945] 2 locks held by kworker/2:75/2933:
[ 1846.037730]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.040007]  #1: ffff8881367a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.042335] 2 locks held by kworker/0:16/2935:
[ 1846.044121]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.046392]  #1: ffff88810c55fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.048757] 2 locks held by kworker/0:17/2937:
[ 1846.050524]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.052871]  #1: ffff8881368a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.055241] 2 locks held by kworker/2:77/2938:
[ 1846.056990]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.059306]  #1: ffff888122217d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.061588] 2 locks held by kworker/2:78/2940:
[ 1846.063332]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.065636]  #1: ffff8881212a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.068005] 2 locks held by kworker/1:56/2941:
[ 1846.069793]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.072091]  #1: ffff8881192efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.074460] 2 locks held by kworker/2:79/2942:
[ 1846.076276]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.078593]  #1: ffff88811b187d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.080997] 2 locks held by kworker/1:57/2943:
[ 1846.082766]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.085099]  #1: ffff888139457d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.087514] 2 locks held by kworker/2:80/2944:
[ 1846.089313]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.091623]  #1: ffff888134697d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.094002] 2 locks held by kworker/1:59/2948:
[ 1846.095792]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.098122]  #1: ffff888107d27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.100558] 2 locks held by kworker/2:82/2949:
[ 1846.102361]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.104650]  #1: ffff88812810fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.107035] 2 locks held by kworker/0:19/2950:
[ 1846.108804]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.111121]  #1: ffff8881313f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.113499] 2 locks held by kworker/1:60/2951:
[ 1846.115278]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.117586]  #1: ffff88810d01fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.120000] 2 locks held by kworker/2:84/2954:
[ 1846.121772]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.124105]  #1: ffff88812618fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.126532] 2 locks held by kworker/0:21/2955:
[ 1846.128332]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.130576]  #1: ffff888107c6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.132910] 2 locks held by kworker/0:24/2960:
[ 1846.134696]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.136967]  #1: ffff888100cafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.139353] 2 locks held by kworker/0:25/2962:
[ 1846.141106]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.143454]  #1: ffff888111267d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.145841] 2 locks held by kworker/2:88/2963:
[ 1846.147625]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.149903]  #1: ffff888134d0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.152280] 2 locks held by kworker/3:46/2964:
[ 1846.154068]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.156371]  #1: ffff88810f7afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.158751] 2 locks held by kworker/3:47/2967:
[ 1846.160398]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.162653]  #1: ffff88813c7b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.165045] 2 locks held by kworker/0:28/2968:
[ 1846.166830]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.169156]  #1: ffff88812dc77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.171558] 2 locks held by kworker/0:29/2970:
[ 1846.173363]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.175655]  #1: ffff88812892fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.178081] 2 locks held by kworker/0:30/2971:
[ 1846.179861]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.182198]  #1: ffff88812dfd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.184562] 2 locks held by kworker/0:31/2973:
[ 1846.186364]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.188663]  #1: ffff8881304ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.191053] 2 locks held by kworker/3:50/2974:
[ 1846.192850]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.195153]  #1: ffff88811fa6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.197534] 2 locks held by kworker/3:51/2975:
[ 1846.199290]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.201640]  #1: ffff888130c0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.204059] 2 locks held by kworker/2:90/2978:
[ 1846.205833]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.208134]  #1: ffff888138457d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.210548] 2 locks held by kworker/2:94/2983:
[ 1846.212355]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.214684]  #1: ffff88813c5b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.217078] 2 locks held by kworker/0:33/2984:
[ 1846.218870]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.221180]  #1: ffff888118337d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1846.223616] 2 locks held by kworker/0:34/2987:
[ 1846.225402]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1846.227712]  #1: ffff88812b827d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[... lockdep reports of the same shape — "2 locks held": #0 (wq_completion)dio/dm-1 and #1 (work_completion)(&dio->aio.work), both taken via process_one_work — repeat for 126 more kworker threads (PIDs 2988-3209); trimmed here for brevity ...]
[ 1847.040327] 2 locks held by kworker/0:115/3210:
[ 1847.042093]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.044392]  #1: ffff88813ee97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.046790] 2 locks held by kworker/3:84/3214:
[ 1847.048447]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.050752]  #1: ffff88811624fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.053143] 2 locks held by kworker/3:85/3215:
[ 1847.054922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.057177]  #1: ffff88811625fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.059600] 2 locks held by kworker/3:86/3216:
[ 1847.061342]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.063655]  #1: ffff888116267d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.066031] 2 locks held by kworker/3:87/3217:
[ 1847.067846]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.070173]  #1: ffff888116277d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.072605] 2 locks held by kworker/3:88/3218:
[ 1847.074322]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.076675]  #1: ffff88811627fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.079050] 2 locks held by kworker/3:90/3220:
[ 1847.080866]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.083139]  #1: ffff88811629fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.085557] 2 locks held by kworker/0:116/3224:
[ 1847.087325]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.089540]  #1: ffff8881162cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.091896] 2 locks held by kworker/0:117/3225:
[ 1847.093708]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.095968]  #1: ffff8881162dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.098385] 2 locks held by kworker/0:120/3228:
[ 1847.100165]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.102497]  #1: ffff8881162ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.104912] 2 locks held by kworker/0:122/3230:
[ 1847.106730]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.109025]  #1: ffff888116617d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.111433] 2 locks held by kworker/0:124/3232:
[ 1847.113261]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.115545]  #1: ffff888116637d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.117891] 2 locks held by kworker/0:125/3233:
[ 1847.119688]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.121958]  #1: ffff88811664fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.124367] 2 locks held by kworker/0:126/3234:
[ 1847.126165]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.128485]  #1: ffff888116657d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.130883] 2 locks held by kworker/0:127/3235:
[ 1847.132680]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.134962]  #1: ffff888116667d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.137353] 2 locks held by kworker/0:128/3236:
[ 1847.139112]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.141422]  #1: ffff88811666fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.143839] 2 locks held by kworker/0:129/3237:
[ 1847.145625]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.147894]  #1: ffff88811667fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.150250] 2 locks held by kworker/0:130/3238:
[ 1847.152017]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.154368]  #1: ffff888116687d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.156773] 2 locks held by kworker/0:135/3243:
[ 1847.158555]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.160730]  #1: ffff8881166c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.163142] 2 locks held by kworker/0:136/3244:
[ 1847.164910]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.167258]  #1: ffff8881166cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.169669] 2 locks held by kworker/3:95/3246:
[ 1847.171438]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.173713]  #1: ffff8881166efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.176070] 2 locks held by kworker/3:97/3248:
[ 1847.177888]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.180173]  #1: ffff88813ef07d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.182623] 2 locks held by kworker/0:137/3249:
[ 1847.184437]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.186710]  #1: ffff88813ef1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.189078] 2 locks held by kworker/3:99/3251:
[ 1847.190888]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.193149]  #1: ffff88813ef37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.195552] 2 locks held by kworker/3:102/3254:
[ 1847.197351]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.199679]  #1: ffff88813ef57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.202095] 2 locks held by kworker/3:104/3256:
[ 1847.203830]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.206136]  #1: ffff88813ef6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.208511] 2 locks held by kworker/3:107/3259:
[ 1847.210327]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.212667]  #1: ffff88813ef9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.215030] 2 locks held by kworker/3:109/3261:
[ 1847.216850]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.219130]  #1: ffff88813efb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.221522] 2 locks held by kworker/3:110/3262:
[ 1847.223298]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.225646]  #1: ffff88813efbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.227959] 2 locks held by kworker/3:112/3264:
[ 1847.229749]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.232022]  #1: ffff88811600fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.234410] 2 locks held by kworker/1:85/3265:
[ 1847.236204]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.238532]  #1: ffff88811601fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.240856] 2 locks held by kworker/1:86/3266:
[ 1847.242656]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.244925]  #1: ffff888116027d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.247269] 2 locks held by kworker/1:87/3267:
[ 1847.249067]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.251384]  #1: ffff88811607fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.253760] 2 locks held by kworker/1:88/3268:
[ 1847.255546]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.257786]  #1: ffff888116087d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.260211] 2 locks held by kworker/1:89/3269:
[ 1847.262017]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.264285]  #1: ffff888116097d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.266675] 2 locks held by kworker/0:138/3270:
[ 1847.268432]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.270712]  #1: ffff8881393cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.273097] 2 locks held by kworker/0:139/3272:
[ 1847.274906]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.277199]  #1: ffff8881160e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.279629] 2 locks held by kworker/1:91/3273:
[ 1847.281402]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.283688]  #1: ffff8881160f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.286055] 2 locks held by kworker/1:92/3275:
[ 1847.287859]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.290142]  #1: ffff88811610fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.292499] 2 locks held by kworker/1:93/3277:
[ 1847.294276]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.296572]  #1: ffff888116167d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.298959] 2 locks held by kworker/0:143/3280:
[ 1847.300741]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.303053]  #1: ffff88811618fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.305458] 2 locks held by kworker/0:144/3282:
[ 1847.307260]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.309562]  #1: ffff8881161a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.311971] 2 locks held by kworker/1:99/3289:
[ 1847.313761]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.316077]  #1: ffff888116407d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.318447] 2 locks held by kworker/1:100/3291:
[ 1847.320274]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.322556]  #1: ffff88811641fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.324976] 2 locks held by kworker/0:149/3292:
[ 1847.326775]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.329064]  #1: ffff88811642fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.331461] 2 locks held by kworker/0:150/3294:
[ 1847.333272]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.335568]  #1: ffff888116447d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.337986] 2 locks held by kworker/1:102/3295:
[ 1847.339772]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.342014]  #1: ffff88811644fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.344402] 2 locks held by kworker/0:151/3296:
[ 1847.346193]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.348481]  #1: ffff88811645fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.350902] 2 locks held by kworker/1:103/3297:
[ 1847.352678]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.354987]  #1: ffff888116467d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.357395] 2 locks held by kworker/0:152/3298:
[ 1847.359178]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.361441]  #1: ffff8881164afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.363841] 2 locks held by kworker/1:104/3299:
[ 1847.365642]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.367965]  #1: ffff8881164bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.370355] 2 locks held by kworker/0:154/3301:
[ 1847.372178]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.374487]  #1: ffff8881164d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.376872] 2 locks held by kworker/0:155/3302:
[ 1847.378658]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.380974]  #1: ffff8881164e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.383376] 2 locks held by kworker/0:156/3303:
[ 1847.385153]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.387476]  #1: ffff8881164efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.389924] 2 locks held by kworker/0:157/3304:
[ 1847.391724]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.394012]  #1: ffff888116507d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.396410] 2 locks held by kworker/0:158/3306:
[ 1847.398175]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.400498]  #1: ffff888124897d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.402864] 2 locks held by kworker/2:188/3307:
[ 1847.404675]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.406978]  #1: ffff88811f0afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.409328] 2 locks held by kworker/0:159/3310:
[ 1847.411083]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.413366]  #1: ffff888129117d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.415716] 2 locks held by kworker/0:160/3312:
[ 1847.417507]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.419764]  #1: ffff888105837d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.422101] 2 locks held by kworker/0:161/3314:
[ 1847.423922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.426202]  #1: ffff88813d44fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.428598] 2 locks held by kworker/0:162/3316:
[ 1847.430401]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.432698]  #1: ffff888121b37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.435047] 2 locks held by kworker/2:194/3317:
[ 1847.436834]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.439139]  #1: ffff88812ba5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.441506] 2 locks held by kworker/0:163/3318:
[ 1847.443320]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.445432]  #1: ffff88812923fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.447767] 2 locks held by kworker/2:197/3321:
[ 1847.449571]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.451878]  #1: ffff88811ea3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.454252] 2 locks held by kworker/2:199/3323:
[ 1847.456041]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.458354]  #1: ffff888113057d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.460746] 2 locks held by kworker/2:202/3326:
[ 1847.462556]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.464785]  #1: ffff8881330b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.467097] 2 locks held by kworker/3:113/3328:
[ 1847.468870]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.471219]  #1: ffff888122eb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.473651] 2 locks held by kworker/1:105/3329:
[ 1847.475411]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.477759]  #1: ffff888127057d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.480120] 2 locks held by kworker/2:204/3331:
[ 1847.481886]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.484226]  #1: ffff888117757d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.486641] 2 locks held by kworker/2:206/3333:
[ 1847.488454]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.490716]  #1: ffff88812be7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.493118] 2 locks held by kworker/2:209/3336:
[ 1847.494924]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.497212]  #1: ffff88811778fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.499623] 2 locks held by kworker/2:210/3337:
[ 1847.501424]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.503699]  #1: ffff8881304efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.506080] 2 locks held by kworker/2:213/3340:
[ 1847.507880]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.510224]  #1: ffff88811ffbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.512684] 2 locks held by kworker/2:220/3347:
[ 1847.514454]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.516795]  #1: ffff8881165c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.519190] 2 locks held by kworker/1:106/3348:
[ 1847.520945]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.523275]  #1: ffff8881165cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.525698] 2 locks held by kworker/1:108/3350:
[ 1847.527508]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.529801]  #1: ffff8881165e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.532214] 2 locks held by kworker/1:109/3351:
[ 1847.534035]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.536369]  #1: ffff8881165f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.538787] 2 locks held by kworker/1:110/3352:
[ 1847.540550]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.542874]  #1: ffff888116a37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.545280] 2 locks held by kworker/1:111/3353:
[ 1847.547082]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.549357]  #1: ffff888116a47d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.551733] 2 locks held by kworker/1:112/3354:
[ 1847.553531]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.555790]  #1: ffff888116a4fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.558151] 2 locks held by kworker/1:114/3356:
[ 1847.559954]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.562196]  #1: ffff888116a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.564639] 2 locks held by kworker/1:115/3357:
[ 1847.566434]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.568712]  #1: ffff888116a7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.571103] 2 locks held by kworker/1:116/3358:
[ 1847.572922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.575204]  #1: ffff888116a87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.577562] 2 locks held by kworker/1:117/3359:
[ 1847.579381]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.581713]  #1: ffff888116a9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.584083] 2 locks held by kworker/1:119/3361:
[ 1847.585908]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.588165]  #1: ffff888116ab7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.590577] 2 locks held by kworker/1:120/3362:
[ 1847.592382]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.594688]  #1: ffff888116abfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.597066] 2 locks held by kworker/1:121/3363:
[ 1847.598848]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.601147]  #1: ffff888116acfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.603549] 2 locks held by kworker/1:123/3365:
[ 1847.605340]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.607647]  #1: ffff888116ae7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.610047] 2 locks held by kworker/1:124/3366:
[ 1847.611864]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.614188]  #1: ffff888116aefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.616565] 2 locks held by kworker/1:125/3367:
[ 1847.618340]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.620638]  #1: ffff888116affd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.623046] 2 locks held by kworker/1:126/3368:
[ 1847.624844]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.627133]  #1: ffff888116b07d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.629539] 2 locks held by kworker/1:127/3369:
[ 1847.631330]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.633633]  #1: ffff888116b17d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.636059] 2 locks held by kworker/1:129/3371:
[ 1847.637882]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.640201]  #1: ffff888116b2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.642586] 2 locks held by kworker/1:130/3372:
[ 1847.644401]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.646695]  #1: ffff888116b3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.649024] 2 locks held by kworker/1:132/3374:
[ 1847.650768]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.653070]  #1: ffff888116b5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.655470] 2 locks held by kworker/1:134/3376:
[ 1847.657301]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.659606]  #1: ffff888116b77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.661962] 2 locks held by kworker/1:135/3377:
[ 1847.663777]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.666019]  #1: ffff888116b87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.668437] 2 locks held by kworker/1:136/3378:
[ 1847.670250]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.672595]  #1: ffff888116b8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.674955] 2 locks held by kworker/1:137/3379:
[ 1847.676736]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.678987]  #1: ffff888116b9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.681417] 2 locks held by kworker/1:138/3380:
[ 1847.683182]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.685500]  #1: ffff888116ba7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.687873] 2 locks held by kworker/1:141/3383:
[ 1847.689653]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.691934]  #1: ffff888116bcfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.694359] 2 locks held by kworker/1:143/3385:
[ 1847.696172]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.698478]  #1: ffff888116befd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.700903] 2 locks held by kworker/1:144/3386:
[ 1847.702685]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.704954]  #1: ffff888116bf7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.707320] 2 locks held by kworker/1:146/3388:
[ 1847.709107]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.711342]  #1: ffff88813e40fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.713773] 2 locks held by kworker/1:147/3389:
[ 1847.715521]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.717823]  #1: ffff88813e41fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.720135] 2 locks held by kworker/2:226/3395:
[ 1847.721952]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.724259]  #1: ffff88813e46fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.726683] 2 locks held by kworker/2:230/3399:
[ 1847.728488]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.730738]  #1: ffff88813e4a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.733151] 2 locks held by kworker/2:235/3404:
[ 1847.734971]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.737282]  #1: ffff88813e4dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.739700] 2 locks held by kworker/2:237/3406:
[ 1847.741471]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.743751]  #1: ffff88813e4f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.746141] 2 locks held by kworker/2:238/3407:
[ 1847.747934]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.750210]  #1: ffff88813e507d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.752594] 2 locks held by kworker/2:240/3409:
[ 1847.754389]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.756657]  #1: ffff88813e51fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.759038] 2 locks held by kworker/0:165/3410:
[ 1847.760840]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.763088]  #1: ffff88813e52fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.765501] 2 locks held by kworker/0:166/3411:
[ 1847.767292]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.769539]  #1: ffff88813e587d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.771912] 2 locks held by kworker/0:167/3412:
[ 1847.773703]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.775930]  #1: ffff88813e58fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.778359] 2 locks held by kworker/0:170/3415:
[ 1847.780177]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.782469]  #1: ffff88813e5b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.784868] 2 locks held by kworker/0:171/3416:
[ 1847.786668]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.788962]  #1: ffff88813e5bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.791373] 2 locks held by kworker/0:172/3417:
[ 1847.793191]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.795535]  #1: ffff88813e5cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.797917] 2 locks held by kworker/0:173/3418:
[ 1847.799712]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.802009]  #1: ffff88813e5d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.804327] 2 locks held by kworker/0:174/3419:
[ 1847.806122]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.808399]  #1: ffff88813e5e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.810832] 2 locks held by kworker/0:175/3420:
[ 1847.812621]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.814863]  #1: ffff88813e5efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.817249] 2 locks held by kworker/0:177/3422:
[ 1847.819043]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.821388]  #1: ffff88813e607d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.823768] 2 locks held by kworker/0:181/3426:
[ 1847.825577]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.827834]  #1: ffff88811e057d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.830158] 2 locks held by kworker/0:184/3429:
[ 1847.831957]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.834282]  #1: ffff88811d1bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.836701] 2 locks held by kworker/2:241/3430:
[ 1847.838428]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.840701]  #1: ffff88813b6efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.843032] 2 locks held by kworker/2:242/3431:
[ 1847.844838]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.847128]  #1: ffff888138427d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.849525] 2 locks held by kworker/2:245/3434:
[ 1847.851328]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.853618]  #1: ffff88813e617d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.856041] 2 locks held by kworker/2:250/3439:
[ 1847.857842]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.860122]  #1: ffff88813e657d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.862556] 2 locks held by kworker/2:251/3440:
[ 1847.864373]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.866672]  #1: ffff88813e667d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.869094] 2 locks held by kworker/2:253/3442:
[ 1847.870890]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.873154]  #1: ffff88813e67fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.875581] 2 locks held by kworker/3:114/3447:
[ 1847.877394]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.879740]  #1: ffff88813e6b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.882098] 2 locks held by kworker/3:115/3448:
[ 1847.883918]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.886171]  #1: ffff88813e6c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.888583] 2 locks held by kworker/3:116/3449:
[ 1847.890398]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.892705]  #1: ffff88813e747d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.895051] 2 locks held by kworker/3:118/3451:
[ 1847.896847]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.899132]  #1: ffff88813e767d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.901507] 2 locks held by kworker/3:120/3453:
[ 1847.903236]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.905527]  #1: ffff88813e77fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.907901] 2 locks held by kworker/3:122/3455:
[ 1847.909708]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.912026]  #1: ffff88813e797d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.914454] 2 locks held by kworker/3:124/3457:
[ 1847.916231]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.918510]  #1: ffff88813e7afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.920927] 2 locks held by kworker/3:125/3458:
[ 1847.922695]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.925008]  #1: ffff88813e7bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.927436] 2 locks held by kworker/3:128/3461:
[ 1847.929255]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.931546]  #1: ffff88813e7dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.933891] 2 locks held by kworker/3:131/3464:
[ 1847.935689]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.937985]  #1: ffff88813e047d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.940359] 2 locks held by kworker/0:186/3467:
[ 1847.942142]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.944467]  #1: ffff88813e06fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.946777] 2 locks held by kworker/0:188/3469:
[ 1847.948559]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.950874]  #1: ffff88813e087d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.953271] 2 locks held by kworker/0:189/3470:
[ 1847.955060]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.957359]  #1: ffff88813e097d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.959762] 2 locks held by kworker/0:191/3472:
[ 1847.961559]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.963833]  #1: ffff88813e0afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.966201] 2 locks held by kworker/0:192/3473:
[ 1847.967990]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.970260]  #1: ffff88813e0b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.972661] 2 locks held by kworker/0:193/3474:
[ 1847.974439]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.976762]  #1: ffff88813e0c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.979010] 2 locks held by kworker/0:195/3476:
[ 1847.980748]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.983039]  #1: ffff88813e0e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.985423] 2 locks held by kworker/0:197/3478:
[ 1847.987205]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.989489]  #1: ffff88813e0ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.991878] 2 locks held by kworker/0:198/3479:
[ 1847.993608]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1847.995881]  #1: ffff88813e13fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1847.998236] 2 locks held by kworker/0:199/3480:
[ 1848.000010]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.002319]  #1: ffff88813e14fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.004719] 2 locks held by kworker/0:200/3481:
[ 1848.006525]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.008835]  #1: ffff88813e157d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.011188] 2 locks held by kworker/0:203/3484:
[ 1848.012969]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.015327]  #1: ffff88813e17fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.017729] 2 locks held by kworker/0:205/3486:
[ 1848.019539]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.021850]  #1: ffff88813e19fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.024275] 2 locks held by kworker/0:206/3487:
[ 1848.026073]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.028400]  #1: ffff88813e1a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.030781] 2 locks held by kworker/0:207/3488:
[ 1848.032591]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.034794]  #1: ffff88813e1b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.037098] 2 locks held by kworker/0:208/3489:
[ 1848.038870]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.041174]  #1: ffff88813e1c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.043556] 2 locks held by kworker/0:209/3490:
[ 1848.045370]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.047648]  #1: ffff88813e1d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.050062] 2 locks held by kworker/0:211/3492:
[ 1848.051827]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.054123]  #1: ffff88813e227d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.056524] 2 locks held by kworker/0:215/3496:
[ 1848.058345]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.060635]  #1: ffff88813e257d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.063063] 2 locks held by kworker/0:219/3500:
[ 1848.064887]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.067150]  #1: ffff88813e287d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.069530] 2 locks held by kworker/0:220/3501:
[ 1848.071321]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.073575]  #1: ffff88813e28fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.075995] 2 locks held by kworker/0:221/3502:
[ 1848.077815]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.080025]  #1: ffff8881348afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.082419] 2 locks held by kworker/0:222/3503:
[ 1848.084240]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.086510]  #1: ffff88812e54fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.088878] 2 locks held by kworker/0:224/3505:
[ 1848.090652]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.092903]  #1: ffff888126f0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.095228] 2 locks held by kworker/3:133/3506:
[ 1848.097027]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.099344]  #1: ffff88813c507d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.101705] 2 locks held by kworker/0:225/3507:
[ 1848.103476]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.105768]  #1: ffff88811f2d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.108190] 2 locks held by kworker/0:228/3510:
[ 1848.109978]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.112277]  #1: ffff888130bc7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.114622] 2 locks held by kworker/0:229/3511:
[ 1848.116439]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.118707]  #1: ffff88811cd5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.121118] 2 locks held by kworker/0:231/3513:
[ 1848.122938]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.125207]  #1: ffff888122837d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.127584] 2 locks held by kworker/0:234/3516:
[ 1848.129347]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.131683]  #1: ffff8881277bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.134053] 2 locks held by kworker/0:235/3517:
[ 1848.135872]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.138204]  #1: ffff88811a1bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.140577] 2 locks held by kworker/0:237/3519:
[ 1848.142368]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.144709]  #1: ffff8881182f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.147107] 2 locks held by kworker/0:238/3520:
[ 1848.148899]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.151163]  #1: ffff8881394ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.153550] 2 locks held by kworker/0:239/3521:
[ 1848.155363]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.157675]  #1: ffff888120a3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.160047] 2 locks held by kworker/0:240/3522:
[ 1848.161866]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.164186]  #1: ffff88812cf97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.166588] 2 locks held by kworker/0:241/3523:
[ 1848.168362]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.170636]  #1: ffff888132a37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.172998] 2 locks held by kworker/1:149/3528:
[ 1848.174801]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.177106]  #1: ffff88813b2b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.179521] 2 locks held by kworker/1:153/3532:
[ 1848.181318]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.183570]  #1: ffff888115c87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.185967] 2 locks held by kworker/1:154/3533:
[ 1848.187772]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.190094]  #1: ffff888115c8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.192475] 2 locks held by kworker/1:156/3535:
[ 1848.194287]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.196585]  #1: ffff888115ca7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.198982] 2 locks held by kworker/1:157/3536:
[ 1848.200771]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.203095]  #1: ffff88813e3bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.205467] 2 locks held by kworker/1:159/3538:
[ 1848.207299]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.209643]  #1: ffff88813e3d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.212044] 2 locks held by kworker/1:160/3539:
[ 1848.213848]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.216173]  #1: ffff88813e3e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.218566] 2 locks held by kworker/1:162/3541:
[ 1848.220380]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.222663]  #1: ffff88813e3ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.225041] 2 locks held by kworker/1:163/3542:
[ 1848.226859]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.229127]  #1: ffff88813dc0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.231558] 2 locks held by kworker/1:164/3543:
[ 1848.233346]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.235588]  #1: ffff88813dc17d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.237998] 2 locks held by kworker/1:165/3544:
[ 1848.239818]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.242102]  #1: ffff88813dc27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.244507] 2 locks held by kworker/0:245/3546:
[ 1848.246246]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.248542]  #1: ffff88813dc3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.250948] 2 locks held by kworker/0:248/3549:
[ 1848.252757]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.254964]  #1: ffff88813dc67d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.257370] 2 locks held by kworker/0:249/3550:
[ 1848.259162]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.261488]  #1: ffff88813dc77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.263876] 2 locks held by kworker/0:250/3551:
[ 1848.265634]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.267917]  #1: ffff88813dc7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.270327] 2 locks held by kworker/0:252/3553:
[ 1848.272112]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.274439]  #1: ffff88813dc97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.276827] 2 locks held by kworker/0:253/3554:
[ 1848.278595]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.280893]  #1: ffff88813dca7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.283292] 2 locks held by kworker/0:255/3556:
[ 1848.285086]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.287396]  #1: ffff88813dcbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.289755] 2 locks held by kworker/3:134/3558:
[ 1848.291566]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.293838]  #1: ffff88813dcdfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.296224] 2 locks held by kworker/3:135/3559:
[ 1848.297985]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.300275]  #1: ffff88813dce7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.302673] 2 locks held by kworker/3:136/3560:
[ 1848.304445]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.306727]  #1: ffff88813dcf7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.309148] 2 locks held by kworker/3:137/3561:
[ 1848.310950]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.313250]  #1: ffff88813dd37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.315666] 2 locks held by kworker/3:141/3565:
[ 1848.317476]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.319782]  #1: ffff88813dd6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.322179] 2 locks held by kworker/3:143/3567:
[ 1848.323971]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.326288]  #1: ffff88813dd87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.328681] 2 locks held by kworker/3:144/3568:
[ 1848.330412]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.332695]  #1: ffff88813dd97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.335071] 2 locks held by kworker/3:146/3570:
[ 1848.336870]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.339177]  #1: ffff88813ddafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.341567] 2 locks held by kworker/3:149/3573:
[ 1848.343358]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.345637]  #1: ffff88813ddcfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.348027] 2 locks held by kworker/3:150/3574:
[ 1848.349819]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.352086]  #1: ffff88813dddfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.354507] 2 locks held by kworker/3:151/3575:
[ 1848.356320]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.358664]  #1: ffff88813ddf7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.361065] 2 locks held by kworker/3:152/3576:
[ 1848.362867]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.365196]  #1: ffff88813de07d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.367604] 2 locks held by kworker/3:153/3577:
[ 1848.369354]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.371643]  #1: ffff88813de0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.374031] 2 locks held by kworker/0:257/3578:
[ 1848.375841]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.378138]  #1: ffff88813de1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.380570] 2 locks held by kworker/3:154/3579:
[ 1848.382385]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.384693]  #1: ffff88813de3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.387099] 2 locks held by kworker/3:156/3581:
[ 1848.388912]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.391253]  #1: ffff88813dedfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.393640] 2 locks held by kworker/1:167/3585:
[ 1848.395440]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.397769]  #1: ffff888134f9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.400195] 2 locks held by kworker/1:168/3586:
[ 1848.402010]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.404334]  #1: ffff8881304a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.406710] 2 locks held by kworker/1:169/3587:
[ 1848.408518]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.410797]  #1: ffff888128997d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.413123] 2 locks held by kworker/1:170/3588:
[ 1848.414922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.417223]  #1: ffff888128c0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.419652] 2 locks held by kworker/1:173/3591:
[ 1848.421465]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.423770]  #1: ffff88812479fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.426139] 2 locks held by kworker/3:159/3592:
[ 1848.427922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.430183]  #1: ffff88813b37fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.432620] 2 locks held by kworker/3:161/3594:
[ 1848.434390]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.436736]  #1: ffff88812f527d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.439124] 2 locks held by kworker/1:174/3595:
[ 1848.440806]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.443037]  #1: ffff88812ddefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.445407] 2 locks held by kworker/1:175/3596:
[ 1848.447227]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.449537]  #1: ffff88813d93fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.451925] 2 locks held by kworker/1:176/3597:
[ 1848.453695]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.456000]  #1: ffff88813d94fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.458405] 2 locks held by kworker/1:178/3599:
[ 1848.460161]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.462413]  #1: ffff88813d967d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.464819] 2 locks held by kworker/1:179/3600:
[ 1848.466623]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.468907]  #1: ffff88813dadfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.471296] 2 locks held by kworker/1:180/3601:
[ 1848.473040]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.475356]  #1: ffff88813dae7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.477785] 2 locks held by kworker/1:181/3602:
[ 1848.479595]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.481902]  #1: ffff88813daf7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.484286] 2 locks held by kworker/1:182/3603:
[ 1848.486088]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.488360]  #1: ffff88813daffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.490762] 2 locks held by kworker/1:184/3605:
[ 1848.492571]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.494862]  #1: ffff88813db1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.497218] 2 locks held by kworker/1:185/3606:
[ 1848.498997]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.501268]  #1: ffff88813db2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.503383] 2 locks held by kworker/1:186/3607:
[ 1848.505022]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.507066]  #1: ffff88813db37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.509171] 2 locks held by kworker/1:189/3610:
[ 1848.510820]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.512859]  #1: ffff88813db5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.514967] 2 locks held by kworker/1:191/3612:
[ 1848.516610]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.518642]  #1: ffff88813db77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.520738] 2 locks held by kworker/1:192/3613:
[ 1848.522384]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.524420]  #1: ffff88813db7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.526519] 2 locks held by kworker/1:193/3614:
[ 1848.528154]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.530192]  #1: ffff88813db8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.532301] 2 locks held by kworker/1:194/3615:
[ 1848.533949]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.535994]  #1: ffff88813db9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.538100] 2 locks held by kworker/1:195/3616:
[ 1848.539738]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.541777]  #1: ffff88813dbafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.543878] 2 locks held by kworker/1:196/3617:
[ 1848.545523]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.547550]  #1: ffff88813dbb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.549645] 2 locks held by kworker/1:198/3619:
[ 1848.551279]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.553424]  #1: ffff88813dbd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.555635] 2 locks held by kworker/1:199/3620:
[ 1848.557345]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.559466]  #1: ffff88813dbe7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.561573] 2 locks held by kworker/1:200/3621:
[ 1848.563208]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.565245]  #1: ffff88813dbefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.567358] 2 locks held by kworker/1:203/3624:
[ 1848.568987]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.571029]  #1: ffff888161817d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.573136] 2 locks held by kworker/1:206/3627:
[ 1848.574789]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.576827]  #1: ffff888161837d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.578934] 2 locks held by kworker/1:209/3630:
[ 1848.580574]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.582608]  #1: ffff88816185fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.584704] 2 locks held by kworker/1:210/3631:
[ 1848.586343]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.588374]  #1: ffff88816186fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.590480] 2 locks held by kworker/1:211/3632:
[ 1848.592125]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.594163]  #1: ffff88816187fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.596268] 2 locks held by kworker/3:162/3633:
[ 1848.597914]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.599956]  #1: ffff88816189fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.602064] 2 locks held by kworker/3:163/3634:
[ 1848.603707]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.605740]  #1: ffff888127a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.607845] 2 locks held by kworker/3:164/3635:
[ 1848.609485]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.611528]  #1: ffff888128f3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.613627] 2 locks held by kworker/3:166/3637:
[ 1848.615263]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.617310]  #1: ffff88812b83fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.619419] 2 locks held by kworker/3:167/3638:
[ 1848.621064]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.623106]  #1: ffff88812aa57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.625217] 2 locks held by kworker/3:168/3639:
[ 1848.626872]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.628921]  #1: ffff888127d3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.631031] 2 locks held by kworker/3:170/3641:
[ 1848.632673]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.634706]  #1: ffff88811ec6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.636811] 2 locks held by kworker/3:171/3642:
[ 1848.638453]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.640493]  #1: ffff88812f687d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.642594] 2 locks held by kworker/3:172/3643:
[ 1848.644233]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.646272]  #1: ffff8881380a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.648388] 2 locks held by kworker/1:212/3644:
[ 1848.650034]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.652083]  #1: ffff888126e6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.654204] 2 locks held by kworker/1:213/3645:
[ 1848.655862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.657906]  #1: ffff8881276afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.660014] 2 locks held by kworker/1:214/3646:
[ 1848.661658]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.663690]  #1: ffff8881323dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.665799] 2 locks held by kworker/1:215/3647:
[ 1848.667443]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.669472]  #1: ffff888129ecfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.671566] 2 locks held by kworker/1:216/3648:
[ 1848.673206]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.675248]  #1: ffff88810f47fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.677353] 2 locks held by kworker/1:218/3650:
[ 1848.679001]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.681046]  #1: ffff888126487d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.683158] 2 locks held by kworker/1:220/3652:
[ 1848.684810]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.686858]  #1: ffff88813d47fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.688970] 2 locks held by kworker/1:222/3654:
[ 1848.690618]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.692656]  #1: ffff8881289d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.694758] 2 locks held by kworker/1:223/3655:
[ 1848.696401]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.698445]  #1: ffff888126a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.700559] 2 locks held by kworker/1:224/3656:
[ 1848.702204]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.704245]  #1: ffff88812338fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.706363] 2 locks held by kworker/1:226/3658:
[ 1848.708009]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.710057]  #1: ffff888105697d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.712165] 2 locks held by kworker/1:227/3659:
[ 1848.713818]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.715864]  #1: ffff888130d6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.717969] 2 locks held by kworker/1:229/3661:
[ 1848.719616]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.721652]  #1: ffff88813c977d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.723760] 2 locks held by kworker/3:173/3663:
[ 1848.725406]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.727446]  #1: ffff88812b3a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.729563] 2 locks held by kworker/3:174/3664:
[ 1848.731188]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.733232]  #1: ffff88812b28fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.735350] 2 locks held by kworker/3:176/3666:
[ 1848.736998]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.739041]  #1: ffff888130617d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.741151] 2 locks held by kworker/3:177/3667:
[ 1848.742800]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.744841]  #1: ffff88812fcbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.746948] 2 locks held by kworker/3:180/3670:
[ 1848.748596]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.750633]  #1: ffff88812f107d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.752736] 2 locks held by kworker/3:181/3671:
[ 1848.754382]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.756410]  #1: ffff88812feffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.758503] 2 locks held by kworker/3:182/3672:
[ 1848.760134]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.762174]  #1: ffff88812bc8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.764287] 2 locks held by kworker/3:185/3675:
[ 1848.765941]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.767986]  #1: ffff8881348dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.770095] 2 locks held by kworker/3:187/3677:
[ 1848.771739]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.773785]  #1: ffff888132c87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.775892] 2 locks held by kworker/3:188/3678:
[ 1848.777537]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.779565]  #1: ffff888121e2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.781669] 2 locks held by kworker/3:195/3685:
[ 1848.783314]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.785354]  #1: ffff88812c187d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.787460] 2 locks held by kworker/3:197/3687:
[ 1848.789094]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.791137]  #1: ffff888131f5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.793245] 2 locks held by kworker/3:198/3688:
[ 1848.794897]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.796942]  #1: ffff88813516fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.799050] 2 locks held by kworker/3:202/3692:
[ 1848.800695]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.802729]  #1: ffff8881350efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.804833] 2 locks held by kworker/3:204/3694:
[ 1848.806476]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.808520]  #1: ffff88811e2a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.810620] 2 locks held by kworker/3:205/3695:
[ 1848.812256]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.814287]  #1: ffff88812f4cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.816397] 2 locks held by kworker/3:207/3697:
[ 1848.818041]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.820082]  #1: ffff8881247dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.822188] 2 locks held by kworker/3:208/3698:
[ 1848.823848]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.825889]  #1: ffff88811934fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.827995] 2 locks held by kworker/3:209/3699:
[ 1848.829643]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.831677]  #1: ffff8881231d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.833786] 2 locks held by kworker/3:211/3701:
[ 1848.835431]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.837472]  #1: ffff888133b6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.839583] 2 locks held by kworker/3:212/3702:
[ 1848.841220]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.843267]  #1: ffff88813242fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.845377] 2 locks held by kworker/3:214/3704:
[ 1848.847024]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.849080]  #1: ffff8881316b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.851197] 2 locks held by kworker/3:217/3707:
[ 1848.852849]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.854894]  #1: ffff88811476fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.856998] 2 locks held by kworker/3:218/3708:
[ 1848.858649]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.860681]  #1: ffff888132bdfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.862786] 2 locks held by kworker/3:220/3710:
[ 1848.864428]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.866459]  #1: ffff888125137d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.868556] 2 locks held by kworker/3:221/3711:
[ 1848.870182]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.872225]  #1: ffff888132597d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.874338] 2 locks held by kworker/3:223/3713:
[ 1848.875983]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.878025]  #1: ffff8881209cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.880131] 2 locks held by kworker/3:224/3714:
[ 1848.881784]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.883828]  #1: ffff88811f877d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.885935] 2 locks held by kworker/3:225/3715:
[ 1848.887576]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.889601]  #1: ffff88811cf47d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.891689] 2 locks held by kworker/3:226/3716:
[ 1848.893331]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.895372]  #1: ffff88811cd7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.897468] 2 locks held by kworker/3:227/3717:
[ 1848.899110]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.901150]  #1: ffff888111b9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.903256] 2 locks held by kworker/3:232/3722:
[ 1848.904908]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.906947]  #1: ffff88812aed7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.909055] 2 locks held by kworker/3:233/3723:
[ 1848.910698]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.912732]  #1: ffff888130637d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.914839] 2 locks held by kworker/3:238/3728:
[ 1848.916481]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.918521]  #1: ffff8881399efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.920623] 2 locks held by kworker/1:231/3737:
[ 1848.922259]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.924298]  #1: ffff8881290a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.926411] 2 locks held by kworker/1:232/3738:
[ 1848.928047]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.930086]  #1: ffff888120b77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.932193] 2 locks held by kworker/1:237/3743:
[ 1848.933842]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.935906]  #1: ffff8881100f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.938012] 2 locks held by kworker/1:238/3744:
[ 1848.939659]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.941690]  #1: ffff88812e0e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.943802] 2 locks held by kworker/1:239/3745:
[ 1848.945451]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.947488]  #1: ffff88810ad5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.949607] 2 locks held by kworker/1:241/3747:
[ 1848.951246]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.953289]  #1: ffff88811fb5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.955408] 2 locks held by kworker/1:242/3748:
[ 1848.957058]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.959102]  #1: ffff888119eefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.961208] 2 locks held by kworker/1:243/3749:
[ 1848.962862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.964903]  #1: ffff888130d87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.967016] 2 locks held by kworker/1:244/3750:
[ 1848.968662]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.970700]  #1: ffff8881289afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.972810] 2 locks held by kworker/1:245/3751:
[ 1848.974456]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.976486]  #1: ffff8881063cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.978601] 2 locks held by kworker/1:246/3752:
[ 1848.980230]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.982276]  #1: ffff88811fca7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.984392] 2 locks held by kworker/1:248/3754:
[ 1848.986033]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.988077]  #1: ffff888106997d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.990184] 2 locks held by kworker/1:249/3755:
[ 1848.991840]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.993885]  #1: ffff8881372afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1848.995994] 2 locks held by kworker/1:250/3756:
[ 1848.997641]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1848.999679]  #1: ffff8881209b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.001787] 2 locks held by kworker/1:251/3757:
[ 1849.003436]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.005482]  #1: ffff8881314ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.007582] 2 locks held by kworker/1:252/3758:
[ 1849.009219]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.011260]  #1: ffff888130d8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.013377] 2 locks held by kworker/1:253/3759:
[ 1849.015026]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.017066]  #1: ffff8881371b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.019201] 2 locks held by kworker/1:255/3761:
[ 1849.020862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.022902]  #1: ffff88811d897d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.025007] 2 locks held by kworker/1:256/3762:
[ 1849.026649]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.028680]  #1: ffff88813b99fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.030792] 2 locks held by kworker/1:257/3763:
[ 1849.032440]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.034483]  #1: ffff88813b0efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.036590] 2 locks held by kworker/3:247/3765:
[ 1849.038220]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.040260]  #1: ffff888134867d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.042382] 2 locks held by kworker/3:248/3766:
[ 1849.044023]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.046066]  #1: ffff888124b7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.048170] 2 locks held by kworker/3:249/3767:
[ 1849.049821]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.051860]  #1: ffff888131aafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.053964] 2 locks held by kworker/3:251/3769:
[ 1849.055610]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.057647]  #1: ffff8881068ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.059746] 2 locks held by kworker/3:252/3770:
[ 1849.061399]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.063434]  #1: ffff88810b757d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.065537] 2 locks held by kworker/3:254/3772:
[ 1849.067174]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.069213]  #1: ffff888136d97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.071346] 2 locks held by kworker/3:255/3773:
[ 1849.072992]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.075027]  #1: ffff88811830fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.077139] 2 locks held by kworker/3:256/3774:
[ 1849.078795]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.080838]  #1: ffff888127547d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.082946] 2 locks held by kworker/1:258/4004:
[ 1849.084592]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.086626]  #1: ffff88812cbefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.088727] 2 locks held by kworker/0:18/13817:
[ 1849.090368]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.092406]  #1: ffff8881213cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.094515] 2 locks held by kworker/1:97/23521:
[ 1849.096151]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.098194]  #1: ffff88810c33fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.100300] 2 locks held by kworker/1:259/28552:
[ 1849.101959]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.103997]  #1: ffff888140777d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.106102] 2 locks held by kworker/3:258/38106:
[ 1849.107766]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.109802]  #1: ffff888111b0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
[ 1849.111912] 2 locks held by kworker/1:172/39248:
[ 1849.113563]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
[ 1849.115594]  #1: ffff888110eefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0

[ 1849.119116] =============================================


[2]

$ ps axuw | grep " D "
root           9  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/0:1+dio/dm-1]
root          25  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:0+dio/dm-1]
root          49  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:1+dio/dm-1]
root          74  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/0:2+dio/dm-1]
root         169  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/3:2+dio/dm-1]
root         221  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/0:3+dio/dm-1]
root         230  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:2+dio/dm-1]
root         291  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/2:3+dio/dm-1]
root         322  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:3+dio/dm-1]
root        2757  2.1  0.0      0     0 ?        D    10:57   1:14 [kworker/u8:7+flush-253:1]
root        2759  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/3:4+dio/dm-1]
root        2760  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/0:4+dio/dm-1]
root        2762  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/1:5+dio/dm-1]
root        2764  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/1:6+dio/dm-1]
root        2765  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/3:5+dio/dm-1]
...
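The reproduction method described in the report (repeat srp/002 until the hang appears) can be sketched as a small loop. This is only an illustration: CHECK_CMD, the iteration count, and the blktests path are placeholders, not details taken from the report.

```shell
#!/bin/sh
# Sketch of the "repeat srp/002 until it hangs" reproducer.
# CHECK_CMD is a stand-in; on a real test box it would be
# "./check srp/002" run from a blktests checkout (path assumed).
CHECK_CMD=${CHECK_CMD:-true}
i=1
while [ "$i" -le 30 ]; do
    $CHECK_CMD || { echo "stopped at iteration $i"; exit 1; }
    i=$((i + 1))
done
echo "completed 30 iterations"
```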


* Re: [bug report] blktests srp/002 hang
  2023-08-21  6:46 [bug report] blktests srp/002 hang Shinichiro Kawasaki
@ 2023-08-22  1:46 ` Bob Pearson
  2023-08-22 10:18   ` Shinichiro Kawasaki
  2023-09-22 11:06 ` Linux regression tracking #adding (Thorsten Leemhuis)
  1 sibling, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-08-22  1:46 UTC (permalink / raw)
  To: Shinichiro Kawasaki, linux-rdma, linux-scsi

On 8/21/23 01:46, Shinichiro Kawasaki wrote:
> I observed a process hang at the blktests test case srp/002 occasionally, using
> kernel v6.5-rcX. The kernel reported stalls of many kworkers [1]. PID 2757 hung at
> inode_sleep_on_writeback(). Other kworkers hung at __inode_wait_for_writeback().
> 
> The hang is reproduced reliably by repeating the test case srp/002 (between
> 15 and 30 times).
> 
> I bisected and found that commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
> for rxe tasks") appears to be the trigger. When I revert it from kernel
> v6.5-rc7, the hang symptom disappears. I'm not sure how the commit relates to
> the hang. Comments are welcome.
> 
> [1]
> 
> ...
> [ 1670.489181] scsi 4:0:0:1: alua: Detached
> [ 1670.985461] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-38: queued zerolength write
> [ 1670.985702] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-36: queued zerolength write
> [ 1670.985716] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-38 wc->status 5
> [ 1670.985821] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-38
> [ 1670.985824] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-36 wc->status 5
> [ 1670.985909] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-34: queued zerolength write
> [ 1670.985924] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-36
> [ 1670.986104] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-34 wc->status 5
> [ 1670.986244] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-34
> [ 1671.049223] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-40: queued zerolength write
> [ 1671.049588] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-40 wc->status 5
> [ 1671.049626] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-40
> [ 1844.873748] INFO: task kworker/0:1:9 blocked for more than 122 seconds.
> [ 1844.877893]       Not tainted 6.5.0-rc7 #106
> [ 1844.878903] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1844.880255] task:kworker/0:1     state:D stack:0     pid:9     ppid:2      flags:0x00004000
> [ 1844.881830] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1844.882999] Call Trace:
> [ 1844.883900]  <TASK>
> [ 1844.884703]  __schedule+0x10ac/0x5e80
> [ 1844.885609]  ? do_raw_spin_unlock+0x54/0x1f0
> [ 1844.886569]  ? __pfx___schedule+0x10/0x10
> [ 1844.887596]  ? lock_release+0x378/0x650
> [ 1844.888431]  ? schedule+0x92/0x220
> [ 1844.889232]  ? mark_held_locks+0x96/0xe0
> [ 1844.890117]  schedule+0x133/0x220
> [ 1844.890874]  bit_wait+0x17/0xe0
> [ 1844.891619]  __wait_on_bit+0x66/0x180
> [ 1844.892409]  ? __pfx_bit_wait+0x10/0x10
> [ 1844.893192]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1844.894245]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1844.895225]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1844.896138]  ? find_held_lock+0x2d/0x110
> [ 1844.897085]  writeback_single_inode+0xf9/0x3f0
> [ 1844.898186]  sync_inode_metadata+0x91/0xd0
> [ 1844.899036]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1844.900106]  ? lock_release+0x378/0x650
> [ 1844.900988]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1844.901978]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1844.902964]  ext4_sync_file+0x469/0xb60
> [ 1844.903859]  iomap_dio_complete+0x5d1/0x860
> [ 1844.904828]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1844.905841]  iomap_dio_complete_work+0x52/0x80
> [ 1844.906774]  process_one_work+0x898/0x14a0
> [ 1844.907673]  ? __pfx_lock_acquire+0x10/0x10
> [ 1844.908644]  ? __pfx_process_one_work+0x10/0x10
> [ 1844.909693]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1844.910676]  worker_thread+0x100/0x12c0
> [ 1844.911612]  ? __kthread_parkme+0xc1/0x1f0
> [ 1844.912542]  ? __pfx_worker_thread+0x10/0x10
> [ 1844.913584]  kthread+0x2ea/0x3c0
> [ 1844.914465]  ? __pfx_kthread+0x10/0x10
> [ 1844.915335]  ret_from_fork+0x30/0x70
> [ 1844.916269]  ? __pfx_kthread+0x10/0x10
> [ 1844.917308]  ret_from_fork_asm+0x1b/0x30
> [ 1844.918243]  </TASK>
> [ 1844.918998] INFO: task kworker/1:0:25 blocked for more than 122 seconds.
> [ 1844.920107]       Not tainted 6.5.0-rc7 #106
> [ 1844.921041] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1844.922262] task:kworker/1:0     state:D stack:0     pid:25    ppid:2      flags:0x00004000
> [ 1844.923550] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1844.924598] Call Trace:
> [ 1844.925407]  <TASK>
> [ 1844.926194]  __schedule+0x10ac/0x5e80
> [ 1844.927097]  ? do_raw_spin_unlock+0x54/0x1f0
> [ 1844.928032]  ? __pfx___schedule+0x10/0x10
> [ 1844.928937]  ? lock_release+0x378/0x650
> [ 1844.929823]  ? schedule+0x92/0x220
> [ 1844.930682]  ? mark_held_locks+0x96/0xe0
> [ 1844.931579]  schedule+0x133/0x220
> [ 1844.932411]  bit_wait+0x17/0xe0
> [ 1844.933238]  __wait_on_bit+0x66/0x180
> [ 1844.934107]  ? __pfx_bit_wait+0x10/0x10
> [ 1844.934996]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1844.935956]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1844.936969]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1844.937942]  ? find_held_lock+0x2d/0x110
> [ 1844.938891]  writeback_single_inode+0xf9/0x3f0
> [ 1844.939836]  sync_inode_metadata+0x91/0xd0
> [ 1844.940758]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1844.941730]  ? lock_release+0x378/0x650
> [ 1844.942640]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1844.943647]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1844.944652]  ext4_sync_file+0x469/0xb60
> [ 1844.945561]  iomap_dio_complete+0x5d1/0x860
> [ 1844.946469]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1844.947417]  iomap_dio_complete_work+0x52/0x80
> [ 1844.948358]  process_one_work+0x898/0x14a0
> [ 1844.949284]  ? __pfx_lock_acquire+0x10/0x10
> [ 1844.950204]  ? __pfx_process_one_work+0x10/0x10
> [ 1844.951152]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1844.952094]  worker_thread+0x100/0x12c0
> [ 1844.952998]  ? __pfx_worker_thread+0x10/0x10
> [ 1844.953919]  kthread+0x2ea/0x3c0
> [ 1844.954760]  ? __pfx_kthread+0x10/0x10
> [ 1844.955669]  ret_from_fork+0x30/0x70
> [ 1844.956550]  ? __pfx_kthread+0x10/0x10
> [ 1844.957418]  ret_from_fork_asm+0x1b/0x30
> [ 1844.958321]  </TASK>
> [ 1844.959085] INFO: task kworker/1:1:49 blocked for more than 122 seconds.
> [ 1844.960193]       Not tainted 6.5.0-rc7 #106
> [ 1844.961122] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1844.962340] task:kworker/1:1     state:D stack:0     pid:49    ppid:2      flags:0x00004000
> [ 1844.963619] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1844.964667] Call Trace:
> [ 1844.965503]  <TASK>
> [ 1844.966289]  __schedule+0x10ac/0x5e80
> [ 1844.967207]  ? lock_acquire+0x1a9/0x4e0
> [ 1844.968122]  ? __pfx___schedule+0x10/0x10
> [ 1844.969034]  ? lock_release+0x378/0x650
> [ 1844.969922]  ? schedule+0x92/0x220
> [ 1844.970778]  ? mark_held_locks+0x96/0xe0
> [ 1844.971674]  schedule+0x133/0x220
> [ 1844.972526]  bit_wait+0x17/0xe0
> [ 1844.973336]  __wait_on_bit+0x66/0x180
> [ 1844.974206]  ? __pfx_bit_wait+0x10/0x10
> [ 1844.975086]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1844.976046]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1844.977056]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1844.978007]  ? find_held_lock+0x2d/0x110
> [ 1844.978917]  writeback_single_inode+0xf9/0x3f0
> [ 1844.979865]  sync_inode_metadata+0x91/0xd0
> [ 1844.980786]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1844.981765]  ? lock_release+0x378/0x650
> [ 1844.982677]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1844.983687]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1844.984696]  ext4_sync_file+0x469/0xb60
> [ 1844.985608]  iomap_dio_complete+0x5d1/0x860
> [ 1844.986548]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1844.987484]  iomap_dio_complete_work+0x52/0x80
> [ 1844.988435]  process_one_work+0x898/0x14a0
> [ 1844.989352]  ? __pfx_lock_acquire+0x10/0x10
> [ 1844.990275]  ? __pfx_process_one_work+0x10/0x10
> [ 1844.991220]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1844.992164]  worker_thread+0x100/0x12c0
> [ 1844.993065]  ? __kthread_parkme+0xc1/0x1f0
> [ 1844.993977]  ? __pfx_worker_thread+0x10/0x10
> [ 1844.994934]  kthread+0x2ea/0x3c0
> [ 1844.995783]  ? __pfx_kthread+0x10/0x10
> [ 1844.996670]  ret_from_fork+0x30/0x70
> [ 1844.997544]  ? __pfx_kthread+0x10/0x10
> [ 1844.998409]  ret_from_fork_asm+0x1b/0x30
> [ 1844.999308]  </TASK>
> [ 1845.000094] INFO: task kworker/0:2:74 blocked for more than 123 seconds.
> [ 1845.001315]       Not tainted 6.5.0-rc7 #106
> [ 1845.002326] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1845.003630] task:kworker/0:2     state:D stack:0     pid:74    ppid:2      flags:0x00004000
> [ 1845.004991] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1845.006108] Call Trace:
> [ 1845.006975]  <TASK>
> [ 1845.007805]  __schedule+0x10ac/0x5e80
> [ 1845.008781]  ? do_raw_spin_unlock+0x54/0x1f0
> [ 1845.009780]  ? __pfx___schedule+0x10/0x10
> [ 1845.010736]  ? lock_release+0x378/0x650
> [ 1845.011666]  ? schedule+0x92/0x220
> [ 1845.012579]  ? mark_held_locks+0x96/0xe0
> [ 1845.013531]  schedule+0x133/0x220
> [ 1845.014414]  bit_wait+0x17/0xe0
> [ 1845.015287]  __wait_on_bit+0x66/0x180
> [ 1845.016219]  ? __pfx_bit_wait+0x10/0x10
> [ 1845.017164]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1845.018185]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1845.019269]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1845.020282]  ? find_held_lock+0x2d/0x110
> [ 1845.021246]  writeback_single_inode+0xf9/0x3f0
> [ 1845.022248]  sync_inode_metadata+0x91/0xd0
> [ 1845.023222]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1845.024255]  ? lock_release+0x378/0x650
> [ 1845.025207]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1845.026281]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1845.027347]  ext4_sync_file+0x469/0xb60
> [ 1845.028302]  iomap_dio_complete+0x5d1/0x860
> [ 1845.029275]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1845.030276]  iomap_dio_complete_work+0x52/0x80
> [ 1845.031281]  process_one_work+0x898/0x14a0
> [ 1845.032248]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.033199]  ? __pfx_process_one_work+0x10/0x10
> [ 1845.034182]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.035188]  worker_thread+0x100/0x12c0
> [ 1845.036138]  ? __pfx_worker_thread+0x10/0x10
> [ 1845.037104]  kthread+0x2ea/0x3c0
> [ 1845.037996]  ? __pfx_kthread+0x10/0x10
> [ 1845.038923]  ret_from_fork+0x30/0x70
> [ 1845.039840]  ? __pfx_kthread+0x10/0x10
> [ 1845.040763]  ret_from_fork_asm+0x1b/0x30
> [ 1845.041729]  </TASK>
> [ 1845.042531] INFO: task kworker/3:2:169 blocked for more than 123 seconds.
> [ 1845.043703]       Not tainted 6.5.0-rc7 #106
> [ 1845.044780] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1845.046068] task:kworker/3:2     state:D stack:0     pid:169   ppid:2      flags:0x00004000
> [ 1845.047400] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1845.048518] Call Trace:
> [ 1845.049392]  <TASK>
> [ 1845.050214]  __schedule+0x10ac/0x5e80
> [ 1845.051172]  ? lock_acquire+0x1a9/0x4e0
> [ 1845.052141]  ? __pfx___schedule+0x10/0x10
> [ 1845.053086]  ? lock_release+0x378/0x650
> [ 1845.054017]  ? schedule+0x92/0x220
> [ 1845.054920]  ? mark_held_locks+0x96/0xe0
> [ 1845.055866]  schedule+0x133/0x220
> [ 1845.056761]  bit_wait+0x17/0xe0
> [ 1845.057645]  __wait_on_bit+0x66/0x180
> [ 1845.058573]  ? __pfx_bit_wait+0x10/0x10
> [ 1845.059502]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1845.060528]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1845.061603]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1845.062604]  ? find_held_lock+0x2d/0x110
> [ 1845.063548]  writeback_single_inode+0xf9/0x3f0
> [ 1845.064564]  sync_inode_metadata+0x91/0xd0
> [ 1845.065534]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1845.066552]  ? lock_release+0x378/0x650
> [ 1845.067504]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1845.068557]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1845.069609]  ext4_sync_file+0x469/0xb60
> [ 1845.070563]  iomap_dio_complete+0x5d1/0x860
> [ 1845.071550]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1845.072543]  iomap_dio_complete_work+0x52/0x80
> [ 1845.073547]  process_one_work+0x898/0x14a0
> [ 1845.074518]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.075468]  ? __pfx_process_one_work+0x10/0x10
> [ 1845.076456]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.077436]  worker_thread+0x100/0x12c0
> [ 1845.078382]  ? __pfx_worker_thread+0x10/0x10
> [ 1845.079354]  kthread+0x2ea/0x3c0
> [ 1845.080230]  ? __pfx_kthread+0x10/0x10
> [ 1845.081163]  ret_from_fork+0x30/0x70
> [ 1845.082075]  ? __pfx_kthread+0x10/0x10
> [ 1845.083014]  ret_from_fork_asm+0x1b/0x30
> [ 1845.083957]  </TASK>
> [ 1845.084756] INFO: task kworker/0:3:221 blocked for more than 123 seconds.
> [ 1845.085927]       Not tainted 6.5.0-rc7 #106
> [ 1845.086911] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1845.088205] task:kworker/0:3     state:D stack:0     pid:221   ppid:2      flags:0x00004000
> [ 1845.089566] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1845.090635] Call Trace:
> [ 1845.091503]  <TASK>
> [ 1845.092318]  __schedule+0x10ac/0x5e80
> [ 1845.093282]  ? do_raw_spin_unlock+0x54/0x1f0
> [ 1845.094265]  ? __pfx___schedule+0x10/0x10
> [ 1845.095200]  ? lock_release+0x378/0x650
> [ 1845.096132]  ? schedule+0x92/0x220
> [ 1845.097018]  ? mark_held_locks+0x96/0xe0
> [ 1845.097959]  schedule+0x133/0x220
> [ 1845.098863]  bit_wait+0x17/0xe0
> [ 1845.099736]  __wait_on_bit+0x66/0x180
> [ 1845.100649]  ? __pfx_bit_wait+0x10/0x10
> [ 1845.101600]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1845.102606]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1845.103673]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1845.104685]  ? find_held_lock+0x2d/0x110
> [ 1845.105633]  writeback_single_inode+0xf9/0x3f0
> [ 1845.106625]  sync_inode_metadata+0x91/0xd0
> [ 1845.107612]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1845.108635]  ? lock_release+0x378/0x650
> [ 1845.109591]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1845.110645]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1845.111698]  ext4_sync_file+0x469/0xb60
> [ 1845.112657]  iomap_dio_complete+0x5d1/0x860
> [ 1845.113639]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1845.114625]  iomap_dio_complete_work+0x52/0x80
> [ 1845.115616]  process_one_work+0x898/0x14a0
> [ 1845.116582]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.117575]  ? __pfx_process_one_work+0x10/0x10
> [ 1845.118573]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.119557]  worker_thread+0x100/0x12c0
> [ 1845.120480]  ? __pfx_worker_thread+0x10/0x10
> [ 1845.121453]  kthread+0x2ea/0x3c0
> [ 1845.122339]  ? __pfx_kthread+0x10/0x10
> [ 1845.123277]  ret_from_fork+0x30/0x70
> [ 1845.124192]  ? __pfx_kthread+0x10/0x10
> [ 1845.125131]  ret_from_fork_asm+0x1b/0x30
> [ 1845.126085]  </TASK>
> [ 1845.127043] INFO: task kworker/1:2:230 blocked for more than 123 seconds.
> [ 1845.128574]       Not tainted 6.5.0-rc7 #106
> [ 1845.129789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1845.131441] task:kworker/1:2     state:D stack:0     pid:230   ppid:2      flags:0x00004000
> [ 1845.133125] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1845.134546] Call Trace:
> [ 1845.135547]  <TASK>
> [ 1845.136475]  __schedule+0x10ac/0x5e80
> [ 1845.137599]  ? lock_acquire+0x1a9/0x4e0
> [ 1845.138703]  ? __pfx___schedule+0x10/0x10
> [ 1845.139859]  ? lock_release+0x378/0x650
> [ 1845.140980]  ? schedule+0x92/0x220
> [ 1845.142026]  ? mark_held_locks+0x96/0xe0
> [ 1845.143161]  schedule+0x133/0x220
> [ 1845.144196]  bit_wait+0x17/0xe0
> [ 1845.145233]  __wait_on_bit+0x66/0x180
> [ 1845.146262]  ? __pfx_bit_wait+0x10/0x10
> [ 1845.147380]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1845.148650]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1845.149950]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1845.151181]  ? find_held_lock+0x2d/0x110
> [ 1845.152288]  writeback_single_inode+0xf9/0x3f0
> [ 1845.153474]  sync_inode_metadata+0x91/0xd0
> [ 1845.154608]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1845.155857]  ? lock_release+0x378/0x650
> [ 1845.156997]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1845.158309]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1845.159569]  ext4_sync_file+0x469/0xb60
> [ 1845.160709]  iomap_dio_complete+0x5d1/0x860
> [ 1845.161881]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1845.163086]  iomap_dio_complete_work+0x52/0x80
> [ 1845.164269]  process_one_work+0x898/0x14a0
> [ 1845.165367]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.166541]  ? __pfx_process_one_work+0x10/0x10
> [ 1845.167706]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.168880]  worker_thread+0x100/0x12c0
> [ 1845.170006]  ? __kthread_parkme+0xc1/0x1f0
> [ 1845.171083]  ? __pfx_worker_thread+0x10/0x10
> [ 1845.172302]  kthread+0x2ea/0x3c0
> [ 1845.173350]  ? __pfx_kthread+0x10/0x10
> [ 1845.174465]  ret_from_fork+0x30/0x70
> [ 1845.175522]  ? __pfx_kthread+0x10/0x10
> [ 1845.176616]  ret_from_fork_asm+0x1b/0x30
> [ 1845.177754]  </TASK>
> [ 1845.178624] INFO: task kworker/2:3:291 blocked for more than 123 seconds.
> [ 1845.180123]       Not tainted 6.5.0-rc7 #106
> [ 1845.181306] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1845.182914] task:kworker/2:3     state:D stack:0     pid:291   ppid:2      flags:0x00004000
> [ 1845.184626] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1845.186012] Call Trace:
> [ 1845.187004]  <TASK>
> [ 1845.187939]  __schedule+0x10ac/0x5e80
> [ 1845.189072]  ? do_raw_spin_unlock+0x54/0x1f0
> [ 1845.190177]  ? __pfx___schedule+0x10/0x10
> [ 1845.191356]  ? lock_release+0x378/0x650
> [ 1845.192421]  ? schedule+0x92/0x220
> [ 1845.193501]  ? mark_held_locks+0x96/0xe0
> [ 1845.194535]  schedule+0x133/0x220
> [ 1845.195595]  bit_wait+0x17/0xe0
> [ 1845.196603]  __wait_on_bit+0x66/0x180
> [ 1845.197697]  ? __pfx_bit_wait+0x10/0x10
> [ 1845.198820]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1845.200061]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1845.201315]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1845.202522]  ? find_held_lock+0x2d/0x110
> [ 1845.203679]  writeback_single_inode+0xf9/0x3f0
> [ 1845.204885]  sync_inode_metadata+0x91/0xd0
> [ 1845.205943]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1845.207190]  ? lock_release+0x378/0x650
> [ 1845.208325]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1845.209581]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1845.210883]  ext4_sync_file+0x469/0xb60
> [ 1845.212022]  iomap_dio_complete+0x5d1/0x860
> [ 1845.213177]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1845.214315]  iomap_dio_complete_work+0x52/0x80
> [ 1845.215547]  process_one_work+0x898/0x14a0
> [ 1845.216714]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.217887]  ? __pfx_process_one_work+0x10/0x10
> [ 1845.219026]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.220280]  worker_thread+0x100/0x12c0
> [ 1845.221386]  ? __kthread_parkme+0xc1/0x1f0
> [ 1845.222569]  ? __pfx_worker_thread+0x10/0x10
> [ 1845.223743]  kthread+0x2ea/0x3c0
> [ 1845.224788]  ? __pfx_kthread+0x10/0x10
> [ 1845.225908]  ret_from_fork+0x30/0x70
> [ 1845.226996]  ? __pfx_kthread+0x10/0x10
> [ 1845.228110]  ret_from_fork_asm+0x1b/0x30
> [ 1845.229254]  </TASK>
> [ 1845.230191] INFO: task kworker/1:3:322 blocked for more than 123 seconds.
> [ 1845.231562]       Not tainted 6.5.0-rc7 #106
> [ 1845.232622] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1845.233992] task:kworker/1:3     state:D stack:0     pid:322   ppid:2      flags:0x00004000
> [ 1845.235439] Workqueue: dio/dm-1 iomap_dio_complete_work
> [ 1845.236681] Call Trace:
> [ 1845.237629]  <TASK>
> [ 1845.238526]  __schedule+0x10ac/0x5e80
> [ 1845.239559]  ? do_raw_spin_unlock+0x54/0x1f0
> [ 1845.240622]  ? __pfx___schedule+0x10/0x10
> [ 1845.241639]  ? lock_release+0x378/0x650
> [ 1845.242650]  ? schedule+0x92/0x220
> [ 1845.243654]  ? mark_held_locks+0x96/0xe0
> [ 1845.244707]  schedule+0x133/0x220
> [ 1845.245657]  bit_wait+0x17/0xe0
> [ 1845.246631]  __wait_on_bit+0x66/0x180
> [ 1845.247601]  ? __pfx_bit_wait+0x10/0x10
> [ 1845.248630]  __inode_wait_for_writeback+0x12b/0x1b0
> [ 1845.249743]  ? __pfx___inode_wait_for_writeback+0x10/0x10
> [ 1845.250948]  ? __pfx_wake_bit_function+0x10/0x10
> [ 1845.252021]  ? find_held_lock+0x2d/0x110
> [ 1845.253043]  writeback_single_inode+0xf9/0x3f0
> [ 1845.254123]  sync_inode_metadata+0x91/0xd0
> [ 1845.255205]  ? __pfx_sync_inode_metadata+0x10/0x10
> [ 1845.256294]  ? lock_release+0x378/0x650
> [ 1845.257332]  ? file_check_and_advance_wb_err+0xb5/0x230
> [ 1845.258542]  generic_buffers_fsync_noflush+0x1bf/0x270
> [ 1845.259701]  ext4_sync_file+0x469/0xb60
> [ 1845.260765]  iomap_dio_complete+0x5d1/0x860
> [ 1845.261790]  ? __pfx_aio_complete_rw+0x10/0x10
> [ 1845.262907]  iomap_dio_complete_work+0x52/0x80
> [ 1845.263961]  process_one_work+0x898/0x14a0
> [ 1845.265025]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.266074]  ? __pfx_process_one_work+0x10/0x10
> [ 1845.267197]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.268305]  worker_thread+0x100/0x12c0
> [ 1845.269328]  ? __kthread_parkme+0xc1/0x1f0
> [ 1845.270368]  ? __pfx_worker_thread+0x10/0x10
> [ 1845.271457]  kthread+0x2ea/0x3c0
> [ 1845.272422]  ? __pfx_kthread+0x10/0x10
> [ 1845.273443]  ret_from_fork+0x30/0x70
> [ 1845.274438]  ? __pfx_kthread+0x10/0x10
> [ 1845.275475]  ret_from_fork_asm+0x1b/0x30
> [ 1845.276555]  </TASK>
> [ 1845.277433] INFO: task kworker/u8:7:2757 blocked for more than 123 seconds.
> [ 1845.278808]       Not tainted 6.5.0-rc7 #106
> [ 1845.279897] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1845.281313] task:kworker/u8:7    state:D stack:0     pid:2757  ppid:2      flags:0x00004000
> [ 1845.282753] Workqueue: writeback wb_workfn (flush-253:1)
> [ 1845.283993] Call Trace:
> [ 1845.284945]  <TASK>
> [ 1845.285853]  __schedule+0x10ac/0x5e80
> [ 1845.286872]  ? lock_acquire+0x1b9/0x4e0
> [ 1845.287917]  ? __pfx___schedule+0x10/0x10
> [ 1845.288934]  ? __blk_flush_plug+0x27a/0x450
> [ 1845.289979]  ? inode_sleep_on_writeback+0xf4/0x160
> [ 1845.291131]  schedule+0x133/0x220
> [ 1845.292052]  inode_sleep_on_writeback+0x14e/0x160
> [ 1845.293130]  ? __pfx_inode_sleep_on_writeback+0x10/0x10
> [ 1845.294289]  ? __pfx_lock_release+0x10/0x10
> [ 1845.295362]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 1845.296574]  ? __pfx___writeback_inodes_wb+0x10/0x10
> [ 1845.297750]  wb_writeback+0x330/0x7a0
> [ 1845.298800]  ? __pfx_wb_writeback+0x10/0x10
> [ 1845.299876]  ? get_nr_dirty_inodes+0xc7/0x170
> [ 1845.300988]  wb_workfn+0x7a1/0xcc0
> [ 1845.302019]  ? __pfx_wb_workfn+0x10/0x10
> [ 1845.303071]  ? lock_acquire+0x1b9/0x4e0
> [ 1845.304127]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.305232]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.306341]  process_one_work+0x898/0x14a0
> [ 1845.307377]  ? __pfx_lock_acquire+0x10/0x10
> [ 1845.308410]  ? __pfx_process_one_work+0x10/0x10
> [ 1845.309551]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [ 1845.310678]  worker_thread+0x100/0x12c0
> [ 1845.311702]  ? __kthread_parkme+0xc1/0x1f0
> [ 1845.312778]  ? __pfx_worker_thread+0x10/0x10
> [ 1845.313864]  kthread+0x2ea/0x3c0
> [ 1845.314848]  ? __pfx_kthread+0x10/0x10
> [ 1845.315885]  ret_from_fork+0x30/0x70
> [ 1845.316879]  ? __pfx_kthread+0x10/0x10
> [ 1845.317885]  ret_from_fork_asm+0x1b/0x30
> [ 1845.318896]  </TASK>
> [ 1845.319767] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
> [ 1845.321587] 
>                Showing all locks held in the system:
> [ 1845.323498] 2 locks held by kworker/0:1/9:
> [ 1845.324569]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.326209]  #1: ffff888100877d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.327999] 1 lock held by rcu_tasks_kthre/13:
> [ 1845.329153]  #0: ffffffffa8c7b010 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xde0
> [ 1845.330838] 1 lock held by rcu_tasks_rude_/14:
> [ 1845.332043]  #0: ffffffffa8c7ad70 (rcu_tasks_rude.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xde0
> [ 1845.333713] 1 lock held by rcu_tasks_trace/15:
> [ 1845.334939]  #0: ffffffffa8c7aa70 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xde0
> [ 1845.336716] 2 locks held by kworker/1:0/25:
> [ 1845.337890]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.339639]  #1: ffff888100977d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.341440] 1 lock held by khungtaskd/43:
> [ 1845.342669]  #0: ffffffffa8c7bbe0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x340
> [ 1845.344347] 2 locks held by kworker/1:1/49:
> [ 1845.345577]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.347382]  #1: ffff88810164fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.349278] 2 locks held by kworker/0:2/74:
> [ 1845.350547]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.352400]  #1: ffff88811c8ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.354301] 2 locks held by kworker/3:2/169:
> [ 1845.355618]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.357472]  #1: ffff88811f0e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.359445] 2 locks held by kworker/0:3/221:
> [ 1845.360862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.362800]  #1: ffff888126567d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.364804] 2 locks held by kworker/1:2/230:
> [ 1845.366259]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.368270]  #1: ffff8881285f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.370338] 2 locks held by kworker/2:3/291:
> [ 1845.371807]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.373789]  #1: ffff88812a1f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.375949] 2 locks held by kworker/1:3/322:
> [ 1845.377464]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.379533]  #1: ffff888105a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.381731] 1 lock held by in:imjournal/663:
> [ 1845.383335] 2 locks held by kworker/u8:7/2757:
> [ 1845.384953]  #0: ffff888101191938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.387067]  #1: ffff88813542fd98 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.389320] 2 locks held by kworker/3:4/2759:
> [ 1845.390985]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.393164]  #1: ffff888122ddfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.395410] 2 locks held by kworker/0:4/2760:
> [ 1845.397073]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.399329]  #1: ffff888107dbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.401670] 2 locks held by kworker/1:5/2762:
> [ 1845.403414]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.405626]  #1: ffff888105fbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.407962] 2 locks held by kworker/1:6/2764:
> [ 1845.409693]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.411996]  #1: ffff888134647d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.414335] 2 locks held by kworker/3:5/2765:
> [ 1845.416107]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.418376]  #1: ffff888128effd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.420758] 2 locks held by kworker/1:7/2767:
> [ 1845.422532]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.424711]  #1: ffff88810fcefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.427082] 2 locks held by kworker/1:8/2768:
> [ 1845.428790]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.431080]  #1: ffff88812a42fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.433495] 2 locks held by kworker/1:9/2770:
> [ 1845.435192]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.437507]  #1: ffff888135477d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.439982] 2 locks held by kworker/3:6/2771:
> [ 1845.441737]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.444015]  #1: ffff888127c6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.446448] 2 locks held by kworker/3:10/2776:
> [ 1845.448255]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.450561]  #1: ffff888129fafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.452971] 2 locks held by kworker/3:11/2777:
> [ 1845.454703]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.457029]  #1: ffff8881056b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.459377] 2 locks held by kworker/2:8/2779:
> [ 1845.461157]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.463483]  #1: ffff88812e997d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.465906] 2 locks held by kworker/3:13/2780:
> [ 1845.467678]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.469988]  #1: ffff888128d57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.472395] 2 locks held by kworker/3:14/2781:
> [ 1845.474175]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.476468]  #1: ffff88812c9bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.478896] 2 locks held by kworker/3:15/2782:
> [ 1845.480638]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.482919]  #1: ffff888104f27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.485299] 2 locks held by kworker/3:17/2784:
> [ 1845.487097]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.489383]  #1: ffff88812224fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.491737] 2 locks held by kworker/3:18/2785:
> [ 1845.493480]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.495790]  #1: ffff8881361afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.498159] 2 locks held by kworker/3:19/2786:
> [ 1845.499941]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.502266]  #1: ffff888127e67d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.504618] 2 locks held by kworker/3:22/2790:
> [ 1845.506418]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.508708]  #1: ffff888130d4fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.511121] 2 locks held by kworker/2:10/2791:
> [ 1845.512938]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.515179]  #1: ffff888113127d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.517588] 2 locks held by kworker/3:23/2793:
> [ 1845.519372]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.521683]  #1: ffff88812a89fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.524075] 2 locks held by kworker/3:24/2794:
> [ 1845.525876]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.528115]  #1: ffff888129a1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.530515] 2 locks held by kworker/3:25/2795:
> [ 1845.532283]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.534610]  #1: ffff88812ebb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.537020] 2 locks held by kworker/3:26/2796:
> [ 1845.538809]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.541117]  #1: ffff888119577d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.543506] 2 locks held by kworker/1:11/2797:
> [ 1845.545286]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.547624]  #1: ffff88813716fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.550018] 2 locks held by kworker/3:27/2798:
> [ 1845.551827]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.554139]  #1: ffff888136747d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.556535] 2 locks held by kworker/1:13/2800:
> [ 1845.558325]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.560657]  #1: ffff888131687d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.563055] 2 locks held by kworker/1:15/2802:
> [ 1845.564867]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.567176]  #1: ffff8881342d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.569574] 2 locks held by kworker/1:17/2804:
> [ 1845.571352]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.573643]  #1: ffff888132137d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.576005] 2 locks held by kworker/1:18/2805:
> [ 1845.577768]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.580107]  #1: ffff888134a5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.582512] 2 locks held by kworker/1:19/2806:
> [ 1845.584307]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.586598]  #1: ffff888135b87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.588971] 2 locks held by kworker/1:20/2807:
> [ 1845.590771]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.593039]  #1: ffff88810513fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.595437] 2 locks held by kworker/1:22/2809:
> [ 1845.597257]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.599584]  #1: ffff8881397bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.601975] 2 locks held by kworker/1:23/2810:
> [ 1845.603756]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.606073]  #1: ffff888139807d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.608442] 2 locks held by kworker/3:30/2814:
> [ 1845.610262]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.612547]  #1: ffff888101a27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.614937] 2 locks held by kworker/2:13/2815:
> [ 1845.616711]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.618912]  #1: ffff888120087d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.621317] 2 locks held by kworker/2:15/2817:
> [ 1845.623090]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.625381]  #1: ffff88812258fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.627743] 2 locks held by kworker/2:16/2818:
> [ 1845.629551]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.631844]  #1: ffff888133d47d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.634251] 2 locks held by kworker/2:19/2821:
> [ 1845.636011]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.638324]  #1: ffff88812ea37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.640711] 2 locks held by kworker/2:20/2822:
> [ 1845.642514]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.644824]  #1: ffff88813abd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.647217] 2 locks held by kworker/2:21/2823:
> [ 1845.649025]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.651351]  #1: ffff88813454fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.653690] 2 locks held by kworker/2:22/2824:
> [ 1845.655501]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.657763]  #1: ffff888132e5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.660177] 2 locks held by kworker/3:31/2825:
> [ 1845.661943]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.664289]  #1: ffff888138177d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.666651] 2 locks held by kworker/3:32/2826:
> [ 1845.668418]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.670748]  #1: ffff88812a26fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.673018] 2 locks held by kworker/3:38/2832:
> [ 1845.674821]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.677132]  #1: ffff8881319b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.679533] 2 locks held by kworker/2:24/2834:
> [ 1845.681338]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.683668]  #1: ffff8881185efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.686081] 2 locks held by kworker/2:25/2835:
> [ 1845.687877]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.690160]  #1: ffff8881299a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.692548] 2 locks held by kworker/2:27/2837:
> [ 1845.694316]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.696589]  #1: ffff888105ae7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.698995] 2 locks held by kworker/2:28/2838:
> [ 1845.700799]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.703139]  #1: ffff888133fd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.705549] 2 locks held by kworker/2:30/2840:
> [ 1845.707341]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.709638]  #1: ffff888127627d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.712057] 2 locks held by kworker/2:31/2841:
> [ 1845.713853]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.716160]  #1: ffff88810a8d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.718564] 2 locks held by kworker/2:34/2845:
> [ 1845.720341]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.722653]  #1: ffff888134107d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.725061] 2 locks held by kworker/3:40/2847:
> [ 1845.726873]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.729184]  #1: ffff88812f5cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.731588] 2 locks held by kworker/2:36/2848:
> [ 1845.733384]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.735681]  #1: ffff8881184efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.738077] 2 locks held by kworker/2:37/2851:
> [ 1845.739855]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.742191]  #1: ffff88813b89fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.744532] 2 locks held by kworker/1:24/2852:
> [ 1845.746338]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.748635]  #1: ffff8881275c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.751036] 2 locks held by kworker/1:26/2854:
> [ 1845.752810]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.755139]  #1: ffff88812238fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.757498] 2 locks held by kworker/1:28/2856:
> [ 1845.759286]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.761628]  #1: ffff888122f2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.763996] 2 locks held by kworker/1:29/2857:
> [ 1845.765766]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.768067]  #1: ffff88812215fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.770425] 2 locks held by kworker/1:30/2858:
> [ 1845.772237]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.774564]  #1: ffff888137177d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.776959] 2 locks held by kworker/1:32/2860:
> [ 1845.778767]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.781058]  #1: ffff88812a6bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.783435] 2 locks held by kworker/1:34/2862:
> [ 1845.785261]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.787605]  #1: ffff888119487d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.790019] 2 locks held by kworker/1:35/2863:
> [ 1845.791759]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.794093]  #1: ffff888135497d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.796540] 2 locks held by kworker/1:37/2865:
> [ 1845.798278]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.800636]  #1: ffff8881053b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.803035] 2 locks held by kworker/2:38/2866:
> [ 1845.804808]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.807150]  #1: ffff88810533fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.809571] 2 locks held by kworker/2:39/2867:
> [ 1845.811371]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.813698]  #1: ffff888119d57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.816104] 2 locks held by kworker/2:41/2869:
> [ 1845.817858]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.820217]  #1: ffff888119d7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.822579] 2 locks held by kworker/2:46/2874:
> [ 1845.824384]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.826691]  #1: ffff888106be7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.829051] 2 locks held by kworker/2:49/2878:
> [ 1845.830865]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.833194]  #1: ffff88813af5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.835616] 2 locks held by kworker/2:51/2881:
> [ 1845.837390]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.839737]  #1: ffff888122957d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.842116] 2 locks held by kworker/2:52/2882:
> [ 1845.843933]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.846254]  #1: ffff888123fe7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.848710] 2 locks held by kworker/2:53/2883:
> [ 1845.850464]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.852749]  #1: ffff88812282fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.855191] 2 locks held by kworker/2:54/2884:
> [ 1845.856982]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.859288]  #1: ffff88813baffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.861684] 2 locks held by kworker/2:55/2885:
> [ 1845.863494]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.865779]  #1: ffff888111c97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.868184] 2 locks held by kworker/2:56/2886:
> [ 1845.869955]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.872223]  #1: ffff888111c8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.874666] 2 locks held by kworker/1:40/2888:
> [ 1845.876443]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.878794]  #1: ffff88811b197d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.881130] 2 locks held by kworker/0:5/2889:
> [ 1845.882854]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.885148]  #1: ffff888118247d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.887535] 2 locks held by kworker/2:58/2890:
> [ 1845.889341]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.891495]  #1: ffff88810cf57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.893905] 2 locks held by kworker/1:41/2897:
> [ 1845.895655]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.897934]  #1: ffff888137987d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.900296] 2 locks held by kworker/2:61/2898:
> [ 1845.902071]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.904422]  #1: ffff88811008fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.906816] 2 locks held by kworker/0:7/2899:
> [ 1845.908574]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.910857]  #1: ffff88810530fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.913250] 2 locks held by kworker/2:62/2900:
> [ 1845.915027]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.917326]  #1: ffff88812eccfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.919696] 2 locks held by kworker/0:8/2901:
> [ 1845.921496]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.923773]  #1: ffff888139277d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.926133] 2 locks held by kworker/0:9/2903:
> [ 1845.927908]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.930231]  #1: ffff888105f27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.932617] 2 locks held by kworker/1:43/2905:
> [ 1845.934393]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.936659]  #1: ffff88810629fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.939044] 2 locks held by kworker/1:44/2907:
> [ 1845.940855]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.943143]  #1: ffff88811d127d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.945543] 2 locks held by kworker/0:10/2908:
> [ 1845.947309]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.949590]  #1: ffff8881361b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.952001] 2 locks held by kworker/1:45/2909:
> [ 1845.953773]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.956004]  #1: ffff888121147d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.958426] 2 locks held by kworker/2:65/2910:
> [ 1845.960240]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.962547]  #1: ffff88810c597d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.964935] 2 locks held by kworker/1:46/2911:
> [ 1845.966701]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.968990]  #1: ffff88812b2ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.971313] 2 locks held by kworker/1:47/2913:
> [ 1845.973100]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.975451]  #1: ffff88813f79fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.977880] 2 locks held by kworker/0:11/2916:
> [ 1845.979682]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.981949]  #1: ffff88811d7e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.984317] 2 locks held by kworker/2:68/2917:
> [ 1845.986087]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.988369]  #1: ffff88812c017d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.990715] 2 locks held by kworker/1:50/2920:
> [ 1845.992496]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1845.994769]  #1: ffff888123fc7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1845.997095] 2 locks held by kworker/0:12/2921:
> [ 1845.998885]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.001218]  #1: ffff8881202f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.003603] 2 locks held by kworker/1:51/2923:
> [ 1846.005405]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.007715]  #1: ffff8881114ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.010124] 2 locks held by kworker/2:71/2924:
> [ 1846.011907]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.014223]  #1: ffff88812ef5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.016615] 2 locks held by kworker/2:73/2928:
> [ 1846.018367]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.020712]  #1: ffff888117667d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.023000] 2 locks held by kworker/2:74/2931:
> [ 1846.024774]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.027108]  #1: ffff88811322fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.029466] 2 locks held by kworker/0:14/2932:
> [ 1846.031284]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.033576]  #1: ffff88810fd5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.035945] 2 locks held by kworker/2:75/2933:
> [ 1846.037730]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.040007]  #1: ffff8881367a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.042335] 2 locks held by kworker/0:16/2935:
> [ 1846.044121]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.046392]  #1: ffff88810c55fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.048757] 2 locks held by kworker/0:17/2937:
> [ 1846.050524]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.052871]  #1: ffff8881368a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.055241] 2 locks held by kworker/2:77/2938:
> [ 1846.056990]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.059306]  #1: ffff888122217d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.061588] 2 locks held by kworker/2:78/2940:
> [ 1846.063332]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.065636]  #1: ffff8881212a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.068005] 2 locks held by kworker/1:56/2941:
> [ 1846.069793]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.072091]  #1: ffff8881192efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.074460] 2 locks held by kworker/2:79/2942:
> [ 1846.076276]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.078593]  #1: ffff88811b187d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.080997] 2 locks held by kworker/1:57/2943:
> [ 1846.082766]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.085099]  #1: ffff888139457d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.087514] 2 locks held by kworker/2:80/2944:
> [ 1846.089313]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.091623]  #1: ffff888134697d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.094002] 2 locks held by kworker/1:59/2948:
> [ 1846.095792]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.098122]  #1: ffff888107d27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.100558] 2 locks held by kworker/2:82/2949:
> [ 1846.102361]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.104650]  #1: ffff88812810fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.107035] 2 locks held by kworker/0:19/2950:
> [ 1846.108804]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.111121]  #1: ffff8881313f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.113499] 2 locks held by kworker/1:60/2951:
> [ 1846.115278]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.117586]  #1: ffff88810d01fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.120000] 2 locks held by kworker/2:84/2954:
> [ 1846.121772]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.124105]  #1: ffff88812618fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.126532] 2 locks held by kworker/0:21/2955:
> [ 1846.128332]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.130576]  #1: ffff888107c6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.132910] 2 locks held by kworker/0:24/2960:
> [ 1846.134696]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.136967]  #1: ffff888100cafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.139353] 2 locks held by kworker/0:25/2962:
> [ 1846.141106]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.143454]  #1: ffff888111267d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.145841] 2 locks held by kworker/2:88/2963:
> [ 1846.147625]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.149903]  #1: ffff888134d0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.152280] 2 locks held by kworker/3:46/2964:
> [ 1846.154068]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.156371]  #1: ffff88810f7afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.158751] 2 locks held by kworker/3:47/2967:
> [ 1846.160398]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.162653]  #1: ffff88813c7b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.165045] 2 locks held by kworker/0:28/2968:
> [ 1846.166830]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.169156]  #1: ffff88812dc77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.171558] 2 locks held by kworker/0:29/2970:
> [ 1846.173363]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.175655]  #1: ffff88812892fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.178081] 2 locks held by kworker/0:30/2971:
> [ 1846.179861]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.182198]  #1: ffff88812dfd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.184562] 2 locks held by kworker/0:31/2973:
> [ 1846.186364]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.188663]  #1: ffff8881304ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.191053] 2 locks held by kworker/3:50/2974:
> [ 1846.192850]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.195153]  #1: ffff88811fa6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.197534] 2 locks held by kworker/3:51/2975:
> [ 1846.199290]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.201640]  #1: ffff888130c0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.204059] 2 locks held by kworker/2:90/2978:
> [ 1846.205833]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.208134]  #1: ffff888138457d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.210548] 2 locks held by kworker/2:94/2983:
> [ 1846.212355]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.214684]  #1: ffff88813c5b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.217078] 2 locks held by kworker/0:33/2984:
> [ 1846.218870]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.221180]  #1: ffff888118337d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.223616] 2 locks held by kworker/0:34/2987:
> [ 1846.225402]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.227712]  #1: ffff88812b827d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.230049] 2 locks held by kworker/0:35/2988:
> [ 1846.231865]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.234180]  #1: ffff88811761fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.236603] 2 locks held by kworker/0:36/2990:
> [ 1846.238405]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.240743]  #1: ffff88813a327d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.243154] 2 locks held by kworker/3:54/2991:
> [ 1846.244944]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.247254]  #1: ffff88813a32fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.249661] 2 locks held by kworker/2:96/2992:
> [ 1846.251415]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.253744]  #1: ffff88813a5cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.256072] 2 locks held by kworker/1:62/2993:
> [ 1846.257867]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.260131]  #1: ffff88810f7c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.262502] 2 locks held by kworker/2:98/2996:
> [ 1846.264306]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.266598]  #1: ffff88813544fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.268989] 2 locks held by kworker/1:64/2997:
> [ 1846.270789]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.273098]  #1: ffff88810f497d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.275485] 2 locks held by kworker/2:102/3001:
> [ 1846.277249]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.279558]  #1: ffff888107d37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.281985] 2 locks held by kworker/0:38/3004:
> [ 1846.283756]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.286069]  #1: ffff88812db1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.288455] 2 locks held by kworker/0:39/3006:
> [ 1846.290218]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.292529]  #1: ffff88812b847d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.294939] 2 locks held by kworker/2:105/3007:
> [ 1846.296685]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.299010]  #1: ffff888135e37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.301429] 2 locks held by kworker/0:40/3008:
> [ 1846.303243]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.305566]  #1: ffff888112cffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.307892] 2 locks held by kworker/2:107/3011:
> [ 1846.309698]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.312023]  #1: ffff88812e577d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.314422] 2 locks held by kworker/2:108/3013:
> [ 1846.316249]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.318523]  #1: ffff88812183fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.320928] 2 locks held by kworker/0:43/3014:
> [ 1846.322729]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.325007]  #1: ffff88813b8f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.327362] 2 locks held by kworker/1:65/3015:
> [ 1846.329133]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.331460]  #1: ffff8881230efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.333826] 2 locks held by kworker/0:44/3016:
> [ 1846.335617]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.337933]  #1: ffff888134f77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.340294] 2 locks held by kworker/2:110/3019:
> [ 1846.342063]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.344356]  #1: ffff888123877d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.346708] 2 locks held by kworker/2:111/3021:
> [ 1846.348463]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.350711]  #1: ffff88811b93fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.353066] 2 locks held by kworker/0:48/3024:
> [ 1846.354871]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.357159]  #1: ffff88812500fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.359565] 2 locks held by kworker/0:49/3026:
> [ 1846.361326]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.363655]  #1: ffff8881184a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.366021] 2 locks held by kworker/0:50/3027:
> [ 1846.367802]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.370043]  #1: ffff8881184afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.372410] 2 locks held by kworker/1:66/3028:
> [ 1846.374214]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.376522]  #1: ffff88813478fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.378915] 2 locks held by kworker/1:67/3029:
> [ 1846.380682]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.383009]  #1: ffff8881216e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.385428] 2 locks held by kworker/1:72/3034:
> [ 1846.387211]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.389542]  #1: ffff88812bdd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.391894] 2 locks held by kworker/1:73/3035:
> [ 1846.393613]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.395947]  #1: ffff88812bddfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.398383] 2 locks held by kworker/1:74/3036:
> [ 1846.400150]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.402449]  #1: ffff88811c49fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.404851] 2 locks held by kworker/1:75/3037:
> [ 1846.406632]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.408913]  #1: ffff888111587d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.411257] 2 locks held by kworker/1:77/3039:
> [ 1846.413046]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.415323]  #1: ffff88811157fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.417715] 2 locks held by kworker/1:79/3042:
> [ 1846.419479]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.421757]  #1: ffff888126f77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.424093] 2 locks held by kworker/1:80/3043:
> [ 1846.425872]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.428177]  #1: ffff888126f7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.430604] 2 locks held by kworker/1:82/3046:
> [ 1846.432382]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.434722]  #1: ffff88811b027d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.437127] 2 locks held by kworker/2:116/3052:
> [ 1846.438947]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.441264]  #1: ffff888138e4fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.443697] 2 locks held by kworker/2:118/3054:
> [ 1846.445508]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.447719]  #1: ffff88813ecc7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.450101] 2 locks held by kworker/2:120/3056:
> [ 1846.451878]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.454210]  #1: ffff88813ecdfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.456602] 2 locks held by kworker/2:122/3058:
> [ 1846.458392]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.460678]  #1: ffff88811c597d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.463034] 2 locks held by kworker/2:123/3059:
> [ 1846.464820]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.467113]  #1: ffff88811c59fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.469512] 2 locks held by kworker/2:125/3061:
> [ 1846.471288]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.473547]  #1: ffff88811c47fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.475876] 2 locks held by kworker/2:127/3063:
> [ 1846.477645]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.479943]  #1: ffff88812fbf7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.482357] 2 locks held by kworker/2:128/3064:
> [ 1846.484135]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.486426]  #1: ffff88810f5a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.488860] 2 locks held by kworker/2:131/3067:
> [ 1846.490666]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.492903]  #1: ffff88811f307d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.495335] 2 locks held by kworker/2:133/3069:
> [ 1846.497155]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.499482]  #1: ffff888130447d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.501832] 2 locks held by kworker/2:134/3070:
> [ 1846.503601]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.505908]  #1: ffff888130457d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.508286] 2 locks held by kworker/2:141/3077:
> [ 1846.510081]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.512419]  #1: ffff88813d78fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.514763] 2 locks held by kworker/0:55/3078:
> [ 1846.516571]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.518869]  #1: ffff88813d79fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.521270] 2 locks held by kworker/0:56/3080:
> [ 1846.523060]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.525405]  #1: ffff8881252f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.527817] 2 locks held by kworker/0:58/3082:
> [ 1846.529590]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.531794]  #1: ffff888110d6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.534196] 2 locks held by kworker/0:59/3083:
> [ 1846.535999]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.538293]  #1: ffff888110d77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.540713] 2 locks held by kworker/0:60/3084:
> [ 1846.542437]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.544711]  #1: ffff888119c07d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.547111] 2 locks held by kworker/0:62/3086:
> [ 1846.548917]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.551264]  #1: ffff88811464fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.553629] 2 locks held by kworker/0:64/3088:
> [ 1846.555433]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.557729]  #1: ffff88813ee47d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.560105] 2 locks held by kworker/0:65/3089:
> [ 1846.561924]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.564227]  #1: ffff88813ee4fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.566623] 2 locks held by kworker/0:66/3090:
> [ 1846.568414]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.570750]  #1: ffff88813ee5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.573183] 2 locks held by kworker/0:68/3092:
> [ 1846.574932]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.577277]  #1: ffff8881169b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.579664] 2 locks held by kworker/0:69/3093:
> [ 1846.581445]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.583780]  #1: ffff8881169bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.586161] 2 locks held by kworker/0:73/3097:
> [ 1846.587954]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.590274]  #1: ffff88811632fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.592663] 2 locks held by kworker/0:74/3098:
> [ 1846.594470]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.596751]  #1: ffff88811633fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.599107] 2 locks held by kworker/0:76/3100:
> [ 1846.600881]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.603217]  #1: ffff8881169dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.605601] 2 locks held by kworker/0:77/3101:
> [ 1846.607402]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.609573]  #1: ffff8881169e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.611971] 2 locks held by kworker/0:78/3102:
> [ 1846.613730]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.616042]  #1: ffff8881169f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.618444] 2 locks held by kworker/0:79/3103:
> [ 1846.620254]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.622558]  #1: ffff8881169ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.624952] 2 locks held by kworker/0:80/3104:
> [ 1846.626680]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.628957]  #1: ffff888113257d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.631318] 2 locks held by kworker/0:82/3106:
> [ 1846.633132]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.635458]  #1: ffff88811326fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.637825] 2 locks held by kworker/2:143/3107:
> [ 1846.639535]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.641623]  #1: ffff888113277d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.643714] 2 locks held by kworker/0:83/3108:
> [ 1846.645345]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.647378]  #1: ffff888116747d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.649476] 2 locks held by kworker/0:85/3110:
> [ 1846.651108]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.653157]  #1: ffff88811675fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.655265] 2 locks held by kworker/2:145/3115:
> [ 1846.656907]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.658943]  #1: ffff88811681fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.661043] 2 locks held by kworker/0:88/3116:
> [ 1846.662672]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.664700]  #1: ffff88811682fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.666800] 2 locks held by kworker/0:89/3117:
> [ 1846.668428]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.670467]  #1: ffff888116837d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.672564] 2 locks held by kworker/0:90/3118:
> [ 1846.674191]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.676231]  #1: ffff888116847d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.678334] 2 locks held by kworker/0:91/3119:
> [ 1846.679962]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.682000]  #1: ffff88811684fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.684357] 2 locks held by kworker/0:94/3122:
> [ 1846.686158]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.688463]  #1: ffff88811687fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.690827] 2 locks held by kworker/0:96/3124:
> [ 1846.692624]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.694930]  #1: ffff888116797d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.697350] 2 locks held by kworker/0:97/3125:
> [ 1846.699168]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.701468]  #1: ffff8881167a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.703838] 2 locks held by kworker/3:55/3126:
> [ 1846.705562]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.707872]  #1: ffff8881167efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.710221] 2 locks held by kworker/3:57/3129:
> [ 1846.712016]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.714368]  #1: ffff88810f7efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.716772] 2 locks held by kworker/2:147/3130:
> [ 1846.718550]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.720798]  #1: ffff888130f3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.723151] 2 locks held by kworker/3:58/3131:
> [ 1846.724961]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.727251]  #1: ffff88813387fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.729629] 2 locks held by kworker/3:60/3136:
> [ 1846.731377]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.733722]  #1: ffff88811cac7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.736070] 2 locks held by kworker/2:151/3137:
> [ 1846.737871]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.740181]  #1: ffff888119a2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.742605] 2 locks held by kworker/3:61/3138:
> [ 1846.744409]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.746708]  #1: ffff888132bbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.749059] 2 locks held by kworker/3:62/3141:
> [ 1846.750851]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.753065]  #1: ffff8881378dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.755504] 2 locks held by kworker/2:155/3144:
> [ 1846.757284]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.759575]  #1: ffff888118bffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.761933] 2 locks held by kworker/2:157/3147:
> [ 1846.763742]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.766062]  #1: ffff88812f4c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.768433] 2 locks held by kworker/3:66/3150:
> [ 1846.770245]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.772589]  #1: ffff88812c59fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.774922] 2 locks held by kworker/2:159/3151:
> [ 1846.776705]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.778997]  #1: ffff888128447d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.781418] 2 locks held by kworker/3:67/3152:
> [ 1846.783229]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.785552]  #1: ffff8881010c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.787935] 2 locks held by kworker/2:160/3153:
> [ 1846.789731]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.791997]  #1: ffff88811b8dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.794398] 2 locks held by kworker/3:68/3154:
> [ 1846.796217]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.798461]  #1: ffff8881230c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.800827] 2 locks held by kworker/3:69/3156:
> [ 1846.802626]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.804880]  #1: ffff88811a5afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.807294] 2 locks held by kworker/2:162/3157:
> [ 1846.809032]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.811342]  #1: ffff888123e27d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.813731] 2 locks held by kworker/3:70/3158:
> [ 1846.815471]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.817704]  #1: ffff888119967d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.820099] 2 locks held by kworker/2:163/3159:
> [ 1846.821912]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.824259]  #1: ffff88812eb17d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.826658] 2 locks held by kworker/3:72/3162:
> [ 1846.828445]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.830737]  #1: ffff88812b71fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.833091] 2 locks held by kworker/3:73/3164:
> [ 1846.834905]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.837180]  #1: ffff8881236cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.839547] 2 locks held by kworker/2:166/3165:
> [ 1846.841359]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.843683]  #1: ffff888127ce7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.846077] 2 locks held by kworker/2:167/3166:
> [ 1846.847874]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.850188]  #1: ffff888130f5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.852581] 2 locks held by kworker/3:74/3167:
> [ 1846.854381]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.856638]  #1: ffff88812a03fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.858994] 2 locks held by kworker/2:168/3168:
> [ 1846.860803]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.863125]  #1: ffff888118547d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.865502] 2 locks held by kworker/3:76/3170:
> [ 1846.867274]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.869590]  #1: ffff8881290efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.871966] 2 locks held by kworker/2:169/3171:
> [ 1846.873759]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.876067]  #1: ffff888113537d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.878500] 2 locks held by kworker/2:170/3172:
> [ 1846.880241]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.882550]  #1: ffff88812800fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.884923] 2 locks held by kworker/2:171/3174:
> [ 1846.886717]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.888964]  #1: ffff88810b7afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.891342] 2 locks held by kworker/3:78/3175:
> [ 1846.893152]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.895444]  #1: ffff88810b7cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.897839] 2 locks held by kworker/2:173/3178:
> [ 1846.899645]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.901930]  #1: ffff88813824fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.904368] 2 locks held by kworker/2:174/3180:
> [ 1846.906166]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.908487]  #1: ffff88811fbffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.910889] 2 locks held by kworker/2:175/3181:
> [ 1846.912677]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.914993]  #1: ffff88810d657d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.917381] 2 locks held by kworker/2:176/3183:
> [ 1846.919126]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.921418]  #1: ffff88812cd0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.923802] 2 locks held by kworker/0:99/3184:
> [ 1846.925561]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.927875]  #1: ffff888129a8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.930261] 2 locks held by kworker/0:101/3188:
> [ 1846.932086]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.934427]  #1: ffff888122d0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.936855] 2 locks held by kworker/0:102/3189:
> [ 1846.938637]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.940957]  #1: ffff888135087d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.943360] 2 locks held by kworker/2:179/3190:
> [ 1846.945154]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.947422]  #1: ffff88812db5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.949816] 2 locks held by kworker/2:180/3192:
> [ 1846.951610]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.953888]  #1: ffff888135c2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.956310] 2 locks held by kworker/2:181/3194:
> [ 1846.958131]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.960429]  #1: ffff88811e607d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.962842] 2 locks held by kworker/0:105/3195:
> [ 1846.964653]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.966944]  #1: ffff88810786fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.969357] 2 locks held by kworker/2:182/3196:
> [ 1846.971031]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.973347]  #1: ffff88810b6dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.975722] 2 locks held by kworker/0:106/3197:
> [ 1846.977477]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.979799]  #1: ffff888133eb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.982180] 2 locks held by kworker/2:183/3198:
> [ 1846.983956]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.986282]  #1: ffff88810fd4fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.988625] 2 locks held by kworker/2:184/3200:
> [ 1846.990385]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.992687]  #1: ffff88811d3bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1846.995091] 2 locks held by kworker/0:108/3201:
> [ 1846.996880]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1846.999212]  #1: ffff8881194f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.001610] 2 locks held by kworker/2:185/3202:
> [ 1847.003419]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.005697]  #1: ffff88812201fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.008064] 2 locks held by kworker/0:109/3203:
> [ 1847.009833]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.012147]  #1: ffff88812360fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.014580] 2 locks held by kworker/0:110/3205:
> [ 1847.016323]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.018596]  #1: ffff88812dbffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.020968] 2 locks held by kworker/0:111/3206:
> [ 1847.022748]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.025005]  #1: ffff888121917d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.027404] 2 locks held by kworker/0:113/3208:
> [ 1847.029168]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.031487]  #1: ffff888125257d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.033927] 2 locks held by kworker/0:114/3209:
> [ 1847.035695]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.037924]  #1: ffff888117cffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.040327] 2 locks held by kworker/0:115/3210:
> [ 1847.042093]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.044392]  #1: ffff88813ee97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.046790] 2 locks held by kworker/3:84/3214:
> [ 1847.048447]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.050752]  #1: ffff88811624fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.053143] 2 locks held by kworker/3:85/3215:
> [ 1847.054922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.057177]  #1: ffff88811625fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.059600] 2 locks held by kworker/3:86/3216:
> [ 1847.061342]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.063655]  #1: ffff888116267d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.066031] 2 locks held by kworker/3:87/3217:
> [ 1847.067846]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.070173]  #1: ffff888116277d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.072605] 2 locks held by kworker/3:88/3218:
> [ 1847.074322]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.076675]  #1: ffff88811627fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.079050] 2 locks held by kworker/3:90/3220:
> [ 1847.080866]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.083139]  #1: ffff88811629fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.085557] 2 locks held by kworker/0:116/3224:
> [ 1847.087325]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.089540]  #1: ffff8881162cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.091896] 2 locks held by kworker/0:117/3225:
> [ 1847.093708]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.095968]  #1: ffff8881162dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.098385] 2 locks held by kworker/0:120/3228:
> [ 1847.100165]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.102497]  #1: ffff8881162ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.104912] 2 locks held by kworker/0:122/3230:
> [ 1847.106730]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.109025]  #1: ffff888116617d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.111433] 2 locks held by kworker/0:124/3232:
> [ 1847.113261]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.115545]  #1: ffff888116637d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.117891] 2 locks held by kworker/0:125/3233:
> [ 1847.119688]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.121958]  #1: ffff88811664fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.124367] 2 locks held by kworker/0:126/3234:
> [ 1847.126165]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.128485]  #1: ffff888116657d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.130883] 2 locks held by kworker/0:127/3235:
> [ 1847.132680]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.134962]  #1: ffff888116667d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.137353] 2 locks held by kworker/0:128/3236:
> [ 1847.139112]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.141422]  #1: ffff88811666fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.143839] 2 locks held by kworker/0:129/3237:
> [ 1847.145625]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.147894]  #1: ffff88811667fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.150250] 2 locks held by kworker/0:130/3238:
> [ 1847.152017]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.154368]  #1: ffff888116687d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.156773] 2 locks held by kworker/0:135/3243:
> [ 1847.158555]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.160730]  #1: ffff8881166c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.163142] 2 locks held by kworker/0:136/3244:
> [ 1847.164910]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.167258]  #1: ffff8881166cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.169669] 2 locks held by kworker/3:95/3246:
> [ 1847.171438]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.173713]  #1: ffff8881166efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.176070] 2 locks held by kworker/3:97/3248:
> [ 1847.177888]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.180173]  #1: ffff88813ef07d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.182623] 2 locks held by kworker/0:137/3249:
> [ 1847.184437]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.186710]  #1: ffff88813ef1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.189078] 2 locks held by kworker/3:99/3251:
> [ 1847.190888]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.193149]  #1: ffff88813ef37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.195552] 2 locks held by kworker/3:102/3254:
> [ 1847.197351]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.199679]  #1: ffff88813ef57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.202095] 2 locks held by kworker/3:104/3256:
> [ 1847.203830]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.206136]  #1: ffff88813ef6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.208511] 2 locks held by kworker/3:107/3259:
> [ 1847.210327]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.212667]  #1: ffff88813ef9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.215030] 2 locks held by kworker/3:109/3261:
> [ 1847.216850]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.219130]  #1: ffff88813efb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.221522] 2 locks held by kworker/3:110/3262:
> [ 1847.223298]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.225646]  #1: ffff88813efbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.227959] 2 locks held by kworker/3:112/3264:
> [ 1847.229749]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.232022]  #1: ffff88811600fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.234410] 2 locks held by kworker/1:85/3265:
> [ 1847.236204]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.238532]  #1: ffff88811601fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.240856] 2 locks held by kworker/1:86/3266:
> [ 1847.242656]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.244925]  #1: ffff888116027d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.247269] 2 locks held by kworker/1:87/3267:
> [ 1847.249067]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.251384]  #1: ffff88811607fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.253760] 2 locks held by kworker/1:88/3268:
> [ 1847.255546]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.257786]  #1: ffff888116087d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.260211] 2 locks held by kworker/1:89/3269:
> [ 1847.262017]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.264285]  #1: ffff888116097d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.266675] 2 locks held by kworker/0:138/3270:
> [ 1847.268432]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.270712]  #1: ffff8881393cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.273097] 2 locks held by kworker/0:139/3272:
> [ 1847.274906]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.277199]  #1: ffff8881160e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.279629] 2 locks held by kworker/1:91/3273:
> [ 1847.281402]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.283688]  #1: ffff8881160f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.286055] 2 locks held by kworker/1:92/3275:
> [ 1847.287859]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.290142]  #1: ffff88811610fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.292499] 2 locks held by kworker/1:93/3277:
> [ 1847.294276]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.296572]  #1: ffff888116167d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.298959] 2 locks held by kworker/0:143/3280:
> [ 1847.300741]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.303053]  #1: ffff88811618fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.305458] 2 locks held by kworker/0:144/3282:
> [ 1847.307260]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.309562]  #1: ffff8881161a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.311971] 2 locks held by kworker/1:99/3289:
> [ 1847.313761]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.316077]  #1: ffff888116407d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.318447] 2 locks held by kworker/1:100/3291:
> [ 1847.320274]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.322556]  #1: ffff88811641fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.324976] 2 locks held by kworker/0:149/3292:
> [ 1847.326775]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.329064]  #1: ffff88811642fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.331461] 2 locks held by kworker/0:150/3294:
> [ 1847.333272]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.335568]  #1: ffff888116447d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.337986] 2 locks held by kworker/1:102/3295:
> [ 1847.339772]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.342014]  #1: ffff88811644fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.344402] 2 locks held by kworker/0:151/3296:
> [ 1847.346193]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.348481]  #1: ffff88811645fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.350902] 2 locks held by kworker/1:103/3297:
> [ 1847.352678]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.354987]  #1: ffff888116467d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.357395] 2 locks held by kworker/0:152/3298:
> [ 1847.359178]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.361441]  #1: ffff8881164afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.363841] 2 locks held by kworker/1:104/3299:
> [ 1847.365642]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.367965]  #1: ffff8881164bfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.370355] 2 locks held by kworker/0:154/3301:
> [ 1847.372178]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.374487]  #1: ffff8881164d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.376872] 2 locks held by kworker/0:155/3302:
> [ 1847.378658]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.380974]  #1: ffff8881164e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.383376] 2 locks held by kworker/0:156/3303:
> [ 1847.385153]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.387476]  #1: ffff8881164efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.389924] 2 locks held by kworker/0:157/3304:
> [ 1847.391724]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.394012]  #1: ffff888116507d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.396410] 2 locks held by kworker/0:158/3306:
> [ 1847.398175]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.400498]  #1: ffff888124897d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.402864] 2 locks held by kworker/2:188/3307:
> [ 1847.404675]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.406978]  #1: ffff88811f0afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.409328] 2 locks held by kworker/0:159/3310:
> [ 1847.411083]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.413366]  #1: ffff888129117d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.415716] 2 locks held by kworker/0:160/3312:
> [ 1847.417507]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.419764]  #1: ffff888105837d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.422101] 2 locks held by kworker/0:161/3314:
> [ 1847.423922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.426202]  #1: ffff88813d44fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.428598] 2 locks held by kworker/0:162/3316:
> [ 1847.430401]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.432698]  #1: ffff888121b37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.435047] 2 locks held by kworker/2:194/3317:
> [ 1847.436834]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.439139]  #1: ffff88812ba5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.441506] 2 locks held by kworker/0:163/3318:
> [ 1847.443320]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.445432]  #1: ffff88812923fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.447767] 2 locks held by kworker/2:197/3321:
> [ 1847.449571]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.451878]  #1: ffff88811ea3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.454252] 2 locks held by kworker/2:199/3323:
> [ 1847.456041]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.458354]  #1: ffff888113057d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.460746] 2 locks held by kworker/2:202/3326:
> [ 1847.462556]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.464785]  #1: ffff8881330b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.467097] 2 locks held by kworker/3:113/3328:
> [ 1847.468870]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.471219]  #1: ffff888122eb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.473651] 2 locks held by kworker/1:105/3329:
> [ 1847.475411]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.477759]  #1: ffff888127057d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.480120] 2 locks held by kworker/2:204/3331:
> [ 1847.481886]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.484226]  #1: ffff888117757d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.486641] 2 locks held by kworker/2:206/3333:
> [ 1847.488454]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.490716]  #1: ffff88812be7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.493118] 2 locks held by kworker/2:209/3336:
> [ 1847.494924]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.497212]  #1: ffff88811778fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.499623] 2 locks held by kworker/2:210/3337:
> [ 1847.501424]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.503699]  #1: ffff8881304efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.506080] 2 locks held by kworker/2:213/3340:
> [ 1847.507880]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.510224]  #1: ffff88811ffbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.512684] 2 locks held by kworker/2:220/3347:
> [ 1847.514454]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.516795]  #1: ffff8881165c7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.519190] 2 locks held by kworker/1:106/3348:
> [ 1847.520945]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.523275]  #1: ffff8881165cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.525698] 2 locks held by kworker/1:108/3350:
> [ 1847.527508]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.529801]  #1: ffff8881165e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.532214] 2 locks held by kworker/1:109/3351:
> [ 1847.534035]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.536369]  #1: ffff8881165f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.538787] 2 locks held by kworker/1:110/3352:
> [ 1847.540550]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.542874]  #1: ffff888116a37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.545280] 2 locks held by kworker/1:111/3353:
> [ 1847.547082]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.549357]  #1: ffff888116a47d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.551733] 2 locks held by kworker/1:112/3354:
> [ 1847.553531]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.555790]  #1: ffff888116a4fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.558151] 2 locks held by kworker/1:114/3356:
> [ 1847.559954]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.562196]  #1: ffff888116a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.564639] 2 locks held by kworker/1:115/3357:
> [ 1847.566434]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.568712]  #1: ffff888116a7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.571103] 2 locks held by kworker/1:116/3358:
> [ 1847.572922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.575204]  #1: ffff888116a87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.577562] 2 locks held by kworker/1:117/3359:
> [ 1847.579381]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.581713]  #1: ffff888116a9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.584083] 2 locks held by kworker/1:119/3361:
> [ 1847.585908]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.588165]  #1: ffff888116ab7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.590577] 2 locks held by kworker/1:120/3362:
> [ 1847.592382]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.594688]  #1: ffff888116abfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.597066] 2 locks held by kworker/1:121/3363:
> [ 1847.598848]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.601147]  #1: ffff888116acfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1847.603549] 2 locks held by kworker/1:123/3365:
> [ 1847.605340]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1847.607647]  #1: ffff888116ae7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [... ~120 further "2 locks held by kworker/..." entries trimmed; every one holds the same pair: (wq_completion)dio/dm-1 and (work_completion)(&dio->aio.work) ...]
> [ 1848.406710] 2 locks held by kworker/1:169/3587:
> [ 1848.408518]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.410797]  #1: ffff888128997d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.413123] 2 locks held by kworker/1:170/3588:
> [ 1848.414922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.417223]  #1: ffff888128c0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.419652] 2 locks held by kworker/1:173/3591:
> [ 1848.421465]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.423770]  #1: ffff88812479fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.426139] 2 locks held by kworker/3:159/3592:
> [ 1848.427922]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.430183]  #1: ffff88813b37fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.432620] 2 locks held by kworker/3:161/3594:
> [ 1848.434390]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.436736]  #1: ffff88812f527d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.439124] 2 locks held by kworker/1:174/3595:
> [ 1848.440806]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.443037]  #1: ffff88812ddefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.445407] 2 locks held by kworker/1:175/3596:
> [ 1848.447227]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.449537]  #1: ffff88813d93fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.451925] 2 locks held by kworker/1:176/3597:
> [ 1848.453695]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.456000]  #1: ffff88813d94fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.458405] 2 locks held by kworker/1:178/3599:
> [ 1848.460161]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.462413]  #1: ffff88813d967d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.464819] 2 locks held by kworker/1:179/3600:
> [ 1848.466623]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.468907]  #1: ffff88813dadfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.471296] 2 locks held by kworker/1:180/3601:
> [ 1848.473040]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.475356]  #1: ffff88813dae7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.477785] 2 locks held by kworker/1:181/3602:
> [ 1848.479595]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.481902]  #1: ffff88813daf7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.484286] 2 locks held by kworker/1:182/3603:
> [ 1848.486088]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.488360]  #1: ffff88813daffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.490762] 2 locks held by kworker/1:184/3605:
> [ 1848.492571]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.494862]  #1: ffff88813db1fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.497218] 2 locks held by kworker/1:185/3606:
> [ 1848.498997]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.501268]  #1: ffff88813db2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.503383] 2 locks held by kworker/1:186/3607:
> [ 1848.505022]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.507066]  #1: ffff88813db37d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.509171] 2 locks held by kworker/1:189/3610:
> [ 1848.510820]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.512859]  #1: ffff88813db5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.514967] 2 locks held by kworker/1:191/3612:
> [ 1848.516610]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.518642]  #1: ffff88813db77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.520738] 2 locks held by kworker/1:192/3613:
> [ 1848.522384]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.524420]  #1: ffff88813db7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.526519] 2 locks held by kworker/1:193/3614:
> [ 1848.528154]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.530192]  #1: ffff88813db8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.532301] 2 locks held by kworker/1:194/3615:
> [ 1848.533949]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.535994]  #1: ffff88813db9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.538100] 2 locks held by kworker/1:195/3616:
> [ 1848.539738]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.541777]  #1: ffff88813dbafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.543878] 2 locks held by kworker/1:196/3617:
> [ 1848.545523]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.547550]  #1: ffff88813dbb7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.549645] 2 locks held by kworker/1:198/3619:
> [ 1848.551279]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.553424]  #1: ffff88813dbd7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.555635] 2 locks held by kworker/1:199/3620:
> [ 1848.557345]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.559466]  #1: ffff88813dbe7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.561573] 2 locks held by kworker/1:200/3621:
> [ 1848.563208]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.565245]  #1: ffff88813dbefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.567358] 2 locks held by kworker/1:203/3624:
> [ 1848.568987]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.571029]  #1: ffff888161817d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.573136] 2 locks held by kworker/1:206/3627:
> [ 1848.574789]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.576827]  #1: ffff888161837d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.578934] 2 locks held by kworker/1:209/3630:
> [ 1848.580574]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.582608]  #1: ffff88816185fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.584704] 2 locks held by kworker/1:210/3631:
> [ 1848.586343]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.588374]  #1: ffff88816186fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.590480] 2 locks held by kworker/1:211/3632:
> [ 1848.592125]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.594163]  #1: ffff88816187fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.596268] 2 locks held by kworker/3:162/3633:
> [ 1848.597914]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.599956]  #1: ffff88816189fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.602064] 2 locks held by kworker/3:163/3634:
> [ 1848.603707]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.605740]  #1: ffff888127a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.607845] 2 locks held by kworker/3:164/3635:
> [ 1848.609485]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.611528]  #1: ffff888128f3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.613627] 2 locks held by kworker/3:166/3637:
> [ 1848.615263]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.617310]  #1: ffff88812b83fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.619419] 2 locks held by kworker/3:167/3638:
> [ 1848.621064]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.623106]  #1: ffff88812aa57d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.625217] 2 locks held by kworker/3:168/3639:
> [ 1848.626872]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.628921]  #1: ffff888127d3fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.631031] 2 locks held by kworker/3:170/3641:
> [ 1848.632673]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.634706]  #1: ffff88811ec6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.636811] 2 locks held by kworker/3:171/3642:
> [ 1848.638453]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.640493]  #1: ffff88812f687d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.642594] 2 locks held by kworker/3:172/3643:
> [ 1848.644233]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.646272]  #1: ffff8881380a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.648388] 2 locks held by kworker/1:212/3644:
> [ 1848.650034]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.652083]  #1: ffff888126e6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.654204] 2 locks held by kworker/1:213/3645:
> [ 1848.655862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.657906]  #1: ffff8881276afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.660014] 2 locks held by kworker/1:214/3646:
> [ 1848.661658]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.663690]  #1: ffff8881323dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.665799] 2 locks held by kworker/1:215/3647:
> [ 1848.667443]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.669472]  #1: ffff888129ecfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.671566] 2 locks held by kworker/1:216/3648:
> [ 1848.673206]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.675248]  #1: ffff88810f47fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.677353] 2 locks held by kworker/1:218/3650:
> [ 1848.679001]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.681046]  #1: ffff888126487d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.683158] 2 locks held by kworker/1:220/3652:
> [ 1848.684810]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.686858]  #1: ffff88813d47fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.688970] 2 locks held by kworker/1:222/3654:
> [ 1848.690618]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.692656]  #1: ffff8881289d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.694758] 2 locks held by kworker/1:223/3655:
> [ 1848.696401]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.698445]  #1: ffff888126a6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.700559] 2 locks held by kworker/1:224/3656:
> [ 1848.702204]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.704245]  #1: ffff88812338fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.706363] 2 locks held by kworker/1:226/3658:
> [ 1848.708009]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.710057]  #1: ffff888105697d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.712165] 2 locks held by kworker/1:227/3659:
> [ 1848.713818]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.715864]  #1: ffff888130d6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.717969] 2 locks held by kworker/1:229/3661:
> [ 1848.719616]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.721652]  #1: ffff88813c977d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.723760] 2 locks held by kworker/3:173/3663:
> [ 1848.725406]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.727446]  #1: ffff88812b3a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.729563] 2 locks held by kworker/3:174/3664:
> [ 1848.731188]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.733232]  #1: ffff88812b28fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.735350] 2 locks held by kworker/3:176/3666:
> [ 1848.736998]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.739041]  #1: ffff888130617d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.741151] 2 locks held by kworker/3:177/3667:
> [ 1848.742800]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.744841]  #1: ffff88812fcbfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.746948] 2 locks held by kworker/3:180/3670:
> [ 1848.748596]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.750633]  #1: ffff88812f107d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.752736] 2 locks held by kworker/3:181/3671:
> [ 1848.754382]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.756410]  #1: ffff88812feffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.758503] 2 locks held by kworker/3:182/3672:
> [ 1848.760134]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.762174]  #1: ffff88812bc8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.764287] 2 locks held by kworker/3:185/3675:
> [ 1848.765941]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.767986]  #1: ffff8881348dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.770095] 2 locks held by kworker/3:187/3677:
> [ 1848.771739]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.773785]  #1: ffff888132c87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.775892] 2 locks held by kworker/3:188/3678:
> [ 1848.777537]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.779565]  #1: ffff888121e2fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.781669] 2 locks held by kworker/3:195/3685:
> [ 1848.783314]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.785354]  #1: ffff88812c187d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.787460] 2 locks held by kworker/3:197/3687:
> [ 1848.789094]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.791137]  #1: ffff888131f5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.793245] 2 locks held by kworker/3:198/3688:
> [ 1848.794897]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.796942]  #1: ffff88813516fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.799050] 2 locks held by kworker/3:202/3692:
> [ 1848.800695]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.802729]  #1: ffff8881350efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.804833] 2 locks held by kworker/3:204/3694:
> [ 1848.806476]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.808520]  #1: ffff88811e2a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.810620] 2 locks held by kworker/3:205/3695:
> [ 1848.812256]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.814287]  #1: ffff88812f4cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.816397] 2 locks held by kworker/3:207/3697:
> [ 1848.818041]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.820082]  #1: ffff8881247dfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.822188] 2 locks held by kworker/3:208/3698:
> [ 1848.823848]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.825889]  #1: ffff88811934fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.827995] 2 locks held by kworker/3:209/3699:
> [ 1848.829643]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.831677]  #1: ffff8881231d7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.833786] 2 locks held by kworker/3:211/3701:
> [ 1848.835431]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.837472]  #1: ffff888133b6fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.839583] 2 locks held by kworker/3:212/3702:
> [ 1848.841220]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.843267]  #1: ffff88813242fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.845377] 2 locks held by kworker/3:214/3704:
> [ 1848.847024]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.849080]  #1: ffff8881316b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.851197] 2 locks held by kworker/3:217/3707:
> [ 1848.852849]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.854894]  #1: ffff88811476fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.856998] 2 locks held by kworker/3:218/3708:
> [ 1848.858649]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.860681]  #1: ffff888132bdfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.862786] 2 locks held by kworker/3:220/3710:
> [ 1848.864428]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.866459]  #1: ffff888125137d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.868556] 2 locks held by kworker/3:221/3711:
> [ 1848.870182]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.872225]  #1: ffff888132597d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.874338] 2 locks held by kworker/3:223/3713:
> [ 1848.875983]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.878025]  #1: ffff8881209cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.880131] 2 locks held by kworker/3:224/3714:
> [ 1848.881784]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.883828]  #1: ffff88811f877d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.885935] 2 locks held by kworker/3:225/3715:
> [ 1848.887576]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.889601]  #1: ffff88811cf47d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.891689] 2 locks held by kworker/3:226/3716:
> [ 1848.893331]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.895372]  #1: ffff88811cd7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.897468] 2 locks held by kworker/3:227/3717:
> [ 1848.899110]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.901150]  #1: ffff888111b9fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.903256] 2 locks held by kworker/3:232/3722:
> [ 1848.904908]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.906947]  #1: ffff88812aed7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.909055] 2 locks held by kworker/3:233/3723:
> [ 1848.910698]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.912732]  #1: ffff888130637d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.914839] 2 locks held by kworker/3:238/3728:
> [ 1848.916481]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.918521]  #1: ffff8881399efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.920623] 2 locks held by kworker/1:231/3737:
> [ 1848.922259]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.924298]  #1: ffff8881290a7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.926411] 2 locks held by kworker/1:232/3738:
> [ 1848.928047]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.930086]  #1: ffff888120b77d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.932193] 2 locks held by kworker/1:237/3743:
> [ 1848.933842]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.935906]  #1: ffff8881100f7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.938012] 2 locks held by kworker/1:238/3744:
> [ 1848.939659]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.941690]  #1: ffff88812e0e7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.943802] 2 locks held by kworker/1:239/3745:
> [ 1848.945451]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.947488]  #1: ffff88810ad5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.949607] 2 locks held by kworker/1:241/3747:
> [ 1848.951246]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.953289]  #1: ffff88811fb5fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.955408] 2 locks held by kworker/1:242/3748:
> [ 1848.957058]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.959102]  #1: ffff888119eefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.961208] 2 locks held by kworker/1:243/3749:
> [ 1848.962862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.964903]  #1: ffff888130d87d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.967016] 2 locks held by kworker/1:244/3750:
> [ 1848.968662]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.970700]  #1: ffff8881289afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.972810] 2 locks held by kworker/1:245/3751:
> [ 1848.974456]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.976486]  #1: ffff8881063cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.978601] 2 locks held by kworker/1:246/3752:
> [ 1848.980230]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.982276]  #1: ffff88811fca7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.984392] 2 locks held by kworker/1:248/3754:
> [ 1848.986033]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.988077]  #1: ffff888106997d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.990184] 2 locks held by kworker/1:249/3755:
> [ 1848.991840]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.993885]  #1: ffff8881372afd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1848.995994] 2 locks held by kworker/1:250/3756:
> [ 1848.997641]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1848.999679]  #1: ffff8881209b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.001787] 2 locks held by kworker/1:251/3757:
> [ 1849.003436]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.005482]  #1: ffff8881314ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.007582] 2 locks held by kworker/1:252/3758:
> [ 1849.009219]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.011260]  #1: ffff888130d8fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.013377] 2 locks held by kworker/1:253/3759:
> [ 1849.015026]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.017066]  #1: ffff8881371b7d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.019201] 2 locks held by kworker/1:255/3761:
> [ 1849.020862]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.022902]  #1: ffff88811d897d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.025007] 2 locks held by kworker/1:256/3762:
> [ 1849.026649]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.028680]  #1: ffff88813b99fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.030792] 2 locks held by kworker/1:257/3763:
> [ 1849.032440]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.034483]  #1: ffff88813b0efd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.036590] 2 locks held by kworker/3:247/3765:
> [ 1849.038220]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.040260]  #1: ffff888134867d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.042382] 2 locks held by kworker/3:248/3766:
> [ 1849.044023]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.046066]  #1: ffff888124b7fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.048170] 2 locks held by kworker/3:249/3767:
> [ 1849.049821]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.051860]  #1: ffff888131aafd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.053964] 2 locks held by kworker/3:251/3769:
> [ 1849.055610]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.057647]  #1: ffff8881068ffd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.059746] 2 locks held by kworker/3:252/3770:
> [ 1849.061399]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.063434]  #1: ffff88810b757d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.065537] 2 locks held by kworker/3:254/3772:
> [ 1849.067174]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.069213]  #1: ffff888136d97d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.071346] 2 locks held by kworker/3:255/3773:
> [ 1849.072992]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.075027]  #1: ffff88811830fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.077139] 2 locks held by kworker/3:256/3774:
> [ 1849.078795]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.080838]  #1: ffff888127547d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.082946] 2 locks held by kworker/1:258/4004:
> [ 1849.084592]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.086626]  #1: ffff88812cbefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.088727] 2 locks held by kworker/0:18/13817:
> [ 1849.090368]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.092406]  #1: ffff8881213cfd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.094515] 2 locks held by kworker/1:97/23521:
> [ 1849.096151]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.098194]  #1: ffff88810c33fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.100300] 2 locks held by kworker/1:259/28552:
> [ 1849.101959]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.103997]  #1: ffff888140777d98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.106102] 2 locks held by kworker/3:258/38106:
> [ 1849.107766]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.109802]  #1: ffff888111b0fd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> [ 1849.111912] 2 locks held by kworker/1:172/39248:
> [ 1849.113563]  #0: ffff88813c00c938 ((wq_completion)dio/dm-1){+.+.}-{0:0}, at: process_one_work+0x790/0x14a0
> [ 1849.115594]  #1: ffff888110eefd98 ((work_completion)(&dio->aio.work)){+.+.}-{0:0}, at: process_one_work+0x7be/0x14a0
> 
> [ 1849.119116] =============================================
> 
> 
> [2]
> 
> $ ps axuw | grep " D "
> root           9  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/0:1+dio/dm-1]
> root          25  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:0+dio/dm-1]
> root          49  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:1+dio/dm-1]
> root          74  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/0:2+dio/dm-1]
> root         169  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/3:2+dio/dm-1]
> root         221  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/0:3+dio/dm-1]
> root         230  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:2+dio/dm-1]
> root         291  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/2:3+dio/dm-1]
> root         322  0.0  0.0      0     0 ?        D    10:55   0:00 [kworker/1:3+dio/dm-1]
> root        2757  2.1  0.0      0     0 ?        D    10:57   1:14 [kworker/u8:7+flush-253:1]
> root        2759  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/3:4+dio/dm-1]
> root        2760  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/0:4+dio/dm-1]
> root        2762  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/1:5+dio/dm-1]
> root        2764  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/1:6+dio/dm-1]
> root        2765  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/3:5+dio/dm-1]
> ...

Shinichiro,

I have been aware for a long time that there is a problem with blktests/srp. I see hangs in
002 and 011 fairly often. I have not been able to figure out the root cause, but I suspect
there is a timing issue in the SRP drivers that cannot handle the slowness of the software
RoCE implementation. If you can give me any clues about what you are seeing, I am happy to
help figure this out.

Bob Pearson
rpearson@hpe.com (rpearsonhpe@gmail.com)


* Re: [bug report] blktests srp/002 hang
  2023-08-22  1:46 ` Bob Pearson
@ 2023-08-22 10:18   ` Shinichiro Kawasaki
  2023-08-22 15:20     ` Bart Van Assche
  0 siblings, 1 reply; 87+ messages in thread
From: Shinichiro Kawasaki @ 2023-08-22 10:18 UTC (permalink / raw)
  To: Bob Pearson; +Cc: linux-rdma, linux-scsi, Bart Van Assche

CC+: Bart,

On Aug 21, 2023 / 20:46, Bob Pearson wrote:
[...]
> Shinichiro,

Hello Bob, thanks for the response.

> 
> I have been aware for a long time that there is a problem with blktests/srp. I see hangs in
> 002 and 011 fairly often.

I repeated the test case srp/011 and observed that it hangs as well. This srp/011
hang can also be reproduced reliably. After reverting the commit 9b4b7c1f9f54, the
srp/011 hang disappeared, so I guess these two hangs have the same root cause.
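
The "repeating the test case" reproduction step can be sketched as a small shell loop. This is only an illustration: the `repeat_cmd` helper name, the 600-second budget, and the iteration count are assumptions for the sketch, not part of blktests; only `./check srp/011` is the actual blktests invocation.

```shell
#!/bin/sh
# Sketch of the repro loop: run a command up to N times under a timeout,
# stopping at the first failure or hang. The 'repeat_cmd' helper and the
# timeout value are illustrative assumptions, not part of blktests.
repeat_cmd() {
    max_iters=$1 timeout_s=$2
    shift 2
    i=1
    while [ "$i" -le "$max_iters" ]; do
        # 'timeout' kills the run if it takes too long, which is how a
        # hang shows up from the outside (exit status 124).
        if ! timeout "$timeout_s" "$@"; then
            echo "command failed or hung on iteration $i" >&2
            return 1
        fi
        i=$((i + 1))
    done
    echo "completed $max_iters iterations without a hang"
}

# In a blktests checkout this would be, e.g.:
# repeat_cmd 30 600 ./check srp/011
```

The report above needed between 15 and 30 repetitions to hit the hang, so a budget of 30 iterations matches that observation.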

> I have not been able to figure out the root cause, but I suspect
> there is a timing issue in the SRP drivers that cannot handle the slowness of the software
> RoCE implementation. If you can give me any clues about what you are seeing, I am happy to
> help figure this out.

Thanks for sharing your thoughts. I do not have SRP driver knowledge myself, so I am
not sure which clues I should provide. If you have any ideas about actions I can
take, please let me know.

IMHO, the srp/011 hang looks easier to dig into than srp/002, because srp/011 does
not involve filesystems. During the srp/011 hang, the kernel reported many
"SRP abort"s [X], which is similar to the srp/002 hang.

[X]

[  196.330820] run blktests srp/011 at 2023-08-22 17:22:42
[  196.819383] null_blk: module loaded
[  196.870572] null_blk: disk nullb0 created
[  196.886712] null_blk: disk nullb1 created
[  197.081369] rdma_rxe: loaded
[  197.103766] (null): rxe_set_mtu: Set mtu to 1024
[  197.139726] infiniband ens3_rxe: set active
[  197.142649] infiniband ens3_rxe: added ens3
[  197.196229] scsi_debug:sdebug_add_store: dif_storep 524288 bytes @ 000000005234c247
[  197.200354] scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1)
[  197.202780] scsi_debug:sdebug_driver_probe: host protection DIF3 DIX3
[  197.204566] scsi host3: scsi_debug: version 0191 [20210520]
                 dev_size_mb=32, opts=0x0, submit_queues=1, statistics=0
[  197.209853] scsi 3:0:0:0: Direct-Access     Linux    scsi_debug       0191 PQ: 0 ANSI: 7
[  197.213521] scsi 3:0:0:0: Power-on or device reset occurred
[  197.217732] sd 3:0:0:0: [sdc] 65536 512-byte logical blocks: (33.6 MB/32.0 MiB)
[  197.218797] sd 3:0:0:0: Attached scsi generic sg2 type 0
[  197.219951] sd 3:0:0:0: [sdc] Write Protect is off
[  197.223066] sd 3:0:0:0: [sdc] Mode Sense: 73 00 10 08
[  197.225611] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
[  197.229701] sd 3:0:0:0: [sdc] Enabling DIX T10-DIF-TYPE3-CRC, application tag size 6 bytes
[  197.232015] sd 3:0:0:0: [sdc] Enabling DIF Type 3 protection
[  197.233863] sd 3:0:0:0: [sdc] Preferred minimum I/O size 512 bytes
[  197.235412] sd 3:0:0:0: [sdc] Optimal transfer size 524288 bytes
[  197.241520] sd 3:0:0:0: [sdc] Attached SCSI disk
[  197.654951] Rounding down aligned max_sectors from 4294967295 to 4294967288
[  197.710283] ib_srpt:srpt_add_one: ib_srpt device = 00000000685934b8
[  197.710340] ib_srpt:srpt_use_srq: ib_srpt srpt_use_srq(ens3_rxe): use_srq = 0; ret = 0
[  197.710345] ib_srpt:srpt_add_one: ib_srpt Target login info: id_ext=505400fffe123456,ioc_guid=505400fffe123456,pkey=ffff,service_id=505400fffe123456
[  197.710657] ib_srpt:srpt_add_one: ib_srpt added ens3_rxe.
[  198.184239] Rounding down aligned max_sectors from 255 to 248
[  198.247444] Rounding down aligned max_sectors from 255 to 248
[  198.311742] Rounding down aligned max_sectors from 4294967295 to 4294967288
[  198.798620] ib_srp:srp_add_one: ib_srp: srp_add_one: 18446744073709551615 / 4096 = 4503599627370495 <> 512
[  198.798630] ib_srp:srp_add_one: ib_srp: ens3_rxe: mr_page_shift = 12, device->max_mr_size = 0xffffffffffffffff, device->max_fast_reg_page_list_len = 512, max_pages_per_mr = 512, mr_max_size = 0x200000
[  198.898881] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  198.898908] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  198.898942] ib_srp:add_target_store: ib_srp: max_sectors = 1024; max_pages_per_mr = 512; mr_page_size = 4096; max_sectors_per_mr = 4096; mr_per_cmd = 2
[  198.898947] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  198.910816] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  198.916313] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  198.919848] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 00000000d71a59ab
[  198.920007] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  198.920308] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=00000000a5feaed8 name=10.0.2.15 ch=00000000d71a59ab
[  198.921661] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  198.921688] scsi host4: ib_srp: using immediate data
[  198.921951] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-18: queued zerolength write
[  198.922831] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-18 wc->status 0
[  198.931958] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  198.937206] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  198.939984] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 000000009f3b3382
[  198.940133] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  198.940173] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=00000000c70b88d5 name=10.0.2.15 ch=000000009f3b3382
[  198.940454] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  198.940460] scsi host4: ib_srp: using immediate data
[  198.940840] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-20: queued zerolength write
[  198.941071] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-20 wc->status 0
[  198.950276] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  198.955351] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  198.958102] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 000000002f3d11a8
[  198.958270] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  198.958312] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=000000008dd11076 name=10.0.2.15 ch=000000002f3d11a8
[  198.958626] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  198.958632] scsi host4: ib_srp: using immediate data
[  198.959552] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-22: queued zerolength write
[  198.959815] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-22 wc->status 0
[  198.968720] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  198.973609] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  198.976219] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 00000000b6291aea
[  198.976369] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  198.976413] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=00000000d8231f1e name=10.0.2.15 ch=00000000b6291aea
[  198.976694] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  198.976700] scsi host4: ib_srp: using immediate data
[  198.976810] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-24: queued zerolength write
[  198.976929] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-24 wc->status 0
[  198.977610] scsi host4: SRP.T10:505400FFFE123456
[  198.987781] scsi 4:0:0:0: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 6
[  198.996088] scsi 4:0:0:0: LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
[  199.000231] scsi 4:0:0:0: alua: supports implicit and explicit TPGS
[  199.002500] scsi 4:0:0:0: alua: device naa.60014056e756c6c62300000000000000 port group 0 rel port 1
[  199.007201] sd 4:0:0:0: [sdd] 65536 512-byte logical blocks: (33.6 MB/32.0 MiB)
[  199.007936] sd 4:0:0:0: Attached scsi generic sg3 type 0
[  199.010141] sd 4:0:0:0: [sdd] Write Protect is off
[  199.012388] sd 4:0:0:0: [sdd] Mode Sense: 43 00 00 08
[  199.014718] scsi 4:0:0:2: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 6
[  199.015810] sd 4:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  199.019705] sd 4:0:0:0: [sdd] Preferred minimum I/O size 512 bytes
[  199.021312] sd 4:0:0:0: [sdd] Optimal transfer size 126976 bytes
[  199.023796] scsi 4:0:0:2: alua: supports implicit and explicit TPGS
[  199.025378] scsi 4:0:0:2: alua: device naa.60014057363736964626700000000000 port group 0 rel port 1
[  199.029763] sd 4:0:0:2: [sde] 65536 512-byte logical blocks: (33.6 MB/32.0 MiB)
[  199.029822] sd 4:0:0:2: Attached scsi generic sg4 type 0
[  199.030670] sd 4:0:0:2: [sde] Write Protect is off
[  199.034314] sd 4:0:0:2: [sde] Mode Sense: 43 00 10 08
[  199.036643] sd 4:0:0:0: [sdd] Attached SCSI disk
[  199.038861] scsi 4:0:0:1: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 6
[  199.039148] sd 4:0:0:2: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA
[  199.046070] sd 4:0:0:2: [sde] Preferred minimum I/O size 512 bytes
[  199.047580] scsi 4:0:0:1: LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
[  199.047685] sd 4:0:0:2: [sde] Optimal transfer size 524288 bytes
[  199.049654] scsi 4:0:0:1: alua: supports implicit and explicit TPGS
[  199.053197] scsi 4:0:0:1: alua: device naa.60014056e756c6c62310000000000000 port group 0 rel port 1
[  199.056642] sd 4:0:0:1: Attached scsi generic sg5 type 0
[  199.056679] sd 4:0:0:1: [sdf] 65536 512-byte logical blocks: (33.6 MB/32.0 MiB)
[  199.057539] ib_srp:srp_add_target: ib_srp: host4: SCSI scan succeeded - detected 3 LUNs
[  199.057979] sd 4:0:0:1: [sdf] Write Protect is off
[  199.058888] scsi host4: ib_srp: new target: id_ext 505400fffe123456 ioc_guid 505400fffe123456 sgid fe80:0000:0000:0000:5054:00ff:fe12:3456 dest 10.0.2.15
[  199.059238] sd 4:0:0:1: [sdf] Mode Sense: 43 00 00 08
[  199.064721] sd 4:0:0:2: [sde] Attached SCSI disk
[  199.066646] sd 4:0:0:1: [sdf] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  199.069653] sd 4:0:0:1: [sdf] Preferred minimum I/O size 512 bytes
[  199.071330] sd 4:0:0:1: [sdf] Optimal transfer size 126976 bytes
[  199.072389] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  199.072952] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  199.072985] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  199.073001] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  199.073012] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  199.083799] sd 4:0:0:1: [sdf] Attached SCSI disk
[  199.095910] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  199.095929] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  199.095959] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  199.095975] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  199.096005] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  199.096020] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  199.096030] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  199.431496] sd 4:0:0:1: alua: transition timeout set to 60 seconds
[  199.433258] sd 4:0:0:1: alua: port group 00 state A non-preferred supports TOlUSNA
[  199.456782] sd 4:0:0:2: alua: transition timeout set to 60 seconds
[  199.458737] sd 4:0:0:2: alua: port group 00 state A non-preferred supports TOlUSNA
[  199.488105] sd 4:0:0:0: alua: transition timeout set to 60 seconds
[  199.489964] sd 4:0:0:0: alua: port group 00 state A non-preferred supports TOlUSNA
[  204.807887] device-mapper: multipath: 253:3: Failing path 8:48.
[  204.856553] scsi 4:0:0:0: alua: Detached
[  204.868122] sd 4:0:0:2: [sde] Synchronizing SCSI cache
[  204.886615] scsi 4:0:0:2: alua: Detached
[  204.919557] scsi 4:0:0:1: alua: Detached
[  204.925989] ib_srpt receiving failed for ioctx 00000000ddab6801 with status 5
[  204.926715] ib_srpt receiving failed for ioctx 000000000bc9beb4 with status 5
[  204.927759] ib_srpt receiving failed for ioctx 000000002ec13abb with status 5
[  204.927762] ib_srpt receiving failed for ioctx 00000000a73075da with status 5
[  204.927764] ib_srpt receiving failed for ioctx 00000000db73d7b8 with status 5
[  204.927766] ib_srpt receiving failed for ioctx 00000000b7c85b9d with status 5
[  204.927767] ib_srpt receiving failed for ioctx 00000000d70acd70 with status 5
[  204.927769] ib_srpt receiving failed for ioctx 0000000059193fad with status 5
[  204.927771] ib_srpt receiving failed for ioctx 0000000019e9ec9e with status 5
[  204.927773] ib_srpt receiving failed for ioctx 0000000033e124b9 with status 5
[  205.443422] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-20: queued zerolength write
[  205.444973] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-18: queued zerolength write
[  205.445056] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-20 wc->status 5
[  205.446047] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-18 wc->status 5
[  205.446190] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-20
[  205.448320] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-18
[  205.506138] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-24: queued zerolength write
[  205.506195] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-22: queued zerolength write
[  205.506263] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-24 wc->status 5
[  205.506329] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-22 wc->status 5
[  205.506354] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-24
[  205.506381] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-22
[  209.945988] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  209.946026] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  209.946046] ib_srp:add_target_store: ib_srp: max_sectors = 1024; max_pages_per_mr = 512; mr_page_size = 4096; max_sectors_per_mr = 4096; mr_per_cmd = 2
[  209.946055] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  209.958117] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  209.962963] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  209.965421] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 00000000fec65d93
[  209.965591] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  209.965635] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=000000009f6a881a name=10.0.2.15 ch=00000000fec65d93
[  209.966180] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  209.966187] scsi host4: ib_srp: using immediate data
[  209.967393] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-26: queued zerolength write
[  209.967518] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-26 wc->status 0
[  209.976127] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  209.984015] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  209.988890] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 0000000029b704ac
[  209.989221] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  209.989297] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=0000000078cf4fe1 name=10.0.2.15 ch=0000000029b704ac
[  209.989641] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  209.989647] scsi host4: ib_srp: using immediate data
[  209.989814] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-28: queued zerolength write
[  209.989997] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-28 wc->status 0
[  209.999235] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  210.004184] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  210.006893] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 00000000492e551c
[  210.007050] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  210.007096] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=000000008b9aa995 name=10.0.2.15 ch=00000000492e551c
[  210.007402] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  210.007410] scsi host4: ib_srp: using immediate data
[  210.007582] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-30: queued zerolength write
[  210.008212] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-30 wc->status 0
[  210.017177] ib_srpt Received SRP_LOGIN_REQ with i_port_id fe80:0000:0000:0000:5054:00ff:fe12:3456, t_port_id 5054:00ff:fe12:3456:5054:00ff:fe12:3456 and it_iu_len 8260 on port 1 (guid=fe80:0000:0000:0000:5054:00ff:fe12:3456); pkey 0xffff
[  210.022684] ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
[  210.025487] ib_srpt:srpt_create_ch_ib: ib_srpt srpt_create_ch_ib: max_cqe= 8191 max_sge= 32 sq_size = 4096 ch= 0000000033dc05a8
[  210.025663] ib_srpt:srpt_cm_req_recv: ib_srpt registering src addr 10.0.2.15 or i_port_id 0xfe80000000000000505400fffe123456
[  210.025707] ib_srpt:srpt_cm_req_recv: ib_srpt Establish connection sess=0000000066092653 name=10.0.2.15 ch=0000000033dc05a8
[  210.026031] ib_srp:srp_max_it_iu_len: ib_srp: max_iu_len = 8260
[  210.026038] scsi host4: ib_srp: using immediate data
[  210.026169] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-32: queued zerolength write
[  210.026743] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-32 wc->status 0
[  210.026940] scsi host4: SRP.T10:505400FFFE123456
[  210.032959] scsi 4:0:0:0: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 6
[  210.041064] scsi 4:0:0:0: alua: supports implicit and explicit TPGS
[  210.043448] scsi 4:0:0:0: alua: device naa.60014056e756c6c62300000000000000 port group 0 rel port 1
[  210.047197] sd 4:0:0:0: Attached scsi generic sg3 type 0
[  210.047772] sd 4:0:0:0: [sdd] 65536 512-byte logical blocks: (33.6 MB/32.0 MiB)
[  210.051913] sd 4:0:0:0: [sdd] Write Protect is off
[  210.053426] sd 4:0:0:0: [sdd] Mode Sense: 43 00 00 08
[  210.054089] scsi 4:0:0:2: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 6
[  210.054483] sd 4:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  210.060433] sd 4:0:0:0: [sdd] Preferred minimum I/O size 512 bytes
[  210.062061] sd 4:0:0:0: [sdd] Optimal transfer size 126976 bytes
[  210.063874] scsi 4:0:0:2: alua: supports implicit and explicit TPGS
[  210.065561] scsi 4:0:0:2: alua: device naa.60014057363736964626700000000000 port group 0 rel port 1
[  210.069906] sd 4:0:0:2: Attached scsi generic sg4 type 0
[  210.071244] sd 4:0:0:0: [sdd] Attached SCSI disk
[  210.071438] sd 4:0:0:2: [sde] 65536 512-byte logical blocks: (33.6 MB/32.0 MiB)
[  210.075126] sd 4:0:0:2: [sde] Write Protect is off
[  210.076561] sd 4:0:0:2: [sde] Mode Sense: 43 00 10 08
[  210.077417] sd 4:0:0:2: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA
[  210.080889] sd 4:0:0:2: [sde] Preferred minimum I/O size 512 bytes
[  210.081559] scsi 4:0:0:1: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 6
[  210.082390] sd 4:0:0:2: [sde] Optimal transfer size 524288 bytes
[  210.090475] scsi 4:0:0:1: alua: supports implicit and explicit TPGS
[  210.092667] scsi 4:0:0:1: alua: device naa.60014056e756c6c62310000000000000 port group 0 rel port 1
[  210.096228] sd 4:0:0:1: [sdf] 65536 512-byte logical blocks: (33.6 MB/32.0 MiB)
[  210.098474] sd 4:0:0:1: [sdf] Write Protect is off
[  210.099943] sd 4:0:0:1: [sdf] Mode Sense: 43 00 00 08
[  210.100772] sd 4:0:0:1: [sdf] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  210.104885] sd 4:0:0:1: [sdf] Preferred minimum I/O size 512 bytes
[  210.106065] sd 4:0:0:2: [sde] Attached SCSI disk
[  210.106425] sd 4:0:0:1: [sdf] Optimal transfer size 126976 bytes
[  210.109617] sd 4:0:0:1: Attached scsi generic sg5 type 0
[  210.112866] ib_srp:srp_add_target: ib_srp: host4: SCSI scan succeeded - detected 3 LUNs
[  210.112873] scsi host4: ib_srp: new target: id_ext 505400fffe123456 ioc_guid 505400fffe123456 sgid fe80:0000:0000:0000:5054:00ff:fe12:3456 dest 10.0.2.15
[  210.114809] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  210.114827] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  210.114857] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  210.114873] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  210.114883] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  210.121314] sd 4:0:0:1: [sdf] Attached SCSI disk
[  210.133745] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  210.133764] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  210.133796] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  210.133813] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  210.133846] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  210.133861] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  210.133871] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  210.325220] sd 4:0:0:0: alua: transition timeout set to 60 seconds
[  210.327176] sd 4:0:0:0: alua: port group 00 state A non-preferred supports TOlUSNA
[  210.512382] sd 4:0:0:1: alua: transition timeout set to 60 seconds
[  210.514366] sd 4:0:0:1: alua: port group 00 state A non-preferred supports TOlUSNA
[  210.537067] sd 4:0:0:2: alua: transition timeout set to 60 seconds
[  210.538788] sd 4:0:0:2: alua: port group 00 state A non-preferred supports TOlUSNA
[  217.322048] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.322067] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.322078] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  217.336141] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.336160] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.336190] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  217.336206] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  217.336216] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  217.351059] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.351079] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.351109] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  217.351125] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  217.351155] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  217.351171] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  217.351180] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  217.583935] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.583961] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.583974] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  217.599109] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.599128] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.599158] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  217.599174] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  217.599184] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  217.617214] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.617234] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.617270] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  217.617285] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  217.617316] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  217.617331] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  217.617341] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  217.839147] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.839168] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.839187] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  217.853795] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.853815] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.853846] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  217.853861] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  217.853872] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  217.875042] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  217.875061] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  217.875092] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  217.875107] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  217.875138] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  217.875152] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  217.875162] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  218.110548] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.110585] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.110603] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  218.127935] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.127959] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.128003] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.128023] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.128036] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  218.145223] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.145243] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.145274] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.145295] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.145326] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  218.145340] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  218.145351] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  218.379877] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.379897] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.379908] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  218.399268] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.399298] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.399330] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.399346] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.399356] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  218.414922] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.414942] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.414973] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.414989] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.415019] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  218.415034] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  218.415044] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  218.657313] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.657347] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.657365] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  218.672418] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.672437] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.672468] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.672483] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.672494] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  218.687795] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.687815] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.687846] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.687861] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.687892] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  218.687907] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  218.687917] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  218.932504] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.932549] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.932561] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  218.948134] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.948155] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.948185] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.948200] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.948210] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  218.961885] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  218.961904] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  218.961935] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  218.961951] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  218.961981] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  218.961996] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  218.962006] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  219.196670] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.196691] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.196701] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  219.213009] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.213029] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.213059] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  219.213075] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  219.213085] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  219.231373] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.231392] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.231424] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  219.231439] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  219.231470] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  219.231485] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  219.231495] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  219.483889] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.483910] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.483920] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  219.498333] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.498365] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.498405] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  219.498425] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  219.498439] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  219.515059] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.515080] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.515110] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  219.515128] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  219.515158] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  219.515173] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  219.515183] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  219.760020] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.760040] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.760051] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  219.777862] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.777896] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.777958] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  219.777987] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  219.778005] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  219.799030] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  219.799051] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  219.799087] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  219.799102] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  219.799133] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  219.799147] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  219.799158] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  227.086363] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.086383] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.086393] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  227.107902] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.107922] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.107952] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.107968] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.107978] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  227.125254] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.125274] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.125304] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.125320] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.125350] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  227.125365] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  227.125375] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  227.355831] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.355852] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.355862] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  227.369483] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.369502] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.369533] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.369572] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.369584] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  227.386900] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.386920] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.386951] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.386966] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.386997] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  227.387012] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  227.387022] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  227.610540] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.610603] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.610620] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  227.628902] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.628922] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.628955] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.628971] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.628981] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  227.645234] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.645255] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.645285] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.645301] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.645334] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  227.645349] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  227.645359] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  227.873626] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.873646] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.873656] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  227.890838] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.890857] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.890887] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.890903] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.890913] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  227.905153] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  227.905173] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  227.905203] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  227.905218] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  227.905249] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  227.905264] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  227.905274] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  228.130167] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.130187] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.130197] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  228.151246] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.151271] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.151312] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.151332] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.151345] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  228.177952] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.177972] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.178003] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.178020] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.178051] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  228.178066] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  228.178076] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  228.408180] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.408204] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.408217] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  228.429107] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.429133] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.429173] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.429193] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.429206] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  228.446183] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.446202] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.446233] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.446248] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.446279] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  228.446295] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  228.446305] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  228.681133] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.681166] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.681180] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  228.699467] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.699490] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.699521] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.699536] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.699547] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  228.715076] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.715096] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.715127] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.715142] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.715173] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  228.715188] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  228.715197] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  228.942150] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.942176] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.942190] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  228.957037] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.957068] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.957125] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.957153] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.957172] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  228.973879] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  228.973900] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  228.973931] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  228.973947] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  228.973978] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  228.973993] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  228.974003] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  229.204277] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.204298] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.204308] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  229.224969] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.224989] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.225021] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  229.225037] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  229.225047] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  229.241092] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.241111] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.241142] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  229.241157] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  229.241187] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  229.241202] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  229.241212] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  229.467761] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.467786] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.467797] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  229.483759] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.483790] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.483839] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  229.483866] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  229.483883] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  229.501487] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.501512] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.501581] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  229.501615] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  229.501660] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  229.501680] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  229.501693] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  229.752662] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.752682] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.752692] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  229.764884] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.764903] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.764934] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  229.764951] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  229.764961] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  229.781797] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  229.781817] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  229.781848] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  229.781863] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  229.781893] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  229.781908] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  229.781918] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  230.010623] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.010643] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.010654] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  230.026757] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.026777] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.026816] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.026832] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.026842] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  230.047533] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.047671] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.047714] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.047735] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.047777] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  230.047796] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  230.047815] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  230.276899] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.276919] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.276929] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  230.294911] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.294932] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.294963] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.294979] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.294989] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  230.316396] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.316417] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.316447] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.316463] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.316493] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  230.316508] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  230.316518] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  230.543058] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.543081] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.543091] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  230.560038] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.560058] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.560089] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.560105] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.560115] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  230.580843] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.580864] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.580895] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.580911] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.580941] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  230.580956] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  230.580966] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  230.821544] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.821589] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.821600] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  230.833829] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.833861] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.833892] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.833908] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.833918] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  230.854790] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  230.854809] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  230.854845] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  230.854861] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  230.854891] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  230.854906] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  230.854916] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  231.101599] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.101620] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.101630] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  231.122996] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.123015] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.123046] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.123061] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.123072] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  231.136982] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.137001] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.137032] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.137047] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.137078] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  231.137093] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  231.137103] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  231.380246] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.380267] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.380278] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  231.393413] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.393433] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.393464] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.393480] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.393490] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  231.414392] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.414425] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.414487] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.414516] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.414601] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  231.414631] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  231.414649] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  231.645614] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.645634] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.645645] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  231.664063] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.664082] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.664113] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.664129] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.664139] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  231.681247] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.681267] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.681298] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.681313] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.681344] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  231.681358] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  231.681368] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  231.924774] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.924793] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.924803] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  231.944478] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.944499] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.944531] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.944547] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.944557] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  231.962526] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  231.962545] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  231.962606] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  231.962623] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  231.962653] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  231.962668] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  231.962678] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  232.185860] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  232.185880] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  232.185890] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=10.0.2.15
[  232.202741] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  232.202763] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  232.202794] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  232.202809] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  232.202820] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fec0:0000:0000:0000:5054:00ff:fe12:3456
[  232.223744] ib_srp:srp_parse_in: ib_srp: 10.0.2.15 -> 10.0.2.15:0
[  232.223763] ib_srp:srp_parse_in: ib_srp: 10.0.2.15:5555 -> 10.0.2.15:5555
[  232.223793] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456] -> [fec0::5054:ff:fe12:3456]:0/167772687%0
[  232.223809] ib_srp:srp_parse_in: ib_srp: [fec0::5054:ff:fe12:3456]:5555 -> [fec0::5054:ff:fe12:3456]:5555/167772687%0
[  232.223839] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2] -> [fe80::5054:ff:fe12:3456]:0/167772687%2
[  232.223854] ib_srp:srp_parse_in: ib_srp: [fe80::5054:ff:fe12:3456%2]:5555 -> [fe80::5054:ff:fe12:3456]:5555/167772687%2
[  232.223864] scsi host5: ib_srp: Already connected to target port with id_ext=505400fffe123456;ioc_guid=505400fffe123456;dest=fe80:0000:0000:0000:5054:00ff:fe12:3456
[  242.836918] scsi host4: SRP abort called
[  242.840917] scsi host4: Sending SRP abort for tag 0x3d
[  242.846319] ib_srpt:srpt_handle_tsk_mgmt: ib_srpt recv tsk_mgmt fn 1 for task_tag 61 and cmd tag 2147483649 ch 00000000fec65d93 sess 000000009f6a881a
[  242.846638] ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: 61
[  242.847642] scsi host4: Null scmnd for RSP w/tag 0x0000000000003d received on ch 0 / QP 0x19
[  242.854933] scsi host4: SRP abort called
[  242.857399] scsi host4: SRP abort called
[  242.859682] scsi host4: SRP abort called
[  242.861854] scsi host4: SRP abort called
[  242.863946] scsi host4: SRP abort called
[  242.866014] scsi host4: SRP abort called
[  242.868081] scsi host4: SRP abort called
[  242.870154] scsi host4: SRP abort called
[  242.871974] scsi host4: SRP abort called
[  242.874470] scsi host4: SRP abort called
[  242.876505] scsi host4: SRP abort called
[  242.878640] scsi host4: SRP abort called
[  242.880548] scsi host4: SRP abort called
[  242.882525] scsi host4: SRP abort called
[  242.884463] scsi host4: SRP abort called
[  242.886145] scsi host4: SRP abort called
[  242.887812] scsi host4: SRP abort called
[  242.889573] scsi host4: SRP abort called
[  242.891210] scsi host4: SRP abort called
[  242.892901] scsi host4: SRP abort called
[  242.903730] device-mapper: multipath: 253:3: Failing path 8:48.
[  242.928724] scsi 4:0:0:0: alua: Detached
[  242.948278] sd 4:0:0:2: [sde] Synchronizing SCSI cache
[  242.969091] scsi 4:0:0:2: alua: Detached
[  242.996739] srpt_recv_done: 502 callbacks suppressed
[  242.996743] ib_srpt receiving failed for ioctx 000000002b03f6bc with status 5
[  242.996898] ib_srpt receiving failed for ioctx 0000000067119178 with status 5
[  242.997255] ib_srpt receiving failed for ioctx 00000000451fc813 with status 5
[  242.997850] ib_srpt receiving failed for ioctx 0000000006e2d4c1 with status 5
[  242.997853] ib_srpt receiving failed for ioctx 000000007db43a18 with status 5
[  242.997855] ib_srpt receiving failed for ioctx 00000000976247d6 with status 5
[  242.997856] ib_srpt receiving failed for ioctx 000000008e5c98aa with status 5
[  242.997858] ib_srpt receiving failed for ioctx 00000000f17ceb65 with status 5
[  242.997860] ib_srpt receiving failed for ioctx 00000000e0ba06d1 with status 5
[  242.997861] ib_srpt receiving failed for ioctx 000000007ac01832 with status 5
[  243.020721] scsi 4:0:0:1: alua: Detached
[  243.522706] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-32: queued zerolength write
[  243.522742] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-30: queued zerolength write
[  243.522769] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-28: queued zerolength write
[  243.522785] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-32 wc->status 5
[  243.522795] ib_srpt:srpt_zerolength_write: ib_srpt 10.0.2.15-26: queued zerolength write
[  243.522806] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-32
[  243.522848] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-30 wc->status 5
[  243.522879] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-26 wc->status 5
[  243.522905] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-30
[  243.522910] ib_srpt:srpt_zerolength_write_done: ib_srpt 10.0.2.15-28 wc->status 5
[  243.522921] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-26
[  243.522934] ib_srpt:srpt_release_channel_work: ib_srpt 10.0.2.15-28
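For readers decoding the trace above: wc->status 5 corresponds to IB_WC_WR_FLUSH_ERR in the
kernel's enum ib_wc_status (include/rdma/ib_verbs.h), i.e. work requests flushed while the QP
is being torn down. A small lookup helper for the first few status values (the helper itself
is just an illustration, not kernel code):

```shell
#!/bin/bash
# Translate the low numeric ib_wc_status values seen in traces
# (e.g. "wc->status 5") into their enum names. Only the first few
# values from enum ib_wc_status are listed.
wc_status_name() {
	local names=(SUCCESS LOC_LEN_ERR LOC_QP_OP_ERR LOC_EEC_OP_ERR
		     LOC_PROT_ERR WR_FLUSH_ERR)
	if (( $1 >= 0 && $1 < ${#names[@]} )); then
		echo "IB_WC_${names[$1]}"
	else
		echo "unknown status $1"
	fi
}
```

For example, `wc_status_name 5` prints `IB_WC_WR_FLUSH_ERR`, matching the zerolength-write
completions in the log.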

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-08-22 10:18   ` Shinichiro Kawasaki
@ 2023-08-22 15:20     ` Bart Van Assche
  2023-08-23 16:19       ` Bob Pearson
  2023-08-25  1:11       ` Shinichiro Kawasaki
  0 siblings, 2 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-08-22 15:20 UTC (permalink / raw)
  To: Shinichiro Kawasaki, Bob Pearson; +Cc: linux-rdma, linux-scsi

On 8/22/23 03:18, Shinichiro Kawasaki wrote:
> CC+: Bart,
> 
> On Aug 21, 2023 / 20:46, Bob Pearson wrote:
> [...]
>> Shinichiro,
> 
> Hello Bob, thanks for the response.
> 
>>
>> I have been aware for a long time that there is a problem with blktests/srp. I see hangs in
>> 002 and 011 fairly often.
> 
> I repeated the test case srp/011, and observed it hangs. This hang at srp/011
> also can be recreated in stable manner. I reverted the commit 9b4b7c1f9f54
> then observed the srp/011 hang disappeared. So, I guess these two hangs have
> same root cause.
> 
>> I have not been able to figure out the root cause but suspect that
>> there is a timing issue in the srp drivers which cannot handle the slowness of the software
>>> RoCE implementation. If you can give me any clues about what you are seeing I am happy to help
>> try to figure this out.
> 
> Thanks for sharing your thoughts. I myself do not have srp driver knowledge, and
> not sure what clue I should provide. If you have any idea of the action I can
> take, please let me know.

Hi Shinichiro and Bob,

When I initially developed the SRP tests these were working reliably in
combination with the rdma_rxe driver. Since 2017 I frequently see issues when
running the SRP tests on top of the rdma_rxe driver, issues that I do not see
if I run the SRP tests on top of the soft-iWARP driver (siw). How about
changing the default for the SRP tests from rdma_rxe to siw and to let the
RDMA community resolve the rdma_rxe issues?
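Whichever soft driver the tests default to, both can be instantiated through the iproute2
`rdma` tool. A sketch that emits (rather than executes) the link-add command for either
driver — the helper name and dry-run style are illustrative, not how blktests actually
selects a driver:

```shell
#!/bin/bash
# Print the iproute2 command that would create a soft-RoCE (rxe) or
# soft-iWARP (siw) RDMA link on a given network device. Run the output
# as root on a real test machine.
rdma_link_cmd() {
	local driver=$1 netdev=$2
	case "$driver" in
	rxe|siw)
		echo "rdma link add ${driver}_${netdev} type ${driver} netdev ${netdev}"
		;;
	*)
		echo "unsupported soft-RDMA driver: $driver" >&2
		return 1
		;;
	esac
}
```

For example, `rdma_link_cmd siw eth0` prints `rdma link add siw_eth0 type siw netdev eth0`.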

Thanks,

Bart.



* Re: [bug report] blktests srp/002 hang
  2023-08-22 15:20     ` Bart Van Assche
@ 2023-08-23 16:19       ` Bob Pearson
  2023-08-23 19:46         ` Bart Van Assche
                           ` (2 more replies)
  2023-08-25  1:11       ` Shinichiro Kawasaki
  1 sibling, 3 replies; 87+ messages in thread
From: Bob Pearson @ 2023-08-23 16:19 UTC (permalink / raw)
  To: Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 8/22/23 10:20, Bart Van Assche wrote:
> On 8/22/23 03:18, Shinichiro Kawasaki wrote:
>> CC+: Bart,
>>
>> On Aug 21, 2023 / 20:46, Bob Pearson wrote:
>> [...]
>>> Shinichiro,
>>
>> Hello Bob, thanks for the response.
>>
>>>
>>> I have been aware for a long time that there is a problem with blktests/srp. I see hangs in
>>> 002 and 011 fairly often.
>>
>> I repeated the test case srp/011, and observed it hangs. This hang at srp/011
>> also can be recreated in stable manner. I reverted the commit 9b4b7c1f9f54
>> then observed the srp/011 hang disappeared. So, I guess these two hangs have
>> same root cause.
>>
>>> I have not been able to figure out the root cause but suspect that
>>> there is a timing issue in the srp drivers which cannot handle the slowness of the software
>>> RoCE implementation. If you can give me any clues about what you are seeing I am happy to help
>>> try to figure this out.
>>
>> Thanks for sharing your thoughts. I myself do not have srp driver knowledge, and
>> not sure what clue I should provide. If you have any idea of the action I can
>> take, please let me know.
> 
> Hi Shinichiro and Bob,
> 
> When I initially developed the SRP tests these were working reliably in
> combination with the rdma_rxe driver. Since 2017 I frequently see issues when
> running the SRP tests on top of the rdma_rxe driver, issues that I do not see
> if I run the SRP tests on top of the soft-iWARP driver (siw). How about
> changing the default for the SRP tests from rdma_rxe to siw and to let the
> RDMA community resolve the rdma_rxe issues?
> 
> Thanks,
> 
> Bart.
> 

Bart,

I have also seen the same hangs with siw, not as frequently but with the same
symptoms. About every month or so I take another run at trying to find and fix
this bug, but I have not succeeded yet. I haven't seen anything that looks like
bad behavior from the rxe side, but that doesn't prove anything. I also saw
these hangs on my system before the WQ patch went in, if my memory serves. Our
main application for this driver at HPE is Lustre, which is a little different
from SRP but uses the same general approach with fast MRs. Currently we are
finding the driver to be quite stable even under very heavy stress.

I would be happy to collaborate with someone (you?) who knows the SRP side well to resolve
this hang. I think that is the quickest way to fix this. I have no idea what SRP is waiting for.

Best regards,

Bob 


* Re: [bug report] blktests srp/002 hang
  2023-08-23 16:19       ` Bob Pearson
@ 2023-08-23 19:46         ` Bart Van Assche
  2023-08-24 16:24           ` Bob Pearson
  2023-08-24  8:55         ` Bernard Metzler
  2023-08-24 15:35         ` Bernard Metzler
  2 siblings, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-08-23 19:46 UTC (permalink / raw)
  To: Bob Pearson, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 8/23/23 09:19, Bob Pearson wrote:
> I have also seen the same hangs in siw. Not as frequently but the same symptoms.
> About every month or so I take another run at trying to find and fix this bug but
> I have not succeeded yet. I haven't seen anything that looks like bad behavior from
> the rxe side but that doesn't prove anything. I also saw these hangs on my system
> before the WQ patch went in if my memory serves. Our main application for this
> driver at HPE is Lustre which is a little different than SRP but uses the same
> general approach with fast MRs. Currently we are finding the driver to be quite stable
> even under very heavy stress.
> 
> I would be happy to collaborate with someone (you?) who knows the SRP side well to resolve
> this hang. I think that is the quickest way to fix this. I have no idea what SRP is waiting for.

Hi Bob,

I cannot reproduce these issues. All SRP tests work reliably on my test setup
on top of the v6.5-rc7 kernel, whether I use the siw driver or the rdma_rxe
driver. Additionally, I do not see any SRP abort messages.

# uname -a
Linux opensuse-vm 6.5.0-rc7 #28 SMP PREEMPT_DYNAMIC Wed Aug 23 10:42:35 PDT 2023 x86_64 x86_64 x86_64 GNU/Linux
# journalctl --since=today | grep 'SRP abort' | wc
       0       0       0
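Since the reporter hits the hang only after 15 to 30 repetitions, a reproduction attempt
needs to loop the test case. A sketch, assuming a blktests checkout (the `run_case`
indirection is hypothetical, only there to keep the loop logic testable without a test rig):

```shell
#!/bin/bash
# Re-run one blktests case until it fails or a run limit is reached.
# run_case wraps the real invocation ("./check srp/002" in a blktests
# checkout) so the loop logic can be exercised on its own.
run_case() {
	./check "$1"
}

repeat_case() {
	local case_name=$1 max=$2 i
	for ((i = 1; i <= max; i++)); do
		echo "run $i/$max: $case_name"
		run_case "$case_name" || {
			echo "failed on run $i" >&2
			return 1
		}
	done
}
```

On a real setup one would call e.g. `repeat_case srp/002 30` from the blktests directory.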

A note on the test environment: I run my kernel tests in an openSUSE Tumbleweed
VM. If you are using a Debian-based Linux distro, it may include a buggy version
of multipathd. The last time I ran the SRP tests in a Debian VM I had to build
multipathd from source - the SRP tests did not work with the Debian version of
multipathd. The shell script that I use to build and install multipathd is as
follows (it must be run in the multipath-tools source directory):

#!/bin/bash
# Install the build dependencies for multipath-tools, then build and
# install it. Run this in the multipath-tools source directory.

scriptdir="$(dirname "$0")"

if type -p zypper >/dev/null 2>&1; then
     # openSUSE: install the development packages with zypper.
     rpms=(device-mapper-devel libaio-devel libjson-c-devel librados-devel
	  liburcu-devel readline-devel systemd-devel)
     for p in "${rpms[@]}"; do
	sudo zypper install -y "$p"
     done
elif type -p apt-get >/dev/null 2>&1; then
     # Debian/Ubuntu: install libraries under /lib.
     export LIB=/lib
     sudo apt-get install -y libaio-dev libdevmapper-dev libjson-c-dev librados-dev \
	    libreadline-dev libsystemd-dev liburcu-dev
fi

# Discard leftovers from previous builds, then build and install.
git clean -f
make -s "$@"
sudo make -s "$@" install

Bart.


* RE: Re: [bug report] blktests srp/002 hang
  2023-08-23 16:19       ` Bob Pearson
  2023-08-23 19:46         ` Bart Van Assche
@ 2023-08-24  8:55         ` Bernard Metzler
  2023-08-24 15:35         ` Bernard Metzler
  2 siblings, 0 replies; 87+ messages in thread
From: Bernard Metzler @ 2023-08-24  8:55 UTC (permalink / raw)
  To: Bob Pearson, Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi


> -----Original Message-----
> From: Bob Pearson <rpearsonhpe@gmail.com>
> Sent: Wednesday, 23 August 2023 18:19
> To: Bart Van Assche <bvanassche@acm.org>; Shinichiro Kawasaki
> <shinichiro.kawasaki@wdc.com>
> Cc: linux-rdma@vger.kernel.org; linux-scsi@vger.kernel.org
> Subject: [EXTERNAL] Re: [bug report] blktests srp/002 hang
> 
> On 8/22/23 10:20, Bart Van Assche wrote:
> > On 8/22/23 03:18, Shinichiro Kawasaki wrote:
> >> CC+: Bart,
> >>
> >> On Aug 21, 2023 / 20:46, Bob Pearson wrote:
> >> [...]
> >>> Shinichiro,
> >>
> >> Hello Bob, thanks for the response.
> >>
> >>>
> >>> I have been aware for a long time that there is a problem with
> blktests/srp. I see hangs in
> >>> 002 and 011 fairly often.
> >>
> >> I repeated the test case srp/011, and observed it hangs. This hang at
> srp/011
> >> also can be recreated in stable manner. I reverted the commit
> 9b4b7c1f9f54
> >> then observed the srp/011 hang disappeared. So, I guess these two hangs
> have
> >> same root cause.
> >>
> >>> I have not been able to figure out the root cause but suspect that
> >>> there is a timing issue in the srp drivers which cannot handle the
> slowness of the software
> >>> RoCE implementation. If you can give me any clues about what you are
> seeing I am happy to help
> >>> try to figure this out.
> >>
> >> Thanks for sharing your thoughts. I myself do not have srp driver
> knowledge, and
> >> not sure what clue I should provide. If you have any idea of the action
> I can
> >> take, please let me know.
> >
> > Hi Shinichiro and Bob,
> >
> > When I initially developed the SRP tests these were working reliably in
> > combination with the rdma_rxe driver. Since 2017 I frequently see issues
> when
> > running the SRP tests on top of the rdma_rxe driver, issues that I do not
> see
> > if I run the SRP tests on top of the soft-iWARP driver (siw). How about
> > changing the default for the SRP tests from rdma_rxe to siw and to let
> the
> > RDMA community resolve the rdma_rxe issues?
> >
> > Thanks,
> >
> > Bart.
> >
> 
> Bart,
> 
> I have also seen the same hangs in siw. Not as frequently but the same
> symptoms.

I did not hear about that one from the siw side, but I will try to make up
some time to reproduce it and fix siw if needed. I'll let you know if I find
something, Bob.

Bernard.

> About every month or so I take another run at trying to find and fix this
> bug but
> I have not succeeded yet. I haven't seen anything that looks like bad
> behavior from
> the rxe side but that doesn't prove anything. I also saw these hangs on my
> system
> before the WQ patch went in if my memory serves. Our main application for
> this
> driver at HPE is Lustre which is a little different than SRP but uses the
> same
> general approach with fast MRs. Currently we are finding the driver to be
> quite stable
> even under very heavy stress.
> 
> I would be happy to collaborate with someone (you?) who knows the SRP side
> well to resolve
> this hang. I think that is the quickest way to fix this. I have no idea
> what SRP is waiting for.
> 
> Best regards,
> 
> Bob


* RE: Re: [bug report] blktests srp/002 hang
  2023-08-23 16:19       ` Bob Pearson
  2023-08-23 19:46         ` Bart Van Assche
  2023-08-24  8:55         ` Bernard Metzler
@ 2023-08-24 15:35         ` Bernard Metzler
  2023-08-24 16:05           ` Bart Van Assche
  2 siblings, 1 reply; 87+ messages in thread
From: Bernard Metzler @ 2023-08-24 15:35 UTC (permalink / raw)
  To: Bob Pearson, Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi



> -----Original Message-----
> From: Bob Pearson <rpearsonhpe@gmail.com>
> Sent: Wednesday, 23 August 2023 18:19
> To: Bart Van Assche <bvanassche@acm.org>; Shinichiro Kawasaki
> <shinichiro.kawasaki@wdc.com>
> Cc: linux-rdma@vger.kernel.org; linux-scsi@vger.kernel.org
> Subject: [EXTERNAL] Re: [bug report] blktests srp/002 hang
> 
> On 8/22/23 10:20, Bart Van Assche wrote:
> > On 8/22/23 03:18, Shinichiro Kawasaki wrote:
> >> CC+: Bart,
> >>
> >> On Aug 21, 2023 / 20:46, Bob Pearson wrote:
> >> [...]
> >>> Shinichiro,
> >>
> >> Hello Bob, thanks for the response.
> >>
> >>>
> >>> I have been aware for a long time that there is a problem with
> blktests/srp. I see hangs in
> >>> 002 and 011 fairly often.
> >>
> >> I repeated the test case srp/011, and observed it hangs. This hang at
> srp/011
> >> also can be recreated in stable manner. I reverted the commit
> 9b4b7c1f9f54
> >> then observed the srp/011 hang disappeared. So, I guess these two hangs
> have
> >> same root cause.
> >>
> >>> I have not been able to figure out the root cause but suspect that
> >>> there is a timing issue in the srp drivers which cannot handle the
> slowness of the software
> >>> RoCE implementation. If you can give me any clues about what you are
> seeing I am happy to help
> >>> try to figure this out.
> >>
> >> Thanks for sharing your thoughts. I myself do not have srp driver
> knowledge, and
> >> not sure what clue I should provide. If you have any idea of the action
> I can
> >> take, please let me know.
> >
> > Hi Shinichiro and Bob,
> >
> > When I initially developed the SRP tests these were working reliably in
> > combination with the rdma_rxe driver. Since 2017 I frequently see issues
> when
> > running the SRP tests on top of the rdma_rxe driver, issues that I do not
> see
> > if I run the SRP tests on top of the soft-iWARP driver (siw). How about
> > changing the default for the SRP tests from rdma_rxe to siw and to let
> the
> > RDMA community resolve the rdma_rxe issues?
> >
> > Thanks,
> >
> > Bart.
> >
> 
> Bart,
> 
> I have also seen the same hangs in siw. Not as frequently but the same
> symptoms. About every month or so I take another run at trying to find
> and fix this bug but I have not succeeded yet. I haven't seen anything
> that looks like bad behavior from the rxe side but that doesn't prove
> anything. I also saw these hangs on my system before the WQ patch went
> in if my memory serves. Our main application for this driver at HPE is
> Lustre which is a little different from SRP but uses the same general
> approach with fast MRs. Currently we are finding the driver to be quite
> stable even under very heavy stress.
> 
> I would be happy to collaborate with someone (you?) who knows the SRP
> side well to resolve this hang. I think that is the quickest way to fix
> this. I have no idea what SRP is waiting for.
> 
> Best regards,
> 
> Bob

Hi Bart,
I spent some time testing the srp/002 blktest with siw, still
trying to get it hanging.
Looking closer into the logs: While most of the time RDMA CM
connection setup works, I also see some connection rejects being
created by the passive ULP side during setup:

[16848.757937] scsi host11: ib_srp: REJ received
[16848.757939] scsi host11:   REJ reason 0xffffff98 

This does not affect the overall success of the current test
run, other connect attempts succeed etc. Is that connection
rejection intended behavior of the test?

Thanks!
Bernard.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-08-24 15:35         ` Bernard Metzler
@ 2023-08-24 16:05           ` Bart Van Assche
  2023-08-24 16:27             ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-08-24 16:05 UTC (permalink / raw)
  To: Bernard Metzler, Bob Pearson, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 8/24/23 08:35, Bernard Metzler wrote:
> I spent some time testing the srp/002 blktest with siw, still
> trying to get it hanging.
> Looking closer into the logs: While most of the time RDMA CM
> connection setup works, I also see some connection rejects being
> created by the passive ULP side during setup:
> 
> [16848.757937] scsi host11: ib_srp: REJ received
> [16848.757939] scsi host11:   REJ reason 0xffffff98
> 
> This does not affect the overall success of the current test
> run, other connect attempts succeed etc. Is that connection
> rejection intended behavior of the test?

Hi Bernard,

In the logs I see that the SRP initiator (ib_srp) may try to log in before
the SRP target driver (ib_srpt) has finished associating with the configured
RDMA ports. I think this is why REJ messages appear in the logs. The retry
loop in the test script should be sufficient to deal with this.
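
The retry can be sketched as follows. This is a hypothetical, self-contained
illustration, not the actual blktests code: srp_login() is a stand-in that
simulates an initiator being rejected (REJ) twice before the target has
finished associating with its RDMA ports, where the real test would write to
the SRP initiator's add_target interface.

```shell
#!/bin/bash
# Sketch of a login retry loop: keep retrying until the login succeeds
# or a deadline passes. srp_login() is a simulated stand-in so this
# sketch runs stand-alone.

attempts=0
srp_login() {
	attempts=$((attempts + 1))
	# Fail (as if a REJ was received) on the first two attempts.
	[ "$attempts" -ge 3 ]
}

retry_login() {
	local deadline=$((SECONDS + 30))
	while [ "$SECONDS" -lt "$deadline" ]; do
		srp_login && return 0
		sleep 0.1
	done
	return 1
}

retry_login && echo "logged in after $attempts attempt(s)"
```

With such a loop, a few early rejections only delay the login instead of
failing the test.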

Bart.


* Re: [bug report] blktests srp/002 hang
  2023-08-23 19:46         ` Bart Van Assche
@ 2023-08-24 16:24           ` Bob Pearson
  0 siblings, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-08-24 16:24 UTC (permalink / raw)
  To: Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 8/23/23 14:46, Bart Van Assche wrote:
> On 8/23/23 09:19, Bob Pearson wrote:
>> I have also seen the same hangs in siw. Not as frequently but the same symptoms.
>> About every month or so I take another run at trying to find and fix this bug but
>> I have not succeeded yet. I haven't seen anything that looks like bad behavior from
>> the rxe side but that doesn't prove anything. I also saw these hangs on my system
>> before the WQ patch went in if my memory serves. Our main application for this
>> driver at HPE is Lustre which is a little different from SRP but uses the same
>> general approach with fast MRs. Currently we are finding the driver to be quite stable
>> even under very heavy stress.
>>
>> I would be happy to collaborate with someone (you?) who knows the SRP side well to resolve
>> this hang. I think that is the quickest way to fix this. I have no idea what SRP is waiting for.
> 
> Hi Bob,
> 
> I cannot reproduce these issues. All SRP tests work reliably on my test setup on
> top of the v6.5-rc7 kernel, whether I use the siw driver or whether I use the
> rdma_rxe driver. Additionally, I do not see any SRP abort messages.

Thank you for this. This is good news.
> 
> # uname -a
> Linux opensuse-vm 6.5.0-rc7 #28 SMP PREEMPT_DYNAMIC Wed Aug 23 10:42:35 PDT 2023 x86_64 x86_64 x86_64 GNU/Linux
> # journalctl --since=today | grep 'SRP abort' | wc
>       0       0       0
> 
> I installed openSUSE Tumbleweed in the VM in which I run kernel tests. If
> you are using a Linux distro that is based on Debian, it may include a buggy
> version of multipathd. The last time I ran the SRP tests in a Debian VM I had
> to build multipathd from source - the SRP tests did not work with the Debian
> version of multipathd. The shell script that I use to build and install
> multipathd is as follows (it must be run in the multipath-tools source
> directory):

I run on Ubuntu, which is Debian based, so perhaps that is the root of the
problems I have been seeing.

I'll try to follow your lead here.

Bob
> 
> #!/bin/bash
> 
> scriptdir="$(dirname "$0")"
> 
> if type -p zypper >/dev/null 2>&1; then
>     rpms=(device-mapper-devel libaio-devel libjson-c-devel librados-devel
>       liburcu-devel readline-devel systemd-devel)
>     for p in "${rpms[@]}"; do
>     sudo zypper install -y "$p"
>     done
> elif type -p apt-get >/dev/null 2>&1; then
>     export LIB=/lib
>     sudo apt-get install -y libaio-dev libdevmapper-dev libjson-c-dev librados-dev \
>         libreadline-dev libsystemd-dev liburcu-dev
> fi
> 
> git clean -f
> make -s "$@"
> sudo make -s "$@" install
> 
> Bart.



* Re: [bug report] blktests srp/002 hang
  2023-08-24 16:05           ` Bart Van Assche
@ 2023-08-24 16:27             ` Bob Pearson
  0 siblings, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-08-24 16:27 UTC (permalink / raw)
  To: Bart Van Assche, Bernard Metzler, Shinichiro Kawasaki
  Cc: linux-rdma, linux-scsi

On 8/24/23 11:05, Bart Van Assche wrote:
> On 8/24/23 08:35, Bernard Metzler wrote:
>> I spent some time testing the srp/002 blktest with siw, still
>> trying to get it hanging.
>> Looking closer into the logs: While most of the time RDMA CM
>> connection setup works, I also see some connection rejects being
>> created by the passive ULP side during setup:
>>
>> [16848.757937] scsi host11: ib_srp: REJ received
>> [16848.757939] scsi host11:   REJ reason 0xffffff98
>>
>> This does not affect the overall success of the current test
>> run, other connect attempts succeed etc. Is that connection
>> rejection intended behavior of the test?
> 
> Hi Bernard,
> 
> In the logs I see that the SRP initiator (ib_srp) may try to log in before
> the SRP target driver (ib_srpt) has finished associating with the configured
> RDMA ports. I think this is why REJ messages appear in the logs. The retry
> loop in the test script should be sufficient to deal with this.
> 
> Bart.

Thanks to both of you for taking the time to look at this.

Bob


* Re: [bug report] blktests srp/002 hang
  2023-08-22 15:20     ` Bart Van Assche
  2023-08-23 16:19       ` Bob Pearson
@ 2023-08-25  1:11       ` Shinichiro Kawasaki
  2023-08-25  1:36         ` Bob Pearson
  2023-08-25 13:52         ` Bart Van Assche
  1 sibling, 2 replies; 87+ messages in thread
From: Shinichiro Kawasaki @ 2023-08-25  1:11 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Bob Pearson, linux-rdma, linux-scsi

On Aug 22, 2023 / 08:20, Bart Van Assche wrote:
> On 8/22/23 03:18, Shinichiro Kawasaki wrote:
> > CC+: Bart,
> > 
> > On Aug 21, 2023 / 20:46, Bob Pearson wrote:
> > [...]
> > > Shinichiro,
> > 
> > Hello Bob, thanks for the response.
> > 
> > > 
> > > I have been aware for a long time that there is a problem with blktests/srp. I see hangs in
> > > 002 and 011 fairly often.
> > 
> > I repeated the test case srp/011, and observed it hangs. This hang at srp/011
> > also can be recreated in stable manner. I reverted the commit 9b4b7c1f9f54
> > then observed the srp/011 hang disappeared. So, I guess these two hangs have
> > the same root cause.
> > 
> > > I have not been able to figure out the root cause but suspect that
> > > there is a timing issue in the srp drivers which cannot handle the slowness of the software
> > > RoCE implementation. If you can give me any clues about what you are seeing I am happy to help
> > > try to figure this out.
> > 
> > Thanks for sharing your thoughts. I myself do not have srp driver knowledge, and
> > not sure what clue I should provide. If you have any idea of the action I can
> > take, please let me know.
> 
> Hi Shinichiro and Bob,
> 
> When I initially developed the SRP tests these were working reliably in
> combination with the rdma_rxe driver. Since 2017 I frequently see issues when
> running the SRP tests on top of the rdma_rxe driver, issues that I do not see
> if I run the SRP tests on top of the soft-iWARP driver (siw). How about
> changing the default for the SRP tests from rdma_rxe to siw and to let the
> RDMA community resolve the rdma_rxe issues?

If it takes time to resolve the issues, it sounds like a good idea to make the
siw driver the default, since it will make the hangs less painful for blktests
users. Another idea to reduce the pain is to improve srp/002 and srp/011 to
detect the hangs and report them as failures.

Having said that, some discussion toward a resolution has started on this thread
(thanks!). I will wait for a while to see how long a solution takes, and whether
the actions on the blktests side are valuable or not.


* Re: [bug report] blktests srp/002 hang
  2023-08-25  1:11       ` Shinichiro Kawasaki
@ 2023-08-25  1:36         ` Bob Pearson
  2023-08-25 10:16           ` Shinichiro Kawasaki
  2023-08-25 13:49           ` Bart Van Assche
  2023-08-25 13:52         ` Bart Van Assche
  1 sibling, 2 replies; 87+ messages in thread
From: Bob Pearson @ 2023-08-25  1:36 UTC (permalink / raw)
  To: Shinichiro Kawasaki, Bart Van Assche; +Cc: linux-rdma, linux-scsi

On 8/24/23 20:11, Shinichiro Kawasaki wrote:
> On Aug 22, 2023 / 08:20, Bart Van Assche wrote:
>> On 8/22/23 03:18, Shinichiro Kawasaki wrote:
>>> CC+: Bart,
>>>
>>> On Aug 21, 2023 / 20:46, Bob Pearson wrote:
>>> [...]
>>>> Shinichiro,
>>>
>>> Hello Bob, thanks for the response.
>>>
>>>>
>>>> I have been aware for a long time that there is a problem with blktests/srp. I see hangs in
>>>> 002 and 011 fairly often.
>>>
>>> I repeated the test case srp/011, and observed it hangs. This hang at srp/011
>>> also can be recreated in stable manner. I reverted the commit 9b4b7c1f9f54
>>> then observed the srp/011 hang disappeared. So, I guess these two hangs have
> >> the same root cause.
>>>
>>>> I have not been able to figure out the root cause but suspect that
>>>> there is a timing issue in the srp drivers which cannot handle the slowness of the software
>>>> RoCE implemtation. If you can give me any clues about what you are seeing I am happy to help
>>>> try to figure this out.
>>>
>>> Thanks for sharing your thoughts. I myself do not have srp driver knowledge, and
>>> not sure what clue I should provide. If you have any idea of the action I can
>>> take, please let me know.
>>
>> Hi Shinichiro and Bob,
>>
>> When I initially developed the SRP tests these were working reliably in
>> combination with the rdma_rxe driver. Since 2017 I frequently see issues when
>> running the SRP tests on top of the rdma_rxe driver, issues that I do not see
>> if I run the SRP tests on top of the soft-iWARP driver (siw). How about
>> changing the default for the SRP tests from rdma_rxe to siw and to let the
>> RDMA community resolve the rdma_rxe issues?
> 
> If it takes time to resolve the issues, it sounds like a good idea to make the siw driver
> default, since it will make the hangs less painful for blktests users. Another
> idea to reduce the pain is to improve srp/002 and srp/011 to detect the hangs
> and report them as failures.
> 
> Having said that, some discussion started on this thread for resolution
> (thanks!) I would wait for a while and see how long it will take for solution,
> and if the actions on blktests side are valuable or not.

Did you see Bart's comment about srp not working with older versions of multipathd?
He is currently not seeing any hangs at all.

Bob


* Re: [bug report] blktests srp/002 hang
  2023-08-25  1:36         ` Bob Pearson
@ 2023-08-25 10:16           ` Shinichiro Kawasaki
  2023-08-25 13:49           ` Bart Van Assche
  1 sibling, 0 replies; 87+ messages in thread
From: Shinichiro Kawasaki @ 2023-08-25 10:16 UTC (permalink / raw)
  To: Bob Pearson; +Cc: Bart Van Assche, linux-rdma, linux-scsi

On Aug 24, 2023 / 20:36, Bob Pearson wrote:
> On 8/24/23 20:11, Shinichiro Kawasaki wrote:
> > On Aug 22, 2023 / 08:20, Bart Van Assche wrote:
> >> On 8/22/23 03:18, Shinichiro Kawasaki wrote:
> >>> CC+: Bart,
> >>>
> >>> On Aug 21, 2023 / 20:46, Bob Pearson wrote:
> >>> [...]
> >>>> Shinichiro,
> >>>
> >>> Hello Bob, thanks for the response.
> >>>
> >>>>
> >>>> I have been aware for a long time that there is a problem with blktests/srp. I see hangs in
> >>>> 002 and 011 fairly often.
> >>>
> >>> I repeated the test case srp/011, and observed it hangs. This hang at srp/011
> >>> also can be recreated in stable manner. I reverted the commit 9b4b7c1f9f54
> >>> then observed the srp/011 hang disappeared. So, I guess these two hangs have
> >>> the same root cause.
> >>>
> >>>> I have not been able to figure out the root cause but suspect that
> >>>> there is a timing issue in the srp drivers which cannot handle the slowness of the software
> >>> RoCE implementation. If you can give me any clues about what you are seeing I am happy to help
> >>>> try to figure this out.
> >>>
> >>> Thanks for sharing your thoughts. I myself do not have srp driver knowledge, and
> >>> not sure what clue I should provide. If you have any idea of the action I can
> >>> take, please let me know.
> >>
> >> Hi Shinichiro and Bob,
> >>
> >> When I initially developed the SRP tests these were working reliably in
> >> combination with the rdma_rxe driver. Since 2017 I frequently see issues when
> >> running the SRP tests on top of the rdma_rxe driver, issues that I do not see
> >> if I run the SRP tests on top of the soft-iWARP driver (siw). How about
> >> changing the default for the SRP tests from rdma_rxe to siw and to let the
> >> RDMA community resolve the rdma_rxe issues?
> > 
> > If it takes time to resolve the issues, it sounds like a good idea to make the siw driver
> > default, since it will make the hangs less painful for blktests users. Another
> > idea to reduce the pain is to improve srp/002 and srp/011 to detect the hangs
> > and report them as failures.
> > 
> > Having said that, some discussion started on this thread for resolution
> > (thanks!) I would wait for a while and see how long it will take for solution,
> > and if the actions on blktests side are valuable or not.
> 
> Did you see Bart's comment about srp not working with older versions of multipathd?
> He is currently not seeing any hangs at all.

Yes, I saw it. My test system is Fedora 38 with the device-mapper-multipath
package, version 0.9.4. I compiled and installed the latest multipath-tools but
still see the hangs. I'm not sure why the hang is observed on my test system but
not on Bart's.


* Re: [bug report] blktests srp/002 hang
  2023-08-25  1:36         ` Bob Pearson
  2023-08-25 10:16           ` Shinichiro Kawasaki
@ 2023-08-25 13:49           ` Bart Van Assche
  1 sibling, 0 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-08-25 13:49 UTC (permalink / raw)
  To: Bob Pearson, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 8/24/23 18:36, Bob Pearson wrote:
> Did you see Bart's comment about srp not working with older versions of multipathd?
> He is currently not seeing any hangs at all.

Hi Bob,

It seems like my comment was not clear enough. The SRP tests are compatible
with all upstream versions of multipathd, including those from ten years ago.
While testing on Debian, one year ago I noticed that the only way to make
the SRP tests pass was to replace the Debian version of multipathd with an
upstream version. I'm not sure of this but my guess is that I encountered a
Debian version of multipathd with a bug introduced by the Debian maintainers.

Bart.



* Re: [bug report] blktests srp/002 hang
  2023-08-25  1:11       ` Shinichiro Kawasaki
  2023-08-25  1:36         ` Bob Pearson
@ 2023-08-25 13:52         ` Bart Van Assche
  2023-09-13 17:36           ` Bob Pearson
  1 sibling, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-08-25 13:52 UTC (permalink / raw)
  To: Shinichiro Kawasaki; +Cc: Bob Pearson, linux-rdma, linux-scsi

On 8/24/23 18:11, Shinichiro Kawasaki wrote:
> If it takes time to resolve the issues, it sounds like a good idea to make the siw driver
> default, since it will make the hangs less painful for blktests users. Another
> idea to reduce the pain is to improve srp/002 and srp/011 to detect the hangs
> and report them as failures.

At this moment we don't know whether the hangs can be converted into failures.
Answering this question is only possible after we have found the root cause of
the hang. If the hang is caused by commands getting stuck in multipathd
then it can be solved by changing the path configuration (see also the dmsetup
message commands in blktests). If the hang is caused by a kernel bug then it's
very well possible that there is no way to recover other than by rebooting the
system on which the tests are run.
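
Converting a hang into a failure would essentially mean putting a deadline on
the hanging step. A hypothetical sketch with timeout(1) follows - this is not
blktests code, and it only helps when the stuck process can actually be
killed; a task stuck in uninterruptible sleep in the kernel cannot be, which
is exactly what is in doubt here.

```shell
#!/bin/bash
# Hypothetical sketch: run a potentially hanging test step under a
# deadline so a hang is reported as a failure instead of blocking the
# test run forever.

run_step_with_deadline() {
	local deadline_secs=$1; shift
	timeout "$deadline_secs" "$@"
	local rc=$?
	if [ "$rc" -eq 124 ]; then	# timeout(1) exits 124 on expiry
		echo "step timed out after ${deadline_secs}s: $*" >&2
	fi
	return "$rc"
}

# Demo: a step that "hangs" for 5 seconds against a 1-second deadline.
run_step_with_deadline 1 sleep 5 || echo "hang detected, reported as failure"
```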

Thanks,

Bart.


* Re: [bug report] blktests srp/002 hang
  2023-08-25 13:52         ` Bart Van Assche
@ 2023-09-13 17:36           ` Bob Pearson
  2023-09-13 23:38             ` Zhu Yanjun
  0 siblings, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-09-13 17:36 UTC (permalink / raw)
  To: Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 8/25/23 08:52, Bart Van Assche wrote:
> On 8/24/23 18:11, Shinichiro Kawasaki wrote:
>> If it takes time to resolve the issues, it sounds like a good idea to make the siw driver
>> default, since it will make the hangs less painful for blktests users. Another
>> idea to reduce the pain is to improve srp/002 and srp/011 to detect the hangs
>> and report them as failures.
> 
> At this moment we don't know whether the hangs can be converted into failures.
> Answering this question is only possible after we have found the root cause of
> the hang. If the hang is caused by commands getting stuck in multipathd
> then it can be solved by changing the path configuration (see also the dmsetup
> message commands in blktests). If the hang is caused by a kernel bug then it's
> very well possible that there is no way to recover other than by rebooting the
> system on which the tests are run.
> 
> Thanks,
> 
> Bart.

Since 6.6.0-rc1 came out I decided to give blktests srp another try with the current
rdma for-next branch on my Ubuntu (debian) system. For the first time in a very long
time all the srp test cases run correctly multiple times. I ran each one 3X.

I had tried to build multipath-tools from source but ran into problems so I reinstalled
the current Ubuntu packages. I have no idea what was the root cause that finally went
away but I don't think it was in rxe as there aren't any recent patches related to the
blktests failures. I did notice that the dmesg traces picked up a couple of lines after
the place where it used to hang. Something about setting an ALUA timeout to 60 seconds.

Thanks to all who worked on this.

Bob Pearson


* Re: [bug report] blktests srp/002 hang
  2023-09-13 17:36           ` Bob Pearson
@ 2023-09-13 23:38             ` Zhu Yanjun
  2023-09-16  5:59               ` Zhu Yanjun
  0 siblings, 1 reply; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-13 23:38 UTC (permalink / raw)
  To: Bob Pearson, Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 2023/9/14 1:36, Bob Pearson wrote:
> On 8/25/23 08:52, Bart Van Assche wrote:
>> On 8/24/23 18:11, Shinichiro Kawasaki wrote:
>>> If it takes time to resolve the issues, it sounds like a good idea to make the siw driver
>>> default, since it will make the hangs less painful for blktests users. Another
>>> idea to reduce the pain is to improve srp/002 and srp/011 to detect the hangs
>>> and report them as failures.
>>
>> At this moment we don't know whether the hangs can be converted into failures.
>> Answering this question is only possible after we have found the root cause of
>> the hang. If the hang is caused by commands getting stuck in 
>> then it can be solved by changing the path configuration (see also the dmsetup
>> message commands in blktests). If the hang is caused by a kernel bug then it's
>> very well possible that there is no way to recover other than by rebooting the
>> system on which the tests are run.
>>
>> Thanks,
>>
>> Bart.
> 
> Since 6.6.0-rc1 came out I decided to give blktests srp another try with the current
> rdma for-next branch on my Ubuntu (debian) system. For the first time in a very long
> time all the srp test cases run correctly multiple times. I ran each one 3X.
> 
> I had tried to build multipath-tools from source but ran into problems so I reinstalled
> the current Ubuntu packages. I have no idea what was the root cause that finally went
> away but I don't think it was in rxe as there aren't any recent patches related to the
> blktests failures. I did notice that the dmesg traces picked up a couple of lines after
> the place where it used to hang. Something about setting an ALUA timeout to 60 seconds.
> 
> Thanks to all who worked on this.

Hi, Bob

About this problem, IIRC, this problem easily occurred on Debian and 
Fedora 38 and with the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue 
support for rxe tasks").

And on Debian, with the latest multipathd, this problem seems to disappear.

On Fedora 38, even with the latest multipathd, this problem still can be 
observed.

On Ubuntu, it is difficult to reproduce this problem.

Perhaps this is why you cannot reproduce this problem on Ubuntu.

It seems that this problem is related to the Linux distribution and the
version of multipathd.

If I am missing something, please feel free to let me know.

Zhu Yanjun

> 
> Bob Pearson



* Re: [bug report] blktests srp/002 hang
  2023-09-13 23:38             ` Zhu Yanjun
@ 2023-09-16  5:59               ` Zhu Yanjun
  2023-09-19  4:14                 ` Shinichiro Kawasaki
  0 siblings, 1 reply; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-16  5:59 UTC (permalink / raw)
  To: Bob Pearson, Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

[-- Attachment #1: Type: text/plain, Size: 3005 bytes --]



On 2023/9/14 7:38, Zhu Yanjun wrote:
> On 2023/9/14 1:36, Bob Pearson wrote:
>> On 8/25/23 08:52, Bart Van Assche wrote:
>>> On 8/24/23 18:11, Shinichiro Kawasaki wrote:
>>>> If it takes time to resolve the issues, it sounds like a good idea to 
>>>> make siw driver
>>>> default, since it will make the hangs less painful for blktests 
>>>> users. Another
>>>> idea to reduce the pain is to improve srp/002 and srp/011 to detect 
>>>> the hangs
>>>> and report them as failures.
>>>
>>> At this moment we don't know whether the hangs can be converted into 
>>> failures.
>>> Answering this question is only possible after we have found the root 
>>> cause of
>>> the hang. If the hang is caused by commands getting stuck in 
>>> multipathd
>>> then it can be solved by changing the path configuration (see also 
>>> the dmsetup
>>> message commands in blktests). If the hang is caused by a kernel bug 
>>> then it's
>>> very well possible that there is no way to recover other than by 
>>> rebooting the
>>> system on which the tests are run.
>>>
>>> Thanks,
>>>
>>> Bart.
>>
>> Since 6.6.0-rc1 came out I decided to give blktests srp another try 
>> with the current
>> rdma for-next branch on my Ubuntu (debian) system. For the first time 
>> in a very long
>> time all the srp test cases run correctly multiple times. I ran each 
>> one 3X.
>>
>> I had tried to build multipath-tools from source but ran into problems 
>> so I reinstalled
>> the current Ubuntu packages. I have no idea what was the root cause 
>> that finally went
>> away but I don't think it was in rxe as there aren't any recent 
>> patches related to the
>> blktests failures. I did notice that the dmesg traces picked up a 
>> couple of lines after
>> the place where it used to hang. Something about setting an ALUA 
>> timeout to 60 seconds.
>>
>> Thanks to all who worked on this.
> 
> Hi, Bob
> 
> About this problem, IIRC, this problem easily occurred on Debian and 
> Fedora 38 and with the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue 
> support for rxe tasks").
> 
> And on Debian, with the latest multipathd, this problem seems to disappear.
> 
> On Fedora 38, even with the latest multipathd, this problem still can be 
> observed.

On Debian, this problem disappears either with the latest multipathd or
with the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe
tasks") reverted.

On Fedora 38, if the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
support for rxe tasks") is reverted, does this problem still appear?
I do not have such a test environment. The revert patch is in the
attachment; could anyone run a test? Please let us know the result. Thanks.
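
For anyone running that test: the reproduction recipe from the start of this
thread is to repeat srp/002 until it hangs, reportedly within 15 to 30
iterations. A hypothetical wrapper along these lines could drive it. This is
not part of blktests; fake_check only simulates a run that fails on its
fourth iteration so the sketch is self-contained, while real use would be
something like `repeat_until_failure 30 ./check srp/002`.

```shell
#!/bin/bash
# Hypothetical helper: repeat a test command until it fails or the
# iteration budget runs out. fake_check simulates a test that fails on
# its fourth run; in real use pass the actual blktests invocation.

repeat_until_failure() {
	local max=$1; shift
	local i
	for ((i = 1; i <= max; i++)); do
		if ! "$@"; then
			echo "failure reproduced on iteration $i"
			return 1
		fi
	done
	echo "no failure in $max iterations"
	return 0
}

count=0
fake_check() { count=$((count + 1)); [ "$count" -lt 4 ]; }

# Real use would be: repeat_until_failure 30 ./check srp/002
if repeat_until_failure 30 fake_check; then
	echo "clean run"
else
	echo "hit a failure"
fi
```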

Zhu Yanjun

> 
> On Ubuntu, it is difficult to reproduce this problem.
> 
> Perhaps this is why you can not reproduce this problem on Ubuntu.
> 
> It seems that this problem is related with linux distribution and the 
> version of multipathd.
> 
> If I am missing something, please feel free to let me know.
> 
> Zhu Yanjun
> 
>>
>> Bob Pearson
> 

[-- Attachment #2: 0001-Revert-RDMA-rxe-Add-workqueue-support-for-rxe-tasks.patch --]
[-- Type: text/plain, Size: 9149 bytes --]

From fd2360edbc9171298d2e91fd9b74b4c3022db9d4 Mon Sep 17 00:00:00 2001
From: Zhu Yanjun <yanjun.zhu@linux.dev>
Date: Fri, 15 Sep 2023 23:07:17 -0400
Subject: [PATCH 1/1] Revert "RDMA/rxe: Add workqueue support for rxe tasks"

This reverts commit 9b4b7c1f9f54120940e243251e2b1407767b3381.

Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
---
 drivers/infiniband/sw/rxe/rxe.c      |   9 +--
 drivers/infiniband/sw/rxe/rxe_task.c | 110 ++++++++++++---------------
 drivers/infiniband/sw/rxe/rxe_task.h |   6 +-
 3 files changed, 49 insertions(+), 76 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 54c723a6edda..7a7e713de52d 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -212,15 +212,9 @@ static int __init rxe_module_init(void)
 {
 	int err;
 
-	err = rxe_alloc_wq();
-	if (err)
-		return err;
-
 	err = rxe_net_init();
-	if (err) {
-		rxe_destroy_wq();
+	if (err)
 		return err;
-	}
 
 	rdma_link_register(&rxe_link_ops);
 	pr_info("loaded\n");
@@ -232,7 +226,6 @@ static void __exit rxe_module_exit(void)
 	rdma_link_unregister(&rxe_link_ops);
 	ib_unregister_driver(RDMA_DRIVER_RXE);
 	rxe_net_exit();
-	rxe_destroy_wq();
 
 	pr_info("unloaded\n");
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
index 1501120d4f52..fb9a6bc8e620 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.c
+++ b/drivers/infiniband/sw/rxe/rxe_task.c
@@ -6,24 +6,8 @@
 
 #include "rxe.h"
 
-static struct workqueue_struct *rxe_wq;
-
-int rxe_alloc_wq(void)
-{
-	rxe_wq = alloc_workqueue("rxe_wq", WQ_UNBOUND, WQ_MAX_ACTIVE);
-	if (!rxe_wq)
-		return -ENOMEM;
-
-	return 0;
-}
-
-void rxe_destroy_wq(void)
-{
-	destroy_workqueue(rxe_wq);
-}
-
 /* Check if task is idle i.e. not running, not scheduled in
- * work queue and not draining. If so move to busy to
+ * tasklet queue and not draining. If so move to busy to
  * reserve a slot in do_task() by setting to busy and taking
  * a qp reference to cover the gap from now until the task finishes.
  * state will move out of busy if task returns a non zero value
@@ -37,6 +21,9 @@ static bool __reserve_if_idle(struct rxe_task *task)
 {
 	WARN_ON(rxe_read(task->qp) <= 0);
 
+	if (task->tasklet.state & BIT(TASKLET_STATE_SCHED))
+		return false;
+
 	if (task->state == TASK_STATE_IDLE) {
 		rxe_get(task->qp);
 		task->state = TASK_STATE_BUSY;
@@ -51,7 +38,7 @@ static bool __reserve_if_idle(struct rxe_task *task)
 }
 
 /* check if task is idle or drained and not currently
- * scheduled in the work queue. This routine is
+ * scheduled in the tasklet queue. This routine is
  * called by rxe_cleanup_task or rxe_disable_task to
  * see if the queue is empty.
  * Context: caller should hold task->lock.
@@ -59,7 +46,7 @@ static bool __reserve_if_idle(struct rxe_task *task)
  */
 static bool __is_done(struct rxe_task *task)
 {
-	if (work_pending(&task->work))
+	if (task->tasklet.state & BIT(TASKLET_STATE_SCHED))
 		return false;
 
 	if (task->state == TASK_STATE_IDLE ||
@@ -90,23 +77,23 @@ static bool is_done(struct rxe_task *task)
  * schedules the task. They must call __reserve_if_idle to
  * move the task to busy before calling or scheduling.
  * The task can also be moved to drained or invalid
- * by calls to rxe_cleanup_task or rxe_disable_task.
+ * by calls to rxe-cleanup_task or rxe_disable_task.
  * In that case tasks which get here are not executed but
  * just flushed. The tasks are designed to look to see if
- * there is work to do and then do part of it before returning
+ * there is work to do and do part of it before returning
  * here with a return value of zero until all the work
- * has been consumed then it returns a non-zero value.
+ * has been consumed then it retuens a non-zero value.
  * The number of times the task can be run is limited by
  * max iterations so one task cannot hold the cpu forever.
- * If the limit is hit and work remains the task is rescheduled.
  */
-static void do_task(struct rxe_task *task)
+static void do_task(struct tasklet_struct *t)
 {
+	int cont;
+	int ret;
+	struct rxe_task *task = from_tasklet(task, t, tasklet);
 	unsigned int iterations;
 	unsigned long flags;
 	int resched = 0;
-	int cont;
-	int ret;
 
 	WARN_ON(rxe_read(task->qp) <= 0);
 
@@ -128,22 +115,25 @@ static void do_task(struct rxe_task *task)
 		} while (ret == 0 && iterations-- > 0);
 
 		spin_lock_irqsave(&task->lock, flags);
-		/* we're not done yet but we ran out of iterations.
-		 * yield the cpu and reschedule the task
-		 */
-		if (!ret) {
-			task->state = TASK_STATE_IDLE;
-			resched = 1;
-			goto exit;
-		}
-
 		switch (task->state) {
 		case TASK_STATE_BUSY:
-			task->state = TASK_STATE_IDLE;
+			if (ret) {
+				task->state = TASK_STATE_IDLE;
+			} else {
+				/* This can happen if the client
+				 * can add work faster than the
+				 * tasklet can finish it.
+				 * Reschedule the tasklet and exit
+				 * the loop to give up the cpu
+				 */
+				task->state = TASK_STATE_IDLE;
+				resched = 1;
+			}
 			break;
 
-		/* someone tried to schedule the task while we
-		 * were running, keep going
+		/* someone tried to run the task since the last time we called
+		 * func, so we will call one more time regardless of the
+		 * return value
 		 */
 		case TASK_STATE_ARMED:
 			task->state = TASK_STATE_BUSY;
@@ -151,24 +141,22 @@ static void do_task(struct rxe_task *task)
 			break;
 
 		case TASK_STATE_DRAINING:
-			task->state = TASK_STATE_DRAINED;
+			if (ret)
+				task->state = TASK_STATE_DRAINED;
+			else
+				cont = 1;
 			break;
 
 		default:
 			WARN_ON(1);
-			rxe_dbg_qp(task->qp, "unexpected task state = %d",
-				   task->state);
-			task->state = TASK_STATE_IDLE;
+			rxe_info_qp(task->qp, "unexpected task state = %d", task->state);
 		}
 
-exit:
 		if (!cont) {
 			task->num_done++;
 			if (WARN_ON(task->num_done != task->num_sched))
-				rxe_dbg_qp(
-					task->qp,
-					"%ld tasks scheduled, %ld tasks done",
-					task->num_sched, task->num_done);
+				rxe_err_qp(task->qp, "%ld tasks scheduled, %ld tasks done",
+					   task->num_sched, task->num_done);
 		}
 		spin_unlock_irqrestore(&task->lock, flags);
 	} while (cont);
@@ -181,12 +169,6 @@ static void do_task(struct rxe_task *task)
 	rxe_put(task->qp);
 }
 
-/* wrapper around do_task to fix argument for work queue */
-static void do_work(struct work_struct *work)
-{
-	do_task(container_of(work, struct rxe_task, work));
-}
-
 int rxe_init_task(struct rxe_task *task, struct rxe_qp *qp,
 		  int (*func)(struct rxe_qp *))
 {
@@ -194,9 +176,11 @@ int rxe_init_task(struct rxe_task *task, struct rxe_qp *qp,
 
 	task->qp = qp;
 	task->func = func;
+
+	tasklet_setup(&task->tasklet, do_task);
+
 	task->state = TASK_STATE_IDLE;
 	spin_lock_init(&task->lock);
-	INIT_WORK(&task->work, do_work);
 
 	return 0;
 }
@@ -229,6 +213,8 @@ void rxe_cleanup_task(struct rxe_task *task)
 	while (!is_done(task))
 		cond_resched();
 
+	tasklet_kill(&task->tasklet);
+
 	spin_lock_irqsave(&task->lock, flags);
 	task->state = TASK_STATE_INVALID;
 	spin_unlock_irqrestore(&task->lock, flags);
@@ -240,7 +226,7 @@ void rxe_cleanup_task(struct rxe_task *task)
 void rxe_run_task(struct rxe_task *task)
 {
 	unsigned long flags;
-	bool run;
+	int run;
 
 	WARN_ON(rxe_read(task->qp) <= 0);
 
@@ -249,11 +235,11 @@ void rxe_run_task(struct rxe_task *task)
 	spin_unlock_irqrestore(&task->lock, flags);
 
 	if (run)
-		do_task(task);
+		do_task(&task->tasklet);
 }
 
-/* schedule the task to run later as a work queue entry.
- * the queue_work call can be called holding
+/* schedule the task to run later as a tasklet.
+ * the tasklet)schedule call can be called holding
  * the lock.
  */
 void rxe_sched_task(struct rxe_task *task)
@@ -264,7 +250,7 @@ void rxe_sched_task(struct rxe_task *task)
 
 	spin_lock_irqsave(&task->lock, flags);
 	if (__reserve_if_idle(task))
-		queue_work(rxe_wq, &task->work);
+		tasklet_schedule(&task->tasklet);
 	spin_unlock_irqrestore(&task->lock, flags);
 }
 
@@ -291,9 +277,7 @@ void rxe_disable_task(struct rxe_task *task)
 	while (!is_done(task))
 		cond_resched();
 
-	spin_lock_irqsave(&task->lock, flags);
-	task->state = TASK_STATE_DRAINED;
-	spin_unlock_irqrestore(&task->lock, flags);
+	tasklet_disable(&task->tasklet);
 }
 
 void rxe_enable_task(struct rxe_task *task)
@@ -307,7 +291,7 @@ void rxe_enable_task(struct rxe_task *task)
 		spin_unlock_irqrestore(&task->lock, flags);
 		return;
 	}
-
 	task->state = TASK_STATE_IDLE;
+	tasklet_enable(&task->tasklet);
 	spin_unlock_irqrestore(&task->lock, flags);
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_task.h b/drivers/infiniband/sw/rxe/rxe_task.h
index a63e258b3d66..facb7c8e3729 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.h
+++ b/drivers/infiniband/sw/rxe/rxe_task.h
@@ -22,7 +22,7 @@ enum {
  * called again.
  */
 struct rxe_task {
-	struct work_struct	work;
+	struct tasklet_struct	tasklet;
 	int			state;
 	spinlock_t		lock;
 	struct rxe_qp		*qp;
@@ -32,10 +32,6 @@ struct rxe_task {
 	long			num_done;
 };
 
-int rxe_alloc_wq(void);
-
-void rxe_destroy_wq(void);
-
 /*
  * init rxe_task structure
  *	qp  => parameter to pass to func
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-16  5:59               ` Zhu Yanjun
@ 2023-09-19  4:14                 ` Shinichiro Kawasaki
  2023-09-19  8:07                   ` Zhu Yanjun
  0 siblings, 1 reply; 87+ messages in thread
From: Shinichiro Kawasaki @ 2023-09-19  4:14 UTC (permalink / raw)
  To: Zhu Yanjun; +Cc: Bob Pearson, Bart Van Assche, linux-rdma, linux-scsi

On Sep 16, 2023 / 13:59, Zhu Yanjun wrote:
[...]
> On Debian, with the latest multipathd or revert the commit 9b4b7c1f9f54
> ("RDMA/rxe: Add workqueue support for rxe tasks"), this problem will
> disappear.

Zhu, thank you for the actions.

> On Fedora 38, if the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
> for rxe tasks") is reverted, will this problem still appear?
> I do not have such test environment. The commit is in the attachment,
> can anyone have a test? Please let us know the test result. Thanks.

I tried the latest kernel tag v6.6-rc2 with my Fedora 38 test systems. With the
v6.6-rc2 kernel, I still see the hang. I repeated the blktests test case srp/002
30 times or so, then the hang was recreated. Then I reverted the commit
9b4b7c1f9f54 from v6.6-rc2, and the hang disappeared. I repeated the blktests
test case 100 times, and did not see the hang.

I confirmed these results under two multipathd conditions: 1) with Fedora latest
device-mapper-multipath package v0.9.4, and 2) the latest multipath-tools v0.9.6
that I built from source code.

So, when the commit gets reverted, the hang disappears as I reported for
v6.5-rcX kernels.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-19  4:14                 ` Shinichiro Kawasaki
@ 2023-09-19  8:07                   ` Zhu Yanjun
  2023-09-19 16:30                     ` Pearson, Robert B
  2023-09-19 18:11                     ` Bob Pearson
  0 siblings, 2 replies; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-19  8:07 UTC (permalink / raw)
  To: Shinichiro Kawasaki; +Cc: Bob Pearson, Bart Van Assche, linux-rdma, linux-scsi

在 2023/9/19 12:14, Shinichiro Kawasaki 写道:
> On Sep 16, 2023 / 13:59, Zhu Yanjun wrote:
> [...]
>> On Debian, with the latest multipathd or revert the commit 9b4b7c1f9f54
>> ("RDMA/rxe: Add workqueue support for rxe tasks"), this problem will
>> disappear.
> 
> Zhu, thank you for the actions.
> 
>> On Fedora 38, if the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
>> for rxe tasks") is reverted, will this problem still appear?
>> I do not have such test environment. The commit is in the attachment,
>> can anyone have a test? Please let us know the test result. Thanks.
> 
> I tried the latest kernel tag v6.6-rc2 with my Fedora 38 test systems. With the
> v6.6-rc2 kernel, I still see the hang. I repeated the blktests test case srp/002
> 30 time or so, then the hang was recreated. Then I reverted the commit
> 9b4b7c1f9f54 from v6.6-rc2, and the hang disappeared. I repeated the blktests
> test case 100 times, and did not see the hang.
> 
> I confirmed these results under two multipathd conditions: 1) with Fedora latest
> device-mapper-multipath package v0.9.4, and 2) the latest multipath-tools v0.9.6
> that I built from source code.
> 
> So, when the commit gets reverted, the hang disappears as I reported for
> v6.5-rcX kernels.
Thanks, Shinichiro Kawasaki. Your help is appreciated.

This problem is related with the followings:

1). Linux distributions: Ubuntu, Debian and Fedora;

2). multipathd;

3). the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe
tasks")

On Ubuntu, with or without the commit, this problem does not occur.

On Debian, without this commit, this problem does not occur. With this 
commit, this problem will occur.

On Fedora, without this commit, this problem does not occur. With this 
commit, this problem will occur.

The commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe
tasks") is from Bob Pearson.

Hi, Bob, do you have any comments about this problem? It seems that this 
commit is not compatible with blktests.

Hi, Jason and Leon, please comment on this problem.

Thanks a lot.

Zhu Yanjun

^ permalink raw reply	[flat|nested] 87+ messages in thread

* RE: [bug report] blktests srp/002 hang
  2023-09-19  8:07                   ` Zhu Yanjun
@ 2023-09-19 16:30                     ` Pearson, Robert B
  2023-09-19 18:11                     ` Bob Pearson
  1 sibling, 0 replies; 87+ messages in thread
From: Pearson, Robert B @ 2023-09-19 16:30 UTC (permalink / raw)
  To: rpearsonhpe; +Cc: Bob Pearson, Bart Van Assche, linux-rdma, linux-scsi

My belief is that the issue is related to timing, not the logical operation of the code.
Work queues are just kernel processes and can be scheduled out (if not holding spinlocks),
while soft IRQs lock up the CPU until they exit. This can cause longer delays in responding
to ULPs. The work queue tasks for each QP are strictly single threaded, which is managed by
the work queue framework the same as for tasklets. The other evidence of this

-----Original Message-----
From: Zhu Yanjun <yanjun.zhu@linux.dev> 
Sent: Tuesday, September 19, 2023 3:07 AM
To: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Cc: Bob Pearson <rpearsonhpe@gmail.com>; Bart Van Assche <bvanassche@acm.org>; linux-rdma@vger.kernel.org; linux-scsi@vger.kernel.org
Subject: Re: [bug report] blktests srp/002 hang

在 2023/9/19 12:14, Shinichiro Kawasaki 写道:
> On Sep 16, 2023 / 13:59, Zhu Yanjun wrote:
> [...]
>> On Debian, with the latest multipathd or revert the commit 
>> 9b4b7c1f9f54
>> ("RDMA/rxe: Add workqueue support for rxe tasks"), this problem will 
>> disappear.
> 
> Zhu, thank you for the actions.
> 
>> On Fedora 38, if the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue 
>> support for rxe tasks") is reverted, will this problem still appear?
>> I do not have such test environment. The commit is in the attachment, 
>> can anyone have a test? Please let us know the test result. Thanks.
> 
> I tried the latest kernel tag v6.6-rc2 with my Fedora 38 test systems. 
> With the
> v6.6-rc2 kernel, I still see the hang. I repeated the blktests test 
> case srp/002
> 30 time or so, then the hang was recreated. Then I reverted the commit
> 9b4b7c1f9f54 from v6.6-rc2, and the hang disappeared. I repeated the 
> blktests test case 100 times, and did not see the hang.
> 
> I confirmed these results under two multipathd conditions: 1) with 
> Fedora latest device-mapper-multipath package v0.9.4, and 2) the 
> latest multipath-tools v0.9.6 that I built from source code.
> 
> So, when the commit gets reverted, the hang disappears as I reported 
> for v6.5-rcX kernels.
Thanks, Shinichiro Kawasaki. Your helps are appreciated.

This problem is related with the followings:

1). Linux distributions: Ubuntu, Debian and Fedora;

2). multipathd;

3). the commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe
tasks")

On Ubuntu, with or without the commit, this problem does not occur.

On Debian, without this commit, this problem does not occur. With this commit, this problem will occur.

On Fedora, without this commit, this problem does not occur. With this commit, this problem will occur.

The commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe
tasks") is from Bob Pearson.

Hi, Bob, do you have any comments about this problem? It seems that this commit is not compatible with blktests.

Hi, Jason and Leon, please comment on this problem.

Thanks a lot.

Zhu Yanjun

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-19  8:07                   ` Zhu Yanjun
  2023-09-19 16:30                     ` Pearson, Robert B
@ 2023-09-19 18:11                     ` Bob Pearson
  2023-09-20  4:22                       ` Zhu Yanjun
  1 sibling, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-09-19 18:11 UTC (permalink / raw)
  To: Zhu Yanjun, Shinichiro Kawasaki; +Cc: Bart Van Assche, linux-rdma, linux-scsi

On 9/19/23 03:07, Zhu Yanjun wrote:
> 在 2023/9/19 12:14, Shinichiro Kawasaki 写道:
>> On Sep 16, 2023 / 13:59, Zhu Yanjun wrote:
>> [...]
>>> On Debian, with the latest multipathd or revert the commit 9b4b7c1f9f54
>>> ("RDMA/rxe: Add workqueue support for rxe tasks"), this problem will
>>> disappear.
>>
>> Zhu, thank you for the actions.
>>
>>> On Fedora 38, if the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
>>> for rxe tasks") is reverted, will this problem still appear?
>>> I do not have such test environment. The commit is in the attachment,
>>> can anyone have a test? Please let us know the test result. Thanks.
>>
>> I tried the latest kernel tag v6.6-rc2 with my Fedora 38 test systems. With the
>> v6.6-rc2 kernel, I still see the hang. I repeated the blktests test case srp/002
>> 30 time or so, then the hang was recreated. Then I reverted the commit
>> 9b4b7c1f9f54 from v6.6-rc2, and the hang disappeared. I repeated the blktests
>> test case 100 times, and did not see the hang.
>>
>> I confirmed these results under two multipathd conditions: 1) with Fedora latest
>> device-mapper-multipath package v0.9.4, and 2) the latest multipath-tools v0.9.6
>> that I built from source code.
>>
>> So, when the commit gets reverted, the hang disappears as I reported for
>> v6.5-rcX kernels.
> Thanks, Shinichiro Kawasaki. Your helps are appreciated.
> 
> This problem is related with the followings:
> 
> 1). Linux distributions: Ubuntu, Debian and Fedora;
> 
> 2). multipathd;
> 
> 3). the commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks")
> 
> On Ubuntu, with or without the commit, this problem does not occur.
> 
> On Debian, without this commit, this problem does not occur. With this commit, this problem will occur.
> 
> On Fedora, without this commit, this problem does not occur. With this commit, this problem will occur.
> 
> The commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks") is from Bob Pearson.
> 
> Hi, Bob, do you have any comments about this problem? It seems that this commit is not compatible with blktests.
> 
> Hi, Jason and Leon, please comment on this problem.
> 
> Thanks a lot.
> 
> Zhu Yanjun

My belief is that the issue is related to timing, not the logical operation of the code.
Work queues are just kernel processes and can be scheduled out (if not holding spinlocks),
while soft IRQs lock up the CPU until they exit. This can cause longer delays in responding
to ULPs. The work queue tasks for each QP are strictly single threaded, which is managed by
the work queue framework the same as for tasklets.

Earlier I have also seen the exact same hang behavior with the siw driver, but not
recently. I have also seen the hang behavior be sensitive to logging changes. These are
indications that timing may be the cause of the issue.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-19 18:11                     ` Bob Pearson
@ 2023-09-20  4:22                       ` Zhu Yanjun
  2023-09-20 16:24                         ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-20  4:22 UTC (permalink / raw)
  To: Bob Pearson, Shinichiro Kawasaki; +Cc: Bart Van Assche, linux-rdma, linux-scsi

[-- Attachment #1: Type: text/plain, Size: 4134 bytes --]


在 2023/9/20 2:11, Bob Pearson 写道:
> On 9/19/23 03:07, Zhu Yanjun wrote:
>> 在 2023/9/19 12:14, Shinichiro Kawasaki 写道:
>>> On Sep 16, 2023 / 13:59, Zhu Yanjun wrote:
>>> [...]
>>>> On Debian, with the latest multipathd or revert the commit 9b4b7c1f9f54
>>>> ("RDMA/rxe: Add workqueue support for rxe tasks"), this problem will
>>>> disappear.
>>> Zhu, thank you for the actions.
>>>
>>>> On Fedora 38, if the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
>>>> for rxe tasks") is reverted, will this problem still appear?
>>>> I do not have such test environment. The commit is in the attachment,
>>>> can anyone have a test? Please let us know the test result. Thanks.
>>> I tried the latest kernel tag v6.6-rc2 with my Fedora 38 test systems. With the
>>> v6.6-rc2 kernel, I still see the hang. I repeated the blktests test case srp/002
>>> 30 time or so, then the hang was recreated. Then I reverted the commit
>>> 9b4b7c1f9f54 from v6.6-rc2, and the hang disappeared. I repeated the blktests
>>> test case 100 times, and did not see the hang.
>>>
>>> I confirmed these results under two multipathd conditions: 1) with Fedora latest
>>> device-mapper-multipath package v0.9.4, and 2) the latest multipath-tools v0.9.6
>>> that I built from source code.
>>>
>>> So, when the commit gets reverted, the hang disappears as I reported for
>>> v6.5-rcX kernels.
>> Thanks, Shinichiro Kawasaki. Your helps are appreciated.
>>
>> This problem is related with the followings:
>>
>> 1). Linux distributions: Ubuntu, Debian and Fedora;
>>
>> 2). multipathd;
>>
>> 3). the commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks")
>>
>> On Ubuntu, with or without the commit, this problem does not occur.
>>
>> On Debian, without this commit, this problem does not occur. With this commit, this problem will occur.
>>
>> On Fedora, without this commit, this problem does not occur. With this commit, this problem will occur.
>>
>> The commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks") is from Bob Pearson.
>>
>> Hi, Bob, do you have any comments about this problem? It seems that this commit is not compatible with blktests.
>>
>> Hi, Jason and Leon, please comment on this problem.
>>
>> Thanks a lot.
>>
>> Zhu Yanjun
> My belief is that the issue is related to timing not the logical operation of the code.
> Work queues are just kernel processes and can be scheduled (if not holding spinlocks)
> while soft IRQs lock up the CPU until they exit. This can cause longer delays in responding
> to ULPs. The work queue tasks for each QP are strictly single threaded which is managed by
> the work queue framework the same as tasklets.

Thanks, Bob. As you explain, the workqueue can be scheduled out, which can cause
longer delays in responding to ULPs. This can cause ULPs to hang. The tasklet, in
contrast, locks up the CPU until it exits, so the tasklet responds to ULPs in time.

To this, there are 3 solutions:

1). Try to make the workqueue respond to ULPs in time, so that this hang problem
is avoided. But the kernel is free to schedule workqueue workers out, so it is
difficult to avoid this longer delay.

2). Make both tasklet and workqueue work in RXE, with one of them as the default.
The user can choose between tasklet and workqueue via a kernel module parameter
or sysctl variable. This will cost a lot of time and effort to implement.

3). Revert the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for
rxe tasks"). Shinichiro Kawasaki confirmed that this can fix this regression.
The patch is in the attachment.


Hi, Bob, please comment.

Hi, Jason and Leon, please also comment on this.

Thanks a lot.

>
> Earlier in time I have also seen the exact same hang behavior with the siw driver but not
> recently. Also I have seen sensitivity to logging changes in the hang behavior. These are

This is a regression in RXE caused by the commit 9b4b7c1f9f54
("RDMA/rxe: Add workqueue support for rxe tasks").

We should fix it.

Zhu Yanjun

> indications that timing may be the cause of the issue.
>
> Bob

[-- Attachment #2: 0001-Revert-RDMA-rxe-Add-workqueue-support-for-rxe-tasks.patch --]
[-- Type: text/plain, Size: 9149 bytes --]

From fd2360edbc9171298d2e91fd9b74b4c3022db9d4 Mon Sep 17 00:00:00 2001
From: Zhu Yanjun <yanjun.zhu@linux.dev>
Date: Fri, 15 Sep 2023 23:07:17 -0400
Subject: [PATCH 1/1] Revert "RDMA/rxe: Add workqueue support for rxe tasks"

This reverts commit 9b4b7c1f9f54120940e243251e2b1407767b3381.

Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
---
 drivers/infiniband/sw/rxe/rxe.c      |   9 +--
 drivers/infiniband/sw/rxe/rxe_task.c | 110 ++++++++++++---------------
 drivers/infiniband/sw/rxe/rxe_task.h |   6 +-
 3 files changed, 49 insertions(+), 76 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 54c723a6edda..7a7e713de52d 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -212,15 +212,9 @@ static int __init rxe_module_init(void)
 {
 	int err;
 
-	err = rxe_alloc_wq();
-	if (err)
-		return err;
-
 	err = rxe_net_init();
-	if (err) {
-		rxe_destroy_wq();
+	if (err)
 		return err;
-	}
 
 	rdma_link_register(&rxe_link_ops);
 	pr_info("loaded\n");
@@ -232,7 +226,6 @@ static void __exit rxe_module_exit(void)
 	rdma_link_unregister(&rxe_link_ops);
 	ib_unregister_driver(RDMA_DRIVER_RXE);
 	rxe_net_exit();
-	rxe_destroy_wq();
 
 	pr_info("unloaded\n");
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
index 1501120d4f52..fb9a6bc8e620 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.c
+++ b/drivers/infiniband/sw/rxe/rxe_task.c
@@ -6,24 +6,8 @@
 
 #include "rxe.h"
 
-static struct workqueue_struct *rxe_wq;
-
-int rxe_alloc_wq(void)
-{
-	rxe_wq = alloc_workqueue("rxe_wq", WQ_UNBOUND, WQ_MAX_ACTIVE);
-	if (!rxe_wq)
-		return -ENOMEM;
-
-	return 0;
-}
-
-void rxe_destroy_wq(void)
-{
-	destroy_workqueue(rxe_wq);
-}
-
 /* Check if task is idle i.e. not running, not scheduled in
- * work queue and not draining. If so move to busy to
+ * tasklet queue and not draining. If so move to busy to
  * reserve a slot in do_task() by setting to busy and taking
  * a qp reference to cover the gap from now until the task finishes.
  * state will move out of busy if task returns a non zero value
@@ -37,6 +21,9 @@ static bool __reserve_if_idle(struct rxe_task *task)
 {
 	WARN_ON(rxe_read(task->qp) <= 0);
 
+	if (task->tasklet.state & BIT(TASKLET_STATE_SCHED))
+		return false;
+
 	if (task->state == TASK_STATE_IDLE) {
 		rxe_get(task->qp);
 		task->state = TASK_STATE_BUSY;
@@ -51,7 +38,7 @@ static bool __reserve_if_idle(struct rxe_task *task)
 }
 
 /* check if task is idle or drained and not currently
- * scheduled in the work queue. This routine is
+ * scheduled in the tasklet queue. This routine is
  * called by rxe_cleanup_task or rxe_disable_task to
  * see if the queue is empty.
  * Context: caller should hold task->lock.
@@ -59,7 +46,7 @@ static bool __reserve_if_idle(struct rxe_task *task)
  */
 static bool __is_done(struct rxe_task *task)
 {
-	if (work_pending(&task->work))
+	if (task->tasklet.state & BIT(TASKLET_STATE_SCHED))
 		return false;
 
 	if (task->state == TASK_STATE_IDLE ||
@@ -90,23 +77,23 @@ static bool is_done(struct rxe_task *task)
  * schedules the task. They must call __reserve_if_idle to
  * move the task to busy before calling or scheduling.
  * The task can also be moved to drained or invalid
- * by calls to rxe_cleanup_task or rxe_disable_task.
+ * by calls to rxe-cleanup_task or rxe_disable_task.
  * In that case tasks which get here are not executed but
  * just flushed. The tasks are designed to look to see if
- * there is work to do and then do part of it before returning
+ * there is work to do and do part of it before returning
  * here with a return value of zero until all the work
- * has been consumed then it returns a non-zero value.
+ * has been consumed then it retuens a non-zero value.
  * The number of times the task can be run is limited by
  * max iterations so one task cannot hold the cpu forever.
- * If the limit is hit and work remains the task is rescheduled.
  */
-static void do_task(struct rxe_task *task)
+static void do_task(struct tasklet_struct *t)
 {
+	int cont;
+	int ret;
+	struct rxe_task *task = from_tasklet(task, t, tasklet);
 	unsigned int iterations;
 	unsigned long flags;
 	int resched = 0;
-	int cont;
-	int ret;
 
 	WARN_ON(rxe_read(task->qp) <= 0);
 
@@ -128,22 +115,25 @@ static void do_task(struct rxe_task *task)
 		} while (ret == 0 && iterations-- > 0);
 
 		spin_lock_irqsave(&task->lock, flags);
-		/* we're not done yet but we ran out of iterations.
-		 * yield the cpu and reschedule the task
-		 */
-		if (!ret) {
-			task->state = TASK_STATE_IDLE;
-			resched = 1;
-			goto exit;
-		}
-
 		switch (task->state) {
 		case TASK_STATE_BUSY:
-			task->state = TASK_STATE_IDLE;
+			if (ret) {
+				task->state = TASK_STATE_IDLE;
+			} else {
+				/* This can happen if the client
+				 * can add work faster than the
+				 * tasklet can finish it.
+				 * Reschedule the tasklet and exit
+				 * the loop to give up the cpu
+				 */
+				task->state = TASK_STATE_IDLE;
+				resched = 1;
+			}
 			break;
 
-		/* someone tried to schedule the task while we
-		 * were running, keep going
+		/* someone tried to run the task since the last time we called
+		 * func, so we will call one more time regardless of the
+		 * return value
 		 */
 		case TASK_STATE_ARMED:
 			task->state = TASK_STATE_BUSY;
@@ -151,24 +141,22 @@ static void do_task(struct rxe_task *task)
 			break;
 
 		case TASK_STATE_DRAINING:
-			task->state = TASK_STATE_DRAINED;
+			if (ret)
+				task->state = TASK_STATE_DRAINED;
+			else
+				cont = 1;
 			break;
 
 		default:
 			WARN_ON(1);
-			rxe_dbg_qp(task->qp, "unexpected task state = %d",
-				   task->state);
-			task->state = TASK_STATE_IDLE;
+			rxe_info_qp(task->qp, "unexpected task state = %d", task->state);
 		}
 
-exit:
 		if (!cont) {
 			task->num_done++;
 			if (WARN_ON(task->num_done != task->num_sched))
-				rxe_dbg_qp(
-					task->qp,
-					"%ld tasks scheduled, %ld tasks done",
-					task->num_sched, task->num_done);
+				rxe_err_qp(task->qp, "%ld tasks scheduled, %ld tasks done",
+					   task->num_sched, task->num_done);
 		}
 		spin_unlock_irqrestore(&task->lock, flags);
 	} while (cont);
@@ -181,12 +169,6 @@ static void do_task(struct rxe_task *task)
 	rxe_put(task->qp);
 }
 
-/* wrapper around do_task to fix argument for work queue */
-static void do_work(struct work_struct *work)
-{
-	do_task(container_of(work, struct rxe_task, work));
-}
-
 int rxe_init_task(struct rxe_task *task, struct rxe_qp *qp,
 		  int (*func)(struct rxe_qp *))
 {
@@ -194,9 +176,11 @@ int rxe_init_task(struct rxe_task *task, struct rxe_qp *qp,
 
 	task->qp = qp;
 	task->func = func;
+
+	tasklet_setup(&task->tasklet, do_task);
+
 	task->state = TASK_STATE_IDLE;
 	spin_lock_init(&task->lock);
-	INIT_WORK(&task->work, do_work);
 
 	return 0;
 }
@@ -229,6 +213,8 @@ void rxe_cleanup_task(struct rxe_task *task)
 	while (!is_done(task))
 		cond_resched();
 
+	tasklet_kill(&task->tasklet);
+
 	spin_lock_irqsave(&task->lock, flags);
 	task->state = TASK_STATE_INVALID;
 	spin_unlock_irqrestore(&task->lock, flags);
@@ -240,7 +226,7 @@ void rxe_cleanup_task(struct rxe_task *task)
 void rxe_run_task(struct rxe_task *task)
 {
 	unsigned long flags;
-	bool run;
+	int run;
 
 	WARN_ON(rxe_read(task->qp) <= 0);
 
@@ -249,11 +235,11 @@ void rxe_run_task(struct rxe_task *task)
 	spin_unlock_irqrestore(&task->lock, flags);
 
 	if (run)
-		do_task(task);
+		do_task(&task->tasklet);
 }
 
-/* schedule the task to run later as a work queue entry.
- * the queue_work call can be called holding
+/* schedule the task to run later as a tasklet.
+ * the tasklet)schedule call can be called holding
  * the lock.
  */
 void rxe_sched_task(struct rxe_task *task)
@@ -264,7 +250,7 @@ void rxe_sched_task(struct rxe_task *task)
 
 	spin_lock_irqsave(&task->lock, flags);
 	if (__reserve_if_idle(task))
-		queue_work(rxe_wq, &task->work);
+		tasklet_schedule(&task->tasklet);
 	spin_unlock_irqrestore(&task->lock, flags);
 }
 
@@ -291,9 +277,7 @@ void rxe_disable_task(struct rxe_task *task)
 	while (!is_done(task))
 		cond_resched();
 
-	spin_lock_irqsave(&task->lock, flags);
-	task->state = TASK_STATE_DRAINED;
-	spin_unlock_irqrestore(&task->lock, flags);
+	tasklet_disable(&task->tasklet);
 }
 
 void rxe_enable_task(struct rxe_task *task)
@@ -307,7 +291,7 @@ void rxe_enable_task(struct rxe_task *task)
 		spin_unlock_irqrestore(&task->lock, flags);
 		return;
 	}
-
 	task->state = TASK_STATE_IDLE;
+	tasklet_enable(&task->tasklet);
 	spin_unlock_irqrestore(&task->lock, flags);
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_task.h b/drivers/infiniband/sw/rxe/rxe_task.h
index a63e258b3d66..facb7c8e3729 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.h
+++ b/drivers/infiniband/sw/rxe/rxe_task.h
@@ -22,7 +22,7 @@ enum {
  * called again.
  */
 struct rxe_task {
-	struct work_struct	work;
+	struct tasklet_struct	tasklet;
 	int			state;
 	spinlock_t		lock;
 	struct rxe_qp		*qp;
@@ -32,10 +32,6 @@ struct rxe_task {
 	long			num_done;
 };
 
-int rxe_alloc_wq(void);
-
-void rxe_destroy_wq(void);
-
 /*
  * init rxe_task structure
  *	qp  => parameter to pass to func
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20  4:22                       ` Zhu Yanjun
@ 2023-09-20 16:24                         ` Bob Pearson
  2023-09-20 16:36                           ` Bart Van Assche
  0 siblings, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-09-20 16:24 UTC (permalink / raw)
  To: Zhu Yanjun, Shinichiro Kawasaki; +Cc: Bart Van Assche, linux-rdma, linux-scsi

On 9/19/23 23:22, Zhu Yanjun wrote:
> 
> 在 2023/9/20 2:11, Bob Pearson 写道:
>> On 9/19/23 03:07, Zhu Yanjun wrote:
>>> 在 2023/9/19 12:14, Shinichiro Kawasaki 写道:
>>>> On Sep 16, 2023 / 13:59, Zhu Yanjun wrote:
>>>> [...]
>>>>> On Debian, with the latest multipathd or revert the commit 9b4b7c1f9f54
>>>>> ("RDMA/rxe: Add workqueue support for rxe tasks"), this problem will
>>>>> disappear.
>>>> Zhu, thank you for the actions.
>>>>
>>>>> On Fedora 38, if the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
>>>>> for rxe tasks") is reverted, will this problem still appear?
>>>>> I do not have such test environment. The commit is in the attachment,
>>>>> can anyone have a test? Please let us know the test result. Thanks.
>>>> I tried the latest kernel tag v6.6-rc2 with my Fedora 38 test systems. With the
>>>> v6.6-rc2 kernel, I still see the hang. I repeated the blktests test case srp/002
>>>> 30 time or so, then the hang was recreated. Then I reverted the commit
>>>> 9b4b7c1f9f54 from v6.6-rc2, and the hang disappeared. I repeated the blktests
>>>> test case 100 times, and did not see the hang.
>>>>
>>>> I confirmed these results under two multipathd conditions: 1) with Fedora latest
>>>> device-mapper-multipath package v0.9.4, and 2) the latest multipath-tools v0.9.6
>>>> that I built from source code.
>>>>
>>>> So, when the commit gets reverted, the hang disappears as I reported for
>>>> v6.5-rcX kernels.
>>> Thanks, Shinichiro Kawasaki. Your helps are appreciated.
>>>
>>> This problem is related with the followings:
>>>
>>> 1). Linux distributions: Ubuntu, Debian and Fedora;
>>>
>>> 2). multipathd;
>>>
>>> 3). the commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks")
>>>
>>> On Ubuntu, with or without the commit, this problem does not occur.
>>>
>>> On Debian, without this commit, this problem does not occur. With this commit, this problem will occur.
>>>
>>> On Fedora, without this commit, this problem does not occur. With this commit, this problem will occur.
>>>
>>> The commits 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks") is from Bob Pearson.
>>>
>>> Hi, Bob, do you have any comments about this problem? It seems that this commit is not compatible with blktests.
>>>
>>> Hi, Jason and Leon, please comment on this problem.
>>>
>>> Thanks a lot.
>>>
>>> Zhu Yanjun
>> My belief is that the issue is related to timing not the logical operation of the code.
>> Work queues are just kernel processes and can be scheduled (if not holding spinlocks)
>> while soft IRQs lock up the CPU until they exit. This can cause longer delays in responding
>> to ULPs. The work queue tasks for each QP are strictly single threaded which is managed by
>> the work queue framework the same as tasklets.
> 
> Thanks, Bob. From what you say, the workqueue can be scheduled, and this can cause longer
> delays in responding to ULPs, which can make the ULPs hang. A tasklet, by contrast, locks
> up the CPU until it exits, so it responds to ULPs in time.
> 
> Given that, there are three possible solutions:
> 
> 1). Make the workqueue respond to ULPs in time, so that the hang is avoided. But the kernel
> is free to schedule workqueues, so it is difficult to avoid the longer delay.
> 
> 2). Support both tasklets and workqueues in RXE, with one of them as the default, and let
> the user choose via a kernel module parameter or sysctl variable. This would take a lot of
> time and effort to implement.
> 
> 3). Revert the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> Shinichiro Kawasaki confirmed that this fixes the regression. The patch is in the attachment.
> 
> Hi, Bob, please comment.
> 
> Hi, Jason && Leon, please also comment on this.
> 
> Thanks a lot.
> 
>>
>> Earlier I also saw the exact same hang behavior with the siw driver, but not
>> recently. Also I have seen sensitivity to logging changes in the hang behavior. These are
> 
> This is a regression in RXE caused by the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> 
> We should fix it.
> 
> Zhu Yanjun
> 
>> indications that timing may be the cause of the issue.
>>
>> Bob
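
Zhu's option 2 above (supporting both backends and letting the user choose one) could look
roughly like the following pseudocode sketch. `use_wq`, `rxe_do_work`, and `rxe_do_tasklet`
are hypothetical names for illustration, not the driver's actual API:

```c
/* Pseudocode sketch only; not the rxe driver's actual code. */
static bool use_wq = true;
module_param(use_wq, bool, 0444);
MODULE_PARM_DESC(use_wq, "run rxe tasks on a workqueue (true) or on tasklets (false)");

int rxe_init_task(struct rxe_task *task, void *arg, int (*func)(void *))
{
	if (use_wq)
		INIT_WORK(&task->work, rxe_do_work);            /* schedulable kthread context */
	else
		tasklet_setup(&task->tasklet, rxe_do_tasklet);  /* softirq context */
	/* ... common initialization ... */
	return 0;
}
```

As Zhu notes, carrying both paths doubles the surface that has to be tested, which is part of
the cost of this option.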

The verbs APIs do not make real-time commitments. If a ULP fails because of response times, the
problem is in the ULP, not in the verbs provider.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20 16:24                         ` Bob Pearson
@ 2023-09-20 16:36                           ` Bart Van Assche
  2023-09-20 17:18                             ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-09-20 16:36 UTC (permalink / raw)
  To: Bob Pearson, Zhu Yanjun, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 9/20/23 09:24, Bob Pearson wrote:
> The verbs APIs do not make real time commitments. If a ULP fails 
> because of response times it is the problem in the ULP not in the 
> verbs provider.

I think there is evidence that the root cause is in the RXE driver. I
haven't seen any evidence that there would be any issues in any of the
involved ULP drivers. Am I perhaps missing something?

Bart.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20 16:36                           ` Bart Van Assche
@ 2023-09-20 17:18                             ` Bob Pearson
  2023-09-20 17:22                               ` Bart Van Assche
  0 siblings, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-09-20 17:18 UTC (permalink / raw)
  To: Bart Van Assche, Zhu Yanjun, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 9/20/23 11:36, Bart Van Assche wrote:
> On 9/20/23 09:24, Bob Pearson wrote:
>> The verbs APIs do not make real time commitments. If a ULP fails because of response times it is the problem in the ULP not in the verbs provider.
> 
> I think there is evidence that the root cause is in the RXE driver. I
> haven't seen any evidence that there would be any issues in any of the
> involved ULP drivers. Am I perhaps missing something?
> 
> Bart.

I agree it is definitely possible. But I have also seen the same behavior in the siw driver, which is completely
independent. I have tried but have not been able to figure out what the ULPs are waiting for when the hangs
occur. If someone who has a good understanding of the ULPs could catch a hang and figure out what is missing, it
would give a clue as to what is going on.

As mentioned above, at the moment Ubuntu fails rarely. But it used to fail reliably (srp/002 about 75% of
the time and srp/011 about 99% of the time). There haven't been any changes to rxe that explain this.

Bob


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20 17:18                             ` Bob Pearson
@ 2023-09-20 17:22                               ` Bart Van Assche
  2023-09-20 17:29                                 ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-09-20 17:22 UTC (permalink / raw)
  To: Bob Pearson, Zhu Yanjun, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 9/20/23 10:18, Bob Pearson wrote:
> But I have also seen the same behavior in the siw driver which is
> completely independent.

Hmm ... I haven't seen any hangs yet with the siw driver.

> As mentioned above at the moment Ubuntu is failing rarely. But it 
> used to fail reliably (srp/002 about 75% of the time and srp/011 
> about 99% of the time.) There haven't been any changes to rxe to 
> explain this.

I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
support for rxe tasks")?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20 17:22                               ` Bart Van Assche
@ 2023-09-20 17:29                                 ` Bob Pearson
  2023-09-21  5:46                                   ` Zhu Yanjun
                                                     ` (2 more replies)
  0 siblings, 3 replies; 87+ messages in thread
From: Bob Pearson @ 2023-09-20 17:29 UTC (permalink / raw)
  To: Bart Van Assche, Zhu Yanjun, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi

On 9/20/23 12:22, Bart Van Assche wrote:
> On 9/20/23 10:18, Bob Pearson wrote:
>> But I have also seen the same behavior in the siw driver which is
>> completely independent.
> 
> Hmm ... I haven't seen any hangs yet with the siw driver.

I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
> 
>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
> 
> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
> support for rxe tasks")?

That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
and wqs. But after updating Ubuntu and the kernel at some point they all went away.

> 
> Thanks,
> 
> Bart.



^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20 17:29                                 ` Bob Pearson
@ 2023-09-21  5:46                                   ` Zhu Yanjun
  2023-09-21 10:06                                   ` Zhu Yanjun
  2023-09-21 14:23                                   ` Rain River
  2 siblings, 0 replies; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-21  5:46 UTC (permalink / raw)
  To: Bob Pearson, Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi


在 2023/9/21 1:29, Bob Pearson 写道:
> On 9/20/23 12:22, Bart Van Assche wrote:
>> On 9/20/23 10:18, Bob Pearson wrote:
>>> But I have also seen the same behavior in the siw driver which is
>>> completely independent.
>> Hmm ... I haven't seen any hangs yet with the siw driver.
> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>> support for rxe tasks")?
> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
> and wqs. But after updating Ubuntu and the kernel at some point they all went away.

Thanks, Bob. From what you said, this problem does not occur on Ubuntu
now.

So far:

On Debian, without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
support for rxe tasks"), this hang does not occur.

On Fedora, the situation is similar to Debian.

On Ubuntu, this problem does not occur now, but it is not clear whether
the commit is present in the tested kernel.

Hi, Bob, can you run tests without the above commit to verify whether the
same problem occurs on Ubuntu?

Can anyone with a test environment verify whether this problem still
occurs on Ubuntu without this commit?
Jason && Leon, please comment on this.

Thanks a lot.

Zhu Yanjun

>
>> Thanks,
>>
>> Bart.
>

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20 17:29                                 ` Bob Pearson
  2023-09-21  5:46                                   ` Zhu Yanjun
@ 2023-09-21 10:06                                   ` Zhu Yanjun
  2023-09-21 14:23                                   ` Rain River
  2 siblings, 0 replies; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-21 10:06 UTC (permalink / raw)
  To: Bob Pearson, Bart Van Assche, Shinichiro Kawasaki; +Cc: linux-rdma, linux-scsi


在 2023/9/21 1:29, Bob Pearson 写道:
> On 9/20/23 12:22, Bart Van Assche wrote:
>> On 9/20/23 10:18, Bob Pearson wrote:
>>> But I have also seen the same behavior in the siw driver which is
>>> completely independent.
>> Hmm ... I haven't seen any hangs yet with the siw driver.
> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>> support for rxe tasks")?
> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
Thanks, Bob. From what you said, this problem does not occur on Ubuntu
now.

So far:

On Debian, without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
support for rxe tasks"), this hang does not occur.

On Fedora, the situation is similar to Debian.

On Ubuntu, this problem does not occur now, but it is not clear whether
the commit is present in the tested kernel.

Hi, Bob, can you run tests without the above commit to verify whether the
same problem occurs on Ubuntu?

Can anyone with a test environment verify whether this problem still
occurs on Ubuntu without this commit?

Jason && Leon, please comment on this.

Thanks a lot.

Zhu Yanjun
>
>> Thanks,
>>
>> Bart.
>

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-20 17:29                                 ` Bob Pearson
  2023-09-21  5:46                                   ` Zhu Yanjun
  2023-09-21 10:06                                   ` Zhu Yanjun
@ 2023-09-21 14:23                                   ` Rain River
  2023-09-21 14:39                                     ` Bob Pearson
  2 siblings, 1 reply; 87+ messages in thread
From: Rain River @ 2023-09-21 14:23 UTC (permalink / raw)
  To: Bob Pearson
  Cc: Bart Van Assche, Zhu Yanjun, Shinichiro Kawasaki, linux-rdma, linux-scsi

On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>
> On 9/20/23 12:22, Bart Van Assche wrote:
> > On 9/20/23 10:18, Bob Pearson wrote:
> >> But I have also seen the same behavior in the siw driver which is
> >> completely independent.
> >
> > Hmm ... I haven't seen any hangs yet with the siw driver.
>
> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
> >
> >> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
> >
> > I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
> > support for rxe tasks")?
>
> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
> and wqs. But after updating Ubuntu and the kernel at some point they all went away.

I ran tests on the latest Ubuntu with the latest kernel, v6.6-rc2, with
the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe
tasks") reverted.
I ran the blktests test about 30 times, and this problem did not occur.

So I confirm that this hang problem does not occur on Ubuntu without the
commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").

Nanthan

>
> >
> > Thanks,
> >
> > Bart.
>
>

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-21 14:23                                   ` Rain River
@ 2023-09-21 14:39                                     ` Bob Pearson
  2023-09-21 15:08                                       ` Zhu Yanjun
  2023-09-21 15:10                                       ` Zhu Yanjun
  0 siblings, 2 replies; 87+ messages in thread
From: Bob Pearson @ 2023-09-21 14:39 UTC (permalink / raw)
  To: Rain River, Daisuke Matsuda
  Cc: Bart Van Assche, Zhu Yanjun, Shinichiro Kawasaki, linux-rdma, linux-scsi

On 9/21/23 09:23, Rain River wrote:
> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>
>> On 9/20/23 12:22, Bart Van Assche wrote:
>>> On 9/20/23 10:18, Bob Pearson wrote:
>>>> But I have also seen the same behavior in the siw driver which is
>>>> completely independent.
>>>
>>> Hmm ... I haven't seen any hangs yet with the siw driver.
>>
>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>>
>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>>>
>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>>> support for rxe tasks")?
>>
>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
> 
> I made tests on the latest Ubuntu with the latest kernel without the
> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> workqueue support for rxe tasks") is reverted.
> I made blktest tests for about 30 times, this problem does not occur.
> 
> So I confirm that without this commit, this hang problem does not
> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> workqueue support for rxe tasks").
> 
> Nanthan
> 
>>
>>>
>>> Thanks,
>>>
>>> Bart.
>>
>>

This commit is very important for several reasons. It is needed for the ODP implementation
that is in the works from Daisuke Matsuda, and also for performance scaling with QP count. The work
queue implementation scales well with increasing QP numbers while the tasklet implementation
does not. This is critical for the driver's use in large-scale storage applications. So, if
there is a bug in the work queue implementation, it needs to be fixed, not reverted.

I am still hoping that someone will diagnose what is causing the ULPs to hang, in terms of
what missing event is causing them to wait.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-21 14:39                                     ` Bob Pearson
@ 2023-09-21 15:08                                       ` Zhu Yanjun
  2023-09-21 15:10                                       ` Zhu Yanjun
  1 sibling, 0 replies; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-21 15:08 UTC (permalink / raw)
  To: Bob Pearson, Rain River, Daisuke Matsuda
  Cc: Bart Van Assche, Shinichiro Kawasaki, linux-rdma, linux-scsi


在 2023/9/21 22:39, Bob Pearson 写道:
> On 9/21/23 09:23, Rain River wrote:
>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>> On 9/20/23 12:22, Bart Van Assche wrote:
>>>> On 9/20/23 10:18, Bob Pearson wrote:
>>>>> But I have also seen the same behavior in the siw driver which is
>>>>> completely independent.
>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>>>> support for rxe tasks")?
>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
>> I made tests on the latest Ubuntu with the latest kernel without the
>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>> workqueue support for rxe tasks") is reverted.
>> I made blktest tests for about 30 times, this problem does not occur.
>>
>> So I confirm that without this commit, this hang problem does not
>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>> workqueue support for rxe tasks").
>>
>> Nanthan
>>
>>>> Thanks,
>>>>
>>>> Bart.
>>>
> This commit is very important for several reasons. It is needed for the ODP implementation
> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
> queue implementation scales well with increasing qp number while the tasklet implementation
> does not. This is critical for the drivers use in large scale storage applications. So, if
> there is a bug in the work queue implementation it needs to be fixed not reverted.
>
> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
> something missing causing it to wait.

Hi, Bob


You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support 
for rxe tasks").

You should be very familiar with this commit.

And this commit causes a regression.

So you should delve into the source code to find the root cause, then 
fix it.


Jason && Leon, please comment on this.


Best Regards,

Zhu Yanjun

>
> Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-21 14:39                                     ` Bob Pearson
  2023-09-21 15:08                                       ` Zhu Yanjun
@ 2023-09-21 15:10                                       ` Zhu Yanjun
  2023-09-22 18:14                                         ` Bob Pearson
  1 sibling, 1 reply; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-21 15:10 UTC (permalink / raw)
  To: Bob Pearson, Rain River, Daisuke Matsuda, Jason Gunthorpe, leon
  Cc: Bart Van Assche, Shinichiro Kawasaki, RDMA mailing list, linux-scsi


在 2023/9/21 22:39, Bob Pearson 写道:
> On 9/21/23 09:23, Rain River wrote:
>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>> On 9/20/23 12:22, Bart Van Assche wrote:
>>>> On 9/20/23 10:18, Bob Pearson wrote:
>>>>> But I have also seen the same behavior in the siw driver which is
>>>>> completely independent.
>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>>>> support for rxe tasks")?
>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
>> I made tests on the latest Ubuntu with the latest kernel without the
>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>> workqueue support for rxe tasks") is reverted.
>> I made blktest tests for about 30 times, this problem does not occur.
>>
>> So I confirm that without this commit, this hang problem does not
>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>> workqueue support for rxe tasks").
>>
>> Nanthan
>>
>>>> Thanks,
>>>>
>>>> Bart.
>>>
> This commit is very important for several reasons. It is needed for the ODP implementation
> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
> queue implementation scales well with increasing qp number while the tasklet implementation
> does not. This is critical for the drivers use in large scale storage applications. So, if
> there is a bug in the work queue implementation it needs to be fixed not reverted.
>
> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
> something missing causing it to wait.

Hi, Bob


You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support 
for rxe tasks").

You should be very familiar with this commit.

And this commit causes a regression.

So you should delve into the source code to find the root cause, then 
fix it.


Jason && Leon, please comment on this.


Best Regards,

Zhu Yanjun

>
> Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-08-21  6:46 [bug report] blktests srp/002 hang Shinichiro Kawasaki
  2023-08-22  1:46 ` Bob Pearson
@ 2023-09-22 11:06 ` Linux regression tracking #adding (Thorsten Leemhuis)
  2023-10-13 12:51   ` Linux regression tracking #update (Thorsten Leemhuis)
  1 sibling, 1 reply; 87+ messages in thread
From: Linux regression tracking #adding (Thorsten Leemhuis) @ 2023-09-22 11:06 UTC (permalink / raw)
  To: linux-rdma, linux-scsi; +Cc: Linux kernel regressions list

[TLDR: I'm adding this report to the list of tracked Linux kernel
regressions; the text you find below is based on a few templates
paragraphs you might have encountered already in similar form.
See link in footer if these mails annoy you.]

On 21.08.23 08:46, Shinichiro Kawasaki wrote:
> I observed a process hang at the blktests test case srp/002 occasionally, using
> kernel v6.5-rcX. Kernel reported stall of many kworkers [1]. PID 2757 hanged at
> inode_sleep_on_writeback(). Other kworkers hanged at __inode_wait_for_writeback.
> 
> The hang is recreated in stable manner by repeating the test case srp/002 (from
> 15 times to 30 times).
> 
> I bisected and found the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
> for rxe tasks") looks like the trigger commit. When I revert it from the kernel
> v6.5-rc7, the hang symptom disappears. I'm not sure how the commit relates to
> the hang. Comments will be welcomed.
> […]

Thanks for the report. To be sure the issue doesn't fall through the
cracks unnoticed, I'm adding it to regzbot, the Linux kernel regression
tracking bot:

#regzbot ^introduced 9b4b7c1f9f54
#regzbot title RDMA/rxe: occasional process hang at the blktests test
case srp/002
#regzbot ignore-activity

This isn't a regression? This issue or a fix for it are already
discussed somewhere else? It was fixed already? You want to clarify when
the regression started to happen? Or point out I got the title or
something else totally wrong? Then just reply and tell me -- ideally
while also telling regzbot about it, as explained by the page listed in
the footer of this mail.

Developers: When fixing the issue, remember to add 'Link:' tags pointing
to the report (the parent of this mail). See page linked in footer for
details.

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
That page also explains what to do if mails like this annoy you.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-21 15:10                                       ` Zhu Yanjun
@ 2023-09-22 18:14                                         ` Bob Pearson
  2023-09-22 22:06                                           ` Bart Van Assche
  2023-09-24  1:17                                           ` Rain River
  0 siblings, 2 replies; 87+ messages in thread
From: Bob Pearson @ 2023-09-22 18:14 UTC (permalink / raw)
  To: Zhu Yanjun, Rain River, Daisuke Matsuda, Jason Gunthorpe, leon
  Cc: Bart Van Assche, Shinichiro Kawasaki, RDMA mailing list, linux-scsi

On 9/21/23 10:10, Zhu Yanjun wrote:
> 
> 在 2023/9/21 22:39, Bob Pearson 写道:
>> On 9/21/23 09:23, Rain River wrote:
>>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>>> On 9/20/23 12:22, Bart Van Assche wrote:
>>>>> On 9/20/23 10:18, Bob Pearson wrote:
>>>>>> But I have also seen the same behavior in the siw driver which is
>>>>>> completely independent.
>>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
>>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>>>>> support for rxe tasks")?
>>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
>>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
>>> I made tests on the latest Ubuntu with the latest kernel without the
>>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
>>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>>> workqueue support for rxe tasks") is reverted.
>>> I made blktest tests for about 30 times, this problem does not occur.
>>>
>>> So I confirm that without this commit, this hang problem does not
>>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>>> workqueue support for rxe tasks").
>>>
>>> Nanthan
>>>
>>>>> Thanks,
>>>>>
>>>>> Bart.
>>>>
>> This commit is very important for several reasons. It is needed for the ODP implementation
>> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
>> queue implementation scales well with increasing qp number while the tasklet implementation
>> does not. This is critical for the drivers use in large scale storage applications. So, if
>> there is a bug in the work queue implementation it needs to be fixed not reverted.
>>
>> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
>> something missing causing it to wait.
> 
> Hi, Bob
> 
> 
> You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> 
> You should be very familiar with this commit.
> 
> And this commit causes regression.
> 
> So you should delved into the source code to find the root cause, then fix it.

Zhu,

I have spent tons of time over the past months trying to figure out what is happening with blktests.
As I have mentioned several times, I have seen the exact same failure in siw in the past, although
currently that doesn't seem to happen, so I had been suspecting that the problem may be in the ULP.
The challenge is that blktests represents a huge stack of software, much of which I am not
familiar with. The bug is a hang in layers above the rxe driver, and so far no one has been able to
say with any specificity that the rxe driver failed to do something needed to make progress or violated
expected behavior. Without any clue as to where to look, it has been hard to make progress.

My main motivation is making Lustre run on rxe, and it does; it's fast enough to meet our needs.
Lustre is similar to srp as a ULP, and in all of our testing we have never seen a similar hang.
Other hangs, to be sure, but not this one. I believe that this bug will never get resolved until someone with
a good understanding of the ULP drivers makes an effort to find out where and why the hang is occurring.
From there it should be straightforward to fix the problem. I am continuing to investigate and am learning
the device-mapper/multipath/srp/scsi stack, but I have a long way to go.

Bob


> 
> 
> Jason && Leon, please comment on this.
> 
> 
> Best Regards,
> 
> Zhu Yanjun
> 
>>
>> Bob


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-22 18:14                                         ` Bob Pearson
@ 2023-09-22 22:06                                           ` Bart Van Assche
  2023-09-24  1:17                                           ` Rain River
  1 sibling, 0 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-09-22 22:06 UTC (permalink / raw)
  To: Bob Pearson, Zhu Yanjun, Rain River, Daisuke Matsuda,
	Jason Gunthorpe, leon
  Cc: Shinichiro Kawasaki, RDMA mailing list, linux-scsi

On 9/22/23 11:14, Bob Pearson wrote:
> I have spent tons of time over the months trying to figure out what 
> is happening with blktests. As I have mentioned several times I have 
> seen the same exact failure in siw in the past although currently 
> that doesn't seem to happen so I had been suspecting that the
> problem may be in the ULP. The challenge is that the blktests
> represents a huge stack of software much of which I am not familiar
> with. The bug is a hang in layers above the rxe driver and so far no
> one has been able to say with any specificity the rxe driver failed
> to do something needed to make progress or violated expected
> behavior. Without any clue as to where to look it has been hard to
> make progress.
> 
> My main motivation is making Lustre run on rxe and it does and it's 
> fast enough to meet our needs. Lustre is similar to srp as a ULP and 
> in all of our testing we have never seen a similar hang. Other hangs 
> to be sure but not this one. I believe that this bug will never get 
> resolved until someone with a good understanding of the ulp drivers 
> makes an effort to find out where and why the hang is occurring.
> From there it should be straight forward to fix the problem. I am 
> continuing to investigate and am learning the 
> device-manager/multipath/srp/scsi stack but I have a long ways to 
> go.

Why would knowledge of device-manager/multipath/srp/scsi be required to
make progress?

Please start with fixing the KASAN complaint shown below. I think the
root cause of this complaint is in the RDMA/rxe driver. This issue can
be reproduced as follows:
* Build and install Linus' master branch with KASAN enabled (commit
   8018e02a8703 ("Merge tag 'thermal-6.6-rc3' of
   git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm")).
* Install the latest version of blktests and run the following shell
   command:

     export use_rxe=1; while (cd ~bart/software/blktests && ./check -q srp/002); do :; done

   The KASAN complaint should appear during the first run of test
   srp/002.
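
When running that loop unattended, it can help to stop as soon as the splat lands in the
kernel log. A hypothetical wrapper (same `use_rxe` variable and blktests path as in the steps
above; adjust for your checkout, and note `dmesg --follow` may require root):

```shell
# Watch the kernel log for the first KASAN splat; stop the test loop when it appears.
dmesg --follow | grep -m1 'BUG: KASAN' &
watcher=$!
export use_rxe=1
while kill -0 "$watcher" 2>/dev/null; do
    (cd ~bart/software/blktests && ./check -q srp/002) || break
done
wait "$watcher"   # prints the first KASAN line once it fires
```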

Thanks,

Bart.

BUG: KASAN: slab-use-after-free in rxe_comp_queue_pkt+0x3d/0x80 [rdma_rxe]
Read of size 8 at addr ffff888111865928 by task kworker/u18:5/3502

CPU: 1 PID: 3502 Comm: kworker/u18:5 Tainted: G        W          6.6.0-rc2-dbg #3
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.2-3-gd478f380-rebuilt.opensuse.org 04/01/2014
Workqueue: rxe_wq do_work [rdma_rxe]
Call Trace:
  <TASK>
  dump_stack_lvl+0x5c/0xc0
  print_address_description.constprop.0+0x33/0x400
  ? preempt_count_sub+0x18/0xc0
  print_report+0xb6/0x260
  ? kasan_complete_mode_report_info+0x5c/0x190
  kasan_report+0xc6/0x100
  ? rxe_comp_queue_pkt+0x3d/0x80 [rdma_rxe]
  ? rxe_comp_queue_pkt+0x3d/0x80 [rdma_rxe]
  __asan_load8+0x69/0x90
  rxe_comp_queue_pkt+0x3d/0x80 [rdma_rxe]
  rxe_rcv+0x3db/0x400 [rdma_rxe]
  ? rxe_rcv_mcast_pkt+0x500/0x500 [rdma_rxe]
  rxe_xmit_packet+0x224/0x3f0 [rdma_rxe]
  ? rxe_prepare+0x110/0x110 [rdma_rxe]
  ? prepare_ack_packet+0x1cd/0x340 [rdma_rxe]
  send_common_ack.isra.0+0xac/0x140 [rdma_rxe]
  ? prepare_ack_packet+0x340/0x340 [rdma_rxe]
  ? __this_cpu_preempt_check+0x13/0x20
  ? rxe_resp_check_length+0x148/0x2d0 [rdma_rxe]
  rxe_responder+0xe0b/0x1610 [rdma_rxe]
  ? __this_cpu_preempt_check+0x13/0x20
  ? rxe_resp_queue_pkt+0x70/0x70 [rdma_rxe]
  do_task+0xd2/0x350 [rdma_rxe]
  ? lockdep_hardirqs_on+0x7e/0x100
  rxe_run_task+0x8a/0xa0 [rdma_rxe]
  rxe_resp_queue_pkt+0x62/0x70 [rdma_rxe]
  rxe_rcv+0x327/0x400 [rdma_rxe]
  ? rxe_rcv_mcast_pkt+0x500/0x500 [rdma_rxe]
  rxe_xmit_packet+0x224/0x3f0 [rdma_rxe]
  ? rxe_prepare+0x110/0x110 [rdma_rxe]
  rxe_requester+0x6bb/0x13a0 [rdma_rxe]
  ? check_prev_add+0x12c0/0x12c0
  ? rnr_nak_timer+0xd0/0xd0 [rdma_rxe]
  ? __lock_acquire+0x88c/0xf30
  ? __kasan_check_read+0x11/0x20
  ? mark_lock+0xeb/0xa80
  ? mark_lock_irq+0xcd0/0xcd0
  ? __lock_release.isra.0+0x14c/0x280
  ? do_task+0x9f/0x350 [rdma_rxe]
  ? reacquire_held_locks+0x270/0x270
  ? _raw_spin_unlock_irqrestore+0x56/0x80
  ? __this_cpu_preempt_check+0x13/0x20
  ? lockdep_hardirqs_on+0x7e/0x100
  ? rnr_nak_timer+0xd0/0xd0 [rdma_rxe]
  do_task+0xd2/0x350 [rdma_rxe]
  ? __this_cpu_preempt_check+0x13/0x20
  do_work+0xe/0x10 [rdma_rxe]
  process_one_work+0x4af/0x9a0
  ? init_worker_pool+0x350/0x350
  ? assign_work+0xe2/0x120
  worker_thread+0x385/0x680
  ? preempt_count_sub+0x18/0xc0
  ? process_one_work+0x9a0/0x9a0
  kthread+0x1b9/0x200
  ? kthread+0xfd/0x200
  ? kthread_complete_and_exit+0x30/0x30
  ret_from_fork+0x36/0x60
  ? kthread_complete_and_exit+0x30/0x30
  ret_from_fork_asm+0x11/0x20
  </TASK>

Allocated by task 3502:
  kasan_save_stack+0x26/0x50
  kasan_set_track+0x25/0x30
  kasan_save_alloc_info+0x1e/0x30
  __kasan_slab_alloc+0x6a/0x70
  kmem_cache_alloc_node+0x16a/0x3d0
  __alloc_skb+0x1d8/0x250
  rxe_init_packet+0x11a/0x3b0 [rdma_rxe]
  prepare_ack_packet+0x9c/0x340 [rdma_rxe]
  send_common_ack.isra.0+0x95/0x140 [rdma_rxe]
  rxe_responder+0xe0b/0x1610 [rdma_rxe]
  do_task+0xd2/0x350 [rdma_rxe]
  rxe_run_task+0x8a/0xa0 [rdma_rxe]
  rxe_resp_queue_pkt+0x62/0x70 [rdma_rxe]
  rxe_rcv+0x327/0x400 [rdma_rxe]
  rxe_xmit_packet+0x224/0x3f0 [rdma_rxe]
  rxe_requester+0x6bb/0x13a0 [rdma_rxe]
  do_task+0xd2/0x350 [rdma_rxe]
  do_work+0xe/0x10 [rdma_rxe]
  process_one_work+0x4af/0x9a0
  worker_thread+0x385/0x680
  kthread+0x1b9/0x200
  ret_from_fork+0x36/0x60
  ret_from_fork_asm+0x11/0x20

Freed by task 56:
  kasan_save_stack+0x26/0x50
  kasan_set_track+0x25/0x30
  kasan_save_free_info+0x2b/0x40
  ____kasan_slab_free+0x14c/0x1b0
  __kasan_slab_free+0x12/0x20
  kmem_cache_free+0x20a/0x4b0
  kfree_skbmem+0xaa/0xc0
  kfree_skb_reason+0x8e/0xe0
  rxe_completer+0x205/0xfe0 [rdma_rxe]
  do_task+0xd2/0x350 [rdma_rxe]
  do_work+0xe/0x10 [rdma_rxe]
  process_one_work+0x4af/0x9a0
  worker_thread+0x385/0x680
  kthread+0x1b9/0x200
  ret_from_fork+0x36/0x60
  ret_from_fork_asm+0x11/0x20

The buggy address belongs to the object at ffff888111865900
  which belongs to the cache skbuff_head_cache of size 224
The buggy address is located 40 bytes inside of
  freed 224-byte region [ffff888111865900, ffff8881118659e0)

The buggy address belongs to the physical page:
page:00000000c6a967c7 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x111864
head:00000000c6a967c7 order:1 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x2000000000000840(slab|head|node=0|zone=2)
page_type: 0xffffffff()
raw: 2000000000000840 ffff888100274c80 dead000000000122 0000000000000000
raw: 0000000000000000 0000000080190019 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
  ffff888111865800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  ffff888111865880: 00 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc
 >ffff888111865900: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                   ^
  ffff888111865980: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
  ffff888111865a00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-22 18:14                                         ` Bob Pearson
  2023-09-22 22:06                                           ` Bart Van Assche
@ 2023-09-24  1:17                                           ` Rain River
  2023-09-25  4:47                                             ` Daisuke Matsuda (Fujitsu)
  1 sibling, 1 reply; 87+ messages in thread
From: Rain River @ 2023-09-24  1:17 UTC (permalink / raw)
  To: Bob Pearson
  Cc: Zhu Yanjun, Daisuke Matsuda, Jason Gunthorpe, leon,
	Bart Van Assche, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On Sat, Sep 23, 2023 at 2:14 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>
> On 9/21/23 10:10, Zhu Yanjun wrote:
> >
> > 在 2023/9/21 22:39, Bob Pearson 写道:
> >> On 9/21/23 09:23, Rain River wrote:
> >>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
> >>>> On 9/20/23 12:22, Bart Van Assche wrote:
> >>>>> On 9/20/23 10:18, Bob Pearson wrote:
> >>>>>> But I have also seen the same behavior in the siw driver which is
> >>>>>> completely independent.
> >>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
> >>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
> >>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
> >>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
> >>>>> support for rxe tasks")?
> >>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
> >>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
> >>> I made tests on the latest Ubuntu with the latest kernel without the
> >>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> >>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> >>> workqueue support for rxe tasks") is reverted.
> >>> I made blktest tests for about 30 times, this problem does not occur.
> >>>
> >>> So I confirm that without this commit, this hang problem does not
> >>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> >>> workqueue support for rxe tasks").
> >>>
> >>> Nanthan
> >>>
> >>>>> Thanks,
> >>>>>
> >>>>> Bart.
> >>>>
> >> This commit is very important for several reasons. It is needed for the ODP implementation
> >> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
> >> queue implementation scales well with increasing qp number while the tasklet implementation
> >> does not. This is critical for the drivers use in large scale storage applications. So, if
> >> there is a bug in the work queue implementation it needs to be fixed not reverted.
> >>
> >> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
> >> something missing causing it to wait.
> >
> > Hi, Bob
> >
> >
> > You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> >
> > You should be very familiar with this commit.
> >
> > And this commit causes regression.
> >
> > So you should delved into the source code to find the root cause, then fix it.
>
> Zhu,
>
> I have spent tons of time over the months trying to figure out what is happening with blktests.
> As I have mentioned several times I have seen the same exact failure in siw in the past although
> currently that doesn't seem to happen so I had been suspecting that the problem may be in the ULP.
> The challenge is that the blktests represents a huge stack of software much of which I am not
> familiar with. The bug is a hang in layers above the rxe driver and so far no one has been able to
> say with any specificity the rxe driver failed to do something needed to make progress or violated
> expected behavior. Without any clue as to where to look it has been hard to make progress.

Bob

A work queue can sleep. If the work queue sleeps for a long time, packets
will not be delivered to the ULP, and that is why this hang occurs.
It is difficult to handle this sleeping in a work queue, so it would be
better to revert this commit in RXE.
Because the work queue sleeps, the ULP cannot wait that long for the
packets; if packets do not reach the ULPs for a long time, many problems
will occur in the ULPs.

>
> My main motivation is making Lustre run on rxe and it does and it's fast enough to meet our needs.
> Lustre is similar to srp as a ULP and in all of our testing we have never seen a similar hang. Other
> hangs to be sure but not this one. I believe that this bug will never get resolved until someone with
> a good understanding of the ulp drivers makes an effort to find out where and why the hang is occurring.
> From there it should be straight forward to fix the problem. I am continuing to investigate and am learning
> the device-manager/multipath/srp/scsi stack but I have a long ways to go.
>
> Bob
>
>
> >
> >
> > Jason && Leon, please comment on this.
> >
> >
> > Best Regards,
> >
> > Zhu Yanjun
> >
> >>
> >> Bob
>


* RE: [bug report] blktests srp/002 hang
  2023-09-24  1:17                                           ` Rain River
@ 2023-09-25  4:47                                             ` Daisuke Matsuda (Fujitsu)
  2023-09-25 14:31                                               ` Zhu Yanjun
  2023-09-25 15:00                                               ` Bart Van Assche
  0 siblings, 2 replies; 87+ messages in thread
From: Daisuke Matsuda (Fujitsu) @ 2023-09-25  4:47 UTC (permalink / raw)
  To: 'Rain River', Bob Pearson
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Bart Van Assche,
	Shinichiro Kawasaki, RDMA mailing list, linux-scsi

On Sun, Sep 24, 2023 10:18 AM Rain River wrote:
> On Sat, Sep 23, 2023 at 2:14 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
> >
> > On 9/21/23 10:10, Zhu Yanjun wrote:
> > >
> > > 在 2023/9/21 22:39, Bob Pearson 写道:
> > >> On 9/21/23 09:23, Rain River wrote:
> > >>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
> > >>>> On 9/20/23 12:22, Bart Van Assche wrote:
> > >>>>> On 9/20/23 10:18, Bob Pearson wrote:
> > >>>>>> But I have also seen the same behavior in the siw driver which is
> > >>>>>> completely independent.
> > >>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
> > >>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
> > >>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of
> the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
> > >>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
> > >>>>> support for rxe tasks")?
> > >>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
> > >>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
> > >>> I made tests on the latest Ubuntu with the latest kernel without the
> > >>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> > >>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> > >>> workqueue support for rxe tasks") is reverted.
> > >>> I made blktest tests for about 30 times, this problem does not occur.
> > >>>
> > >>> So I confirm that without this commit, this hang problem does not
> > >>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> > >>> workqueue support for rxe tasks").
> > >>>
> > >>> Nanthan
> > >>>
> > >>>>> Thanks,
> > >>>>>
> > >>>>> Bart.
> > >>>>
> > >> This commit is very important for several reasons. It is needed for the ODP implementation
> > >> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
> > >> queue implementation scales well with increasing qp number while the tasklet implementation
> > >> does not. This is critical for the drivers use in large scale storage applications. So, if
> > >> there is a bug in the work queue implementation it needs to be fixed not reverted.
> > >>
> > >> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
> > >> something missing causing it to wait.
> > >
> > > Hi, Bob
> > >
> > >
> > > You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> > >
> > > You should be very familiar with this commit.
> > >
> > > And this commit causes regression.
> > >
> > > So you should delved into the source code to find the root cause, then fix it.
> >
> > Zhu,
> >
> > I have spent tons of time over the months trying to figure out what is happening with blktests.
> > As I have mentioned several times I have seen the same exact failure in siw in the past although
> > currently that doesn't seem to happen so I had been suspecting that the problem may be in the ULP.
> > The challenge is that the blktests represents a huge stack of software much of which I am not
> > familiar with. The bug is a hang in layers above the rxe driver and so far no one has been able to
> > say with any specificity the rxe driver failed to do something needed to make progress or violated
> > expected behavior. Without any clue as to where to look it has been hard to make progress.
> 
> Bob
> 
> Work queue will sleep. If work queue sleep for long time, the packets
> will not be sent to ULP. This is why this hang occurs.

In general a work queue can sleep, but the workload running in the rxe driver
should not sleep, because it was originally running in a tasklet and was
converted to use a work queue. A task can sometimes take longer because of
IRQs, but the same thing can also happen with a tasklet. If there is a
difference between the two, I think it would be the overhead of scheduling
the work queue.

> Difficult to handle this sleep in work queue. It had better revert
> this commit in RXE.

I object to reverting the commit at this stage. As Bob wrote above,
nobody has found any logical failure in the rxe driver. It is quite possible
that the patch is just revealing a latent bug in the higher layers.

> Because work queue sleeps,  ULP can not wait for long time for the
> packets. If packets can not reach ULPs for long time, many problems
> will occur to ULPs.

I wonder where in the rxe driver it sleeps. BTW, most packets are
processed in NET_RX_IRQ context, and the work queue is scheduled only
when there is already a running context. If your speculation is correct,
the hang should occur more frequently if we change it to use the work queue
exclusively. My ODP patches include a change that does this.
Cf. https://lore.kernel.org/lkml/7699a90bc4af10c33c0a46ef6330ed4bb7e7ace6.1694153251.git.matsuda-daisuke@fujitsu.com/

Thanks,
Daisuke

> 
> >
> > My main motivation is making Lustre run on rxe and it does and it's fast enough to meet our needs.
> > Lustre is similar to srp as a ULP and in all of our testing we have never seen a similar hang. Other
> > hangs to be sure but not this one. I believe that this bug will never get resolved until someone with
> > a good understanding of the ulp drivers makes an effort to find out where and why the hang is occurring.
> > From there it should be straight forward to fix the problem. I am continuing to investigate and am learning
> > the device-manager/multipath/srp/scsi stack but I have a long ways to go.
> >
> > Bob
> >
> >
> > >
> > >
> > > Jason && Leon, please comment on this.
> > >
> > >
> > > Best Regards,
> > >
> > > Zhu Yanjun
> > >
> > >>
> > >> Bob
> >


* Re: [bug report] blktests srp/002 hang
  2023-09-25  4:47                                             ` Daisuke Matsuda (Fujitsu)
@ 2023-09-25 14:31                                               ` Zhu Yanjun
  2023-09-26  1:09                                                 ` Daisuke Matsuda (Fujitsu)
  2023-09-25 15:00                                               ` Bart Van Assche
  1 sibling, 1 reply; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-25 14:31 UTC (permalink / raw)
  To: Daisuke Matsuda (Fujitsu), 'Rain River', Bob Pearson
  Cc: Jason Gunthorpe, leon, Bart Van Assche, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi


在 2023/9/25 12:47, Daisuke Matsuda (Fujitsu) 写道:
> On Sun, Sep 24, 2023 10:18 AM Rain River wrote:
>> On Sat, Sep 23, 2023 at 2:14 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>> On 9/21/23 10:10, Zhu Yanjun wrote:
>>>> 在 2023/9/21 22:39, Bob Pearson 写道:
>>>>> On 9/21/23 09:23, Rain River wrote:
>>>>>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>>>>>> On 9/20/23 12:22, Bart Van Assche wrote:
>>>>>>>> On 9/20/23 10:18, Bob Pearson wrote:
>>>>>>>>> But I have also seen the same behavior in the siw driver which is
>>>>>>>>> completely independent.
>>>>>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
>>>>>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>>>>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of
>> the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>>>>>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>>>>>>>> support for rxe tasks")?
>>>>>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
>>>>>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
>>>>>> I made tests on the latest Ubuntu with the latest kernel without the
>>>>>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
>>>>>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>>>>>> workqueue support for rxe tasks") is reverted.
>>>>>> I made blktest tests for about 30 times, this problem does not occur.
>>>>>>
>>>>>> So I confirm that without this commit, this hang problem does not
>>>>>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>>>>>> workqueue support for rxe tasks").
>>>>>>
>>>>>> Nanthan
>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Bart.
>>>>> This commit is very important for several reasons. It is needed for the ODP implementation
>>>>> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
>>>>> queue implementation scales well with increasing qp number while the tasklet implementation
>>>>> does not. This is critical for the drivers use in large scale storage applications. So, if
>>>>> there is a bug in the work queue implementation it needs to be fixed not reverted.
>>>>>
>>>>> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
>>>>> something missing causing it to wait.
>>>> Hi, Bob
>>>>
>>>>
>>>> You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
>>>>
>>>> You should be very familiar with this commit.
>>>>
>>>> And this commit causes regression.
>>>>
>>>> So you should delved into the source code to find the root cause, then fix it.
>>> Zhu,
>>>
>>> I have spent tons of time over the months trying to figure out what is happening with blktests.
>>> As I have mentioned several times I have seen the same exact failure in siw in the past although
>>> currently that doesn't seem to happen so I had been suspecting that the problem may be in the ULP.
>>> The challenge is that the blktests represents a huge stack of software much of which I am not
>>> familiar with. The bug is a hang in layers above the rxe driver and so far no one has been able to
>>> say with any specificity the rxe driver failed to do something needed to make progress or violated
>>> expected behavior. Without any clue as to where to look it has been hard to make progress.
>> Bob
>>
>> Work queue will sleep. If work queue sleep for long time, the packets
>> will not be sent to ULP. This is why this hang occurs.
> In general work queue can sleep, but the workload running in rxe driver
> should not sleep because it was originally running on tasklet and converted
> to use work queue. A task can sometime take longer because of IRQs, but
> the same thing can also happen with tasklet. If there is a difference between
> the two, I think it would be the overhead of scheduring the work queue.
>
>> Difficult to handle this sleep in work queue. It had better revert
>> this commit in RXE.
> I am objected to reverting the commit at this stage. As Bob wrote above,
> nobody has found any logical failure in rxe driver. It is quite possible
> that the patch is just revealing a latent bug in the higher layers.

So far, on Debian and Fedora, all the tests with the work queue hang, and
after reverting this commit no hang occurs.

Until there are new test results, it is reasonable to suspect that this
commit causes the hang.

>
>> Because work queue sleeps,  ULP can not wait for long time for the
>> packets. If packets can not reach ULPs for long time, many problems
>> will occur to ULPs.
> I wonder where in the rxe driver does it sleep. BTW, most packets are
> processed in NET_RX_IRQ context, and work queue is scheduled only

Do you mean NET_RX_SOFTIRQ?

Zhu Yanjun

> when there is already a running context. If your speculation is to the point,
> the hang will occur more frequently if we change it to use work queue exclusively.
> My ODP patches include a change to do this.
> Cf. https://lore.kernel.org/lkml/7699a90bc4af10c33c0a46ef6330ed4bb7e7ace6.1694153251.git.matsuda-daisuke@fujitsu.com/
>
> Thanks,
> Daisuke
>
>>> My main motivation is making Lustre run on rxe and it does and it's fast enough to meet our needs.
>>> Lustre is similar to srp as a ULP and in all of our testing we have never seen a similar hang. Other
>>> hangs to be sure but not this one. I believe that this bug will never get resolved until someone with
>>> a good understanding of the ulp drivers makes an effort to find out where and why the hang is occurring.
>>>  From there it should be straight forward to fix the problem. I am continuing to investigate and am learning
>>> the device-manager/multipath/srp/scsi stack but I have a long ways to go.
>>>
>>> Bob
>>>
>>>
>>>>
>>>> Jason && Leon, please comment on this.
>>>>
>>>>
>>>> Best Regards,
>>>>
>>>> Zhu Yanjun
>>>>
>>>>> Bob


* Re: [bug report] blktests srp/002 hang
  2023-09-25  4:47                                             ` Daisuke Matsuda (Fujitsu)
  2023-09-25 14:31                                               ` Zhu Yanjun
@ 2023-09-25 15:00                                               ` Bart Van Assche
  2023-09-25 15:25                                                 ` Bob Pearson
                                                                   ` (3 more replies)
  1 sibling, 4 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-09-25 15:00 UTC (permalink / raw)
  To: Daisuke Matsuda (Fujitsu), 'Rain River', Bob Pearson
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
> As Bob wrote above, nobody has found any logical failure in rxe
> driver.

That's wrong. In case you would not yet have noticed my latest email in
this thread, please take a look at
https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73. 
I think the report in that email is a 100% proof that there is a 
use-after-free issue in the rdma_rxe driver. Use-after-free issues have 
security implications and also can cause data corruption. I propose to 
revert the commit that introduced the rdma_rxe use-after-free unless 
someone comes up with a fix for the rdma_rxe driver.

Bart.


* Re: [bug report] blktests srp/002 hang
  2023-09-25 15:00                                               ` Bart Van Assche
@ 2023-09-25 15:25                                                 ` Bob Pearson
  2023-09-25 15:52                                                 ` Jason Gunthorpe
                                                                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-09-25 15:25 UTC (permalink / raw)
  To: Bart Van Assche, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On 9/25/23 10:00, Bart Van Assche wrote:
> On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
>> As Bob wrote above, nobody has found any logical failure in rxe
>> driver.
> 
> That's wrong. In case you would not yet have noticed my latest email in
> this thread, please take a look at
> https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73. I think the report in that email is a 100% proof that there is a use-after-free issue in the rdma_rxe driver. Use-after-free issues have security implications and also can cause data corruption. I propose to revert the commit that introduced the rdma_rxe use-after-free unless someone comes up with a fix for the rdma_rxe driver.
> 
> Bart.

Thanks Bart, I missed that. This will give me a better target to try to track this down.

Bob


* Re: [bug report] blktests srp/002 hang
  2023-09-25 15:00                                               ` Bart Van Assche
  2023-09-25 15:25                                                 ` Bob Pearson
@ 2023-09-25 15:52                                                 ` Jason Gunthorpe
  2023-09-25 15:54                                                   ` Bob Pearson
  2023-09-25 19:57                                                 ` Bob Pearson
  2023-09-26  1:17                                                 ` Daisuke Matsuda (Fujitsu)
  3 siblings, 1 reply; 87+ messages in thread
From: Jason Gunthorpe @ 2023-09-25 15:52 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Bob Pearson, Zhu Yanjun, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On Mon, Sep 25, 2023 at 08:00:39AM -0700, Bart Van Assche wrote:
> On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
> > As Bob wrote above, nobody has found any logical failure in rxe
> > driver.
> 
> That's wrong. In case you would not yet have noticed my latest email in
> this thread, please take a look at
> https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73.
> I think the report in that email is a 100% proof that there is a
> use-after-free issue in the rdma_rxe driver. Use-after-free issues have
> security implications and also can cause data corruption. I propose to
> revert the commit that introduced the rdma_rxe use-after-free unless someone
> comes up with a fix for the rdma_rxe driver.

I should say I'm not keen on reverting improvements to rxe. This stuff
needs to happen eventually. Let's please try hard to fix it.

Jason


* Re: [bug report] blktests srp/002 hang
  2023-09-25 15:52                                                 ` Jason Gunthorpe
@ 2023-09-25 15:54                                                   ` Bob Pearson
  0 siblings, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-09-25 15:54 UTC (permalink / raw)
  To: Jason Gunthorpe, Bart Van Assche
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 9/25/23 10:52, Jason Gunthorpe wrote:
> On Mon, Sep 25, 2023 at 08:00:39AM -0700, Bart Van Assche wrote:
>> On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
>>> As Bob wrote above, nobody has found any logical failure in rxe
>>> driver.
>>
>> That's wrong. In case you would not yet have noticed my latest email in
>> this thread, please take a look at
>> https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73.
>> I think the report in that email is a 100% proof that there is a
>> use-after-free issue in the rdma_rxe driver. Use-after-free issues have
>> security implications and also can cause data corruption. I propose to
>> revert the commit that introduced the rdma_rxe use-after-free unless someone
>> comes up with a fix for the rdma_rxe driver.
> 
> I should say I'm not keen on reverting improvements to rxe. This stuff
> needs to happen eventually. Let's please try hard to fix it.
> 
> Jason
I'm digging into Bart's kasan bug. Hope to find something.

Bob


* Re: [bug report] blktests srp/002 hang
  2023-09-25 15:00                                               ` Bart Van Assche
  2023-09-25 15:25                                                 ` Bob Pearson
  2023-09-25 15:52                                                 ` Jason Gunthorpe
@ 2023-09-25 19:57                                                 ` Bob Pearson
  2023-09-25 20:33                                                   ` Bart Van Assche
  2023-09-26 15:36                                                   ` Rain River
  2023-09-26  1:17                                                 ` Daisuke Matsuda (Fujitsu)
  3 siblings, 2 replies; 87+ messages in thread
From: Bob Pearson @ 2023-09-25 19:57 UTC (permalink / raw)
  To: Bart Van Assche, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On 9/25/23 10:00, Bart Van Assche wrote:
> On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
>> As Bob wrote above, nobody has found any logical failure in rxe
>> driver.
> 
> That's wrong. In case you would not yet have noticed my latest email in
> this thread, please take a look at
> https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73. I think the report in that email is a 100% proof that there is a use-after-free issue in the rdma_rxe driver. Use-after-free issues have security implications and also can cause data corruption. I propose to revert the commit that introduced the rdma_rxe use-after-free unless someone comes up with a fix for the rdma_rxe driver.
> 
> Bart.

Bart,

Having trouble following your recipe. The git repo you mention does not seem to be available. E.g.

rpearson:src$ git clone git://git.kernel.org/pub/scm/linux/git/rafael/linux-pm
Cloning into 'linux-pm'...
fatal: remote error: access denied or repository not exported: /pub/scm/linux/git/rafael/linux-pm

I am not sure how to obtain the tag if I cannot see the repo.

If I just try to enable KASAN by setting CONFIG_KASAN=y in .config for the current linux-rdma repo
and compile the kernel, the kernel won't boot and gets caught in some kind of SRSO hell. If I check out
Linus' v6.4 tag and add CONFIG_KASAN=y to a fresh .config file, the kernel builds OK, but when I
try to boot it, it is unable to chroot to the root file system during boot.

Any hints would be appreciated.

Bob



* Re: [bug report] blktests srp/002 hang
  2023-09-25 19:57                                                 ` Bob Pearson
@ 2023-09-25 20:33                                                   ` Bart Van Assche
  2023-09-25 20:40                                                     ` Bob Pearson
  2023-09-26 15:36                                                   ` Rain River
  1 sibling, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-09-25 20:33 UTC (permalink / raw)
  To: Bob Pearson, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

[-- Attachment #1: Type: text/plain, Size: 1390 bytes --]

On 9/25/23 12:57, Bob Pearson wrote:
> Having trouble following your recipe. The git repo you mention does not seem to be available. E.g.
> 
> rpearson:src$ git clone git://git.kernel.org/pub/scm/linux/git/rafael/linux-pm
> Cloning into 'linux-pm'...
> fatal: remote error: access denied or repository not exported: /pub/scm/linux/git/rafael/linux-pm
> 
> I am not sure how to obtain the tag if I cannot see the repo.

As one can see on
https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git/,
".git" is missing from the end of the URL in your git clone command.

I think that you misread my email. In my email I clearly referred to
Linus' master branch. Please try this:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux-kernel
$ cd linux-kernel
$ git checkout 8018e02a8703 -b linus-master

> If I just try to enable KASAN by setting CONFIG_KASAN=y in .config for the current linux-rdma repo
> and compile the kernel the kernel won't boot and is caught in some kind of SRSO hell. If I checkout
> Linus' v6.4 tag and add CONFIG_KASAN=y to a fresh .config file the kernel builds OK but when I
> try to boot it, it is unable to chroot to the root file system in boot.

Please try to run the blktests suite in a VM. I have attached the kernel
configuration to this email with which I observed the KASAN complaint on
my test setup.

Thanks,

Bart.

[-- Attachment #2: vm-kernel-config.txt.gz --]
[-- Type: application/gzip, Size: 29541 bytes --]

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-25 20:33                                                   ` Bart Van Assche
@ 2023-09-25 20:40                                                     ` Bob Pearson
  0 siblings, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-09-25 20:40 UTC (permalink / raw)
  To: Bart Van Assche, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On 9/25/23 15:33, Bart Van Assche wrote:
> On 9/25/23 12:57, Bob Pearson wrote:
>> Having trouble following your recipe. The git repo you mention does not seem to be available. E.g.
>>
>> rpearson:src$ git clone git://git.kernel.org/pub/scm/linux/git/rafael/linux-pm
>> Cloning into 'linux-pm'...
>> fatal: remote error: access denied or repository not exported: /pub/scm/linux/git/rafael/linux-pm
>>
>> I am not sure how to obtain the tag if I cannot see the repo.
> 
> As one can see on
> https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git/,
> ".git" is missing from the end of the URL in your git clone command.
> 
> I think that you misread my email. In my email I clearly referred to
> Linus' master branch. Please try this:
> $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux-kernel
> $ cd linux-kernel
> $ git checkout 8018e02a8703 -b linus-master

what the email said was:

Please start with fixing the KASAN complaint shown below. I think the
root cause of this complaint is in the RDMA/rxe driver. This issue can
be reproduced as follows:
* Build and install Linus' master branch with KASAN enabled (commit
   8018e02a8703 ("Merge tag 'thermal-6.6-rc3' of
   git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm")).

I found the reference to rafael/linux-pm confusing. I also tried with .git; it still didn't work.
Thanks for the clarification.

Bob
> 
>> If I just try to enable KASAN by setting CONFIG_KASAN=y in .config for the current linux-rdma repo
>> and compile the kernel the kernel won't boot and is caught in some kind of SRSO hell. If I checkout
>> Linus' v6.4 tag and add CONFIG_KASAN=y to a fresh .config file the kernel builds OK but when I
>> try to boot it, it is unable to chroot to the root file system in boot.
> 
> Please try to run the blktests suite in a VM. I have attached the kernel
> configuration to this email with which I observed the KASAN complaint on
> my test setup.
> 
> Thanks,
> 
> Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-25 14:31                                               ` Zhu Yanjun
@ 2023-09-26  1:09                                                 ` Daisuke Matsuda (Fujitsu)
  2023-09-26  6:09                                                   ` Zhu Yanjun
  0 siblings, 1 reply; 87+ messages in thread
From: Daisuke Matsuda (Fujitsu) @ 2023-09-26  1:09 UTC (permalink / raw)
  To: 'Zhu Yanjun', 'Rain River', Bob Pearson
  Cc: Jason Gunthorpe, leon, Bart Van Assche, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On Mon, Sep 25, 2023 11:31 PM Zhu Yanjun <yanjun.zhu@linux.dev> wrote:
> 在 2023/9/25 12:47, Daisuke Matsuda (Fujitsu) 写道:
> > On Sun, Sep 24, 2023 10:18 AM Rain River wrote:
> >> On Sat, Sep 23, 2023 at 2:14 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
> >>> On 9/21/23 10:10, Zhu Yanjun wrote:
> >>>> 在 2023/9/21 22:39, Bob Pearson 写道:
> >>>>> On 9/21/23 09:23, Rain River wrote:
> >>>>>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
> >>>>>>> On 9/20/23 12:22, Bart Van Assche wrote:
> >>>>>>>> On 9/20/23 10:18, Bob Pearson wrote:
> >>>>>>>>> But I have also seen the same behavior in the siw driver which is
> >>>>>>>>> completely independent.
> >>>>>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
> >>>>>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
> >>>>>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
> >>>>>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
> >>>>>>>> support for rxe tasks")?
> >>>>>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
> >>>>>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
> >>>>>> I made tests on the latest Ubuntu with the latest kernel without the
> >>>>>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> >>>>>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> >>>>>> workqueue support for rxe tasks") is reverted.
> >>>>>> I made blktest tests for about 30 times, this problem does not occur.
> >>>>>>
> >>>>>> So I confirm that without this commit, this hang problem does not
> >>>>>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
> >>>>>> workqueue support for rxe tasks").
> >>>>>>
> >>>>>> Nanthan
> >>>>>>
> >>>>>>>> Thanks,
> >>>>>>>>
> >>>>>>>> Bart.
> >>>>> This commit is very important for several reasons. It is needed for the ODP implementation
> >>>>> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
> >>>>> queue implementation scales well with increasing qp number while the tasklet implementation
> >>>>> does not. This is critical for the driver's use in large scale storage applications. So, if
> >>>>> there is a bug in the work queue implementation it needs to be fixed not reverted.
> >>>>>
> >>>>> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
> >>>>> something missing causing it to wait.
> >>>> Hi, Bob
> >>>>
> >>>>
> >>>> You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
> >>>>
> >>>> You should be very familiar with this commit.
> >>>>
> >>>> And this commit causes regression.
> >>>>
> >>>> So you should delve into the source code to find the root cause, then fix it.
> >>> Zhu,
> >>>
> >>> I have spent tons of time over the months trying to figure out what is happening with blktests.
> >>> As I have mentioned several times I have seen the same exact failure in siw in the past although
> >>> currently that doesn't seem to happen so I had been suspecting that the problem may be in the ULP.
> >>> The challenge is that the blktests represents a huge stack of software much of which I am not
> >>> familiar with. The bug is a hang in layers above the rxe driver and so far no one has been able to
> >>> say with any specificity the rxe driver failed to do something needed to make progress or violated
> >>> expected behavior. Without any clue as to where to look it has been hard to make progress.
> >> Bob
> >>
> >> Work queue will sleep. If work queue sleep for long time, the packets
> >> will not be sent to ULP. This is why this hang occurs.
> > In general work queue can sleep, but the workload running in rxe driver
> > should not sleep because it was originally running on tasklet and converted
> > to use work queue. A task can sometimes take longer because of IRQs, but
> > the same thing can also happen with a tasklet. If there is a difference between
> > the two, I think it would be the overhead of scheduling the work queue.
> >
> >> Difficult to handle this sleep in work queue. It had better revert
> >> this commit in RXE.
> > I object to reverting the commit at this stage. As Bob wrote above,
> > nobody has found any logical failure in rxe driver. It is quite possible
> > that the patch is just revealing a latent bug in the higher layers.
> 
> To now, on Debian and Fedora, all the tests with work queue will hang.
> And after reverting this commit,
> 
> no hang will occur.
> 
> Before new test results, it is a reasonable suspect that this commit
> will result in the hang.

If the hang *always* occurs, then I agree your opinion is correct,
but this one happens occasionally. It is also natural to think that
the commit makes it easier to meet the condition of an existing bug.

> 
> >
> >> Because work queue sleeps,  ULP can not wait for long time for the
> >> packets. If packets can not reach ULPs for long time, many problems
> >> will occur to ULPs.
> > I wonder where in the rxe driver it sleeps. BTW, most packets are
> > processed in NET_RX_IRQ context, and work queue is scheduled only
> 
> Do you mean NET_RX_SOFTIRQ?

Yes. I am sorry for confusing you.

Thanks,
Daisuke

> 
> Zhu Yanjun
> 
> > when there is already a running context. If your speculation is to the point,
> > the hang will occur more frequently if we change it to use work queue exclusively.
> > My ODP patches include a change to do this.
> > Cf.
> > https://lore.kernel.org/lkml/7699a90bc4af10c33c0a46ef6330ed4bb7e7ace6.1694153251.git.matsuda-daisuke@fujitsu.com/
> >
> > Thanks,
> > Daisuke
> >
> >>> My main motivation is making Lustre run on rxe and it does and it's fast enough to meet our needs.
> >>> Lustre is similar to srp as a ULP and in all of our testing we have never seen a similar hang. Other
> >>> hangs to be sure but not this one. I believe that this bug will never get resolved until someone with
> >>> a good understanding of the ulp drivers makes an effort to find out where and why the hang is occurring.
> >>>  From there it should be straight forward to fix the problem. I am continuing to investigate and am learning
> >>> the device-manager/multipath/srp/scsi stack but I have a long ways to go.
> >>>
> >>> Bob
> >>>
> >>>
> >>>>
> >>>> Jason && Leon, please comment on this.
> >>>>
> >>>>
> >>>> Best Regards,
> >>>>
> >>>> Zhu Yanjun
> >>>>
> >>>>> Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-25 15:00                                               ` Bart Van Assche
                                                                   ` (2 preceding siblings ...)
  2023-09-25 19:57                                                 ` Bob Pearson
@ 2023-09-26  1:17                                                 ` Daisuke Matsuda (Fujitsu)
  2023-10-17 17:09                                                   ` Bob Pearson
  3 siblings, 1 reply; 87+ messages in thread
From: Daisuke Matsuda (Fujitsu) @ 2023-09-26  1:17 UTC (permalink / raw)
  To: 'Bart Van Assche', 'Rain River', Bob Pearson
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On Tue, Sep 26, 2023 12:01 AM Bart Van Assche:
> On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
> > As Bob wrote above, nobody has found any logical failure in rxe
> > driver.
> 
> That's wrong. In case you would not yet have noticed my latest email in
> this thread, please take a look at
> https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73.
> I think the report in that email is a 100% proof that there is a
> use-after-free issue in the rdma_rxe driver. Use-after-free issues have
> security implications and also can cause data corruption. I propose to
> revert the commit that introduced the rdma_rxe use-after-free unless
> someone comes up with a fix for the rdma_rxe driver.
> 
> Bart.

Thank you for the clarification. I see your intention.
I hope the hang issue will be resolved by addressing this.

Thanks,
Daisuke


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-26  1:09                                                 ` Daisuke Matsuda (Fujitsu)
@ 2023-09-26  6:09                                                   ` Zhu Yanjun
  0 siblings, 0 replies; 87+ messages in thread
From: Zhu Yanjun @ 2023-09-26  6:09 UTC (permalink / raw)
  To: Daisuke Matsuda (Fujitsu), 'Rain River', Bob Pearson
  Cc: Jason Gunthorpe, leon, Bart Van Assche, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

在 2023/9/26 9:09, Daisuke Matsuda (Fujitsu) 写道:
> On Mon, Sep 25, 2023 11:31 PM Zhu Yanjun <yanjun.zhu@linux.dev> wrote:
>> 在 2023/9/25 12:47, Daisuke Matsuda (Fujitsu) 写道:
>>> On Sun, Sep 24, 2023 10:18 AM Rain River wrote:
>>>> On Sat, Sep 23, 2023 at 2:14 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>>>> On 9/21/23 10:10, Zhu Yanjun wrote:
>>>>>> 在 2023/9/21 22:39, Bob Pearson 写道:
>>>>>>> On 9/21/23 09:23, Rain River wrote:
>>>>>>>> On Thu, Sep 21, 2023 at 2:53 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>>>>>>>> On 9/20/23 12:22, Bart Van Assche wrote:
>>>>>>>>>> On 9/20/23 10:18, Bob Pearson wrote:
>>>>>>>>>>> But I have also seen the same behavior in the siw driver which is
>>>>>>>>>>> completely independent.
>>>>>>>>>> Hmm ... I haven't seen any hangs yet with the siw driver.
>>>>>>>>> I was on Ubuntu 6-9 months ago. Currently I don't see hangs on either.
>>>>>>>>>>> As mentioned above at the moment Ubuntu is failing rarely. But it used to fail reliably (srp/002 about 75% of the time and srp/011 about 99% of the time.) There haven't been any changes to rxe to explain this.
>>>>>>>>>> I think that Zhu mentioned commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue
>>>>>>>>>> support for rxe tasks")?
>>>>>>>>> That change happened well before the failures went away. I was seeing failures at the same rate with tasklets
>>>>>>>>> and wqs. But after updating Ubuntu and the kernel at some point they all went away.
>>>>>>>> I made tests on the latest Ubuntu with the latest kernel without the
>>>>>>>> commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
>>>>>>>> The latest kernel is v6.6-rc2, the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>>>>>>>> workqueue support for rxe tasks") is reverted.
>>>>>>>> I made blktest tests for about 30 times, this problem does not occur.
>>>>>>>>
>>>>>>>> So I confirm that without this commit, this hang problem does not
>>>>>>>> occur on Ubuntu without the commit 9b4b7c1f9f54 ("RDMA/rxe: Add
>>>>>>>> workqueue support for rxe tasks").
>>>>>>>>
>>>>>>>> Nanthan
>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> Bart.
>>>>>>> This commit is very important for several reasons. It is needed for the ODP implementation
>>>>>>> that is in the works from Daisuke Matsuda and also for QP scaling of performance. The work
>>>>>>> queue implementation scales well with increasing qp number while the tasklet implementation
>>>>>>> does not. This is critical for the driver's use in large scale storage applications. So, if
>>>>>>> there is a bug in the work queue implementation it needs to be fixed not reverted.
>>>>>>>
>>>>>>> I am still hoping that someone will diagnose what is causing the ULPs to hang in terms of
>>>>>>> something missing causing it to wait.
>>>>>> Hi, Bob
>>>>>>
>>>>>>
>>>>>> You submitted this commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support for rxe tasks").
>>>>>>
>>>>>> You should be very familiar with this commit.
>>>>>>
>>>>>> And this commit causes regression.
>>>>>>
>>>>>> So you should delve into the source code to find the root cause, then fix it.
>>>>> Zhu,
>>>>>
>>>>> I have spent tons of time over the months trying to figure out what is happening with blktests.
>>>>> As I have mentioned several times I have seen the same exact failure in siw in the past although
>>>>> currently that doesn't seem to happen so I had been suspecting that the problem may be in the ULP.
>>>>> The challenge is that the blktests represents a huge stack of software much of which I am not
>>>>> familiar with. The bug is a hang in layers above the rxe driver and so far no one has been able to
>>>>> say with any specificity the rxe driver failed to do something needed to make progress or violated
>>>>> expected behavior. Without any clue as to where to look it has been hard to make progress.
>>>> Bob
>>>>
>>>> Work queue will sleep. If work queue sleep for long time, the packets
>>>> will not be sent to ULP. This is why this hang occurs.
>>> In general work queue can sleep, but the workload running in rxe driver
>>> should not sleep because it was originally running on tasklet and converted
>>> to use work queue. A task can sometimes take longer because of IRQs, but
>>> the same thing can also happen with a tasklet. If there is a difference between
>>> the two, I think it would be the overhead of scheduling the work queue.
>>>
>>>> Difficult to handle this sleep in work queue. It had better revert
>>>> this commit in RXE.
>>> I object to reverting the commit at this stage. As Bob wrote above,
>>> nobody has found any logical failure in rxe driver. It is quite possible
>>> that the patch is just revealing a latent bug in the higher layers.
>>
>> To now, on Debian and Fedora, all the tests with work queue will hang.
>> And after reverting this commit,
>>
>> no hang will occur.
>>
>> Before new test results, it is a reasonable suspect that this commit
>> will result in the hang.
> 
> If the hang *always* occurs, then I agree your opinion is correct,

Regarding the hang tests, please read through the whole discussion. Several 
engineers ran tests on Debian, Fedora and Ubuntu and confirmed these test 
results.

Zhu Yanjun

> but this one happens occasionally. It is also natural to think that
> the commit makes it easier to meet the condition of an existing bug.
> 
>>
>>>
>>>> Because work queue sleeps,  ULP can not wait for long time for the
>>>> packets. If packets can not reach ULPs for long time, many problems
>>>> will occur to ULPs.
>>> I wonder where in the rxe driver it sleeps. BTW, most packets are
>>> processed in NET_RX_IRQ context, and work queue is scheduled only
>>
>> Do you mean NET_RX_SOFTIRQ?
> 
> Yes. I am sorry for confusing you.
> 
> Thanks,
> Daisuke
> 
>>
>> Zhu Yanjun
>>
>>> when there is already a running context. If your speculation is to the point,
>>> the hang will occur more frequently if we change it to use work queue exclusively.
>>> My ODP patches include a change to do this.
>>> Cf.
>>> https://lore.kernel.org/lkml/7699a90bc4af10c33c0a46ef6330ed4bb7e7ace6.1694153251.git.matsuda-daisuke@fujitsu.com/
>>>
>>> Thanks,
>>> Daisuke
>>>
>>>>> My main motivation is making Lustre run on rxe and it does and it's fast enough to meet our needs.
>>>>> Lustre is similar to srp as a ULP and in all of our testing we have never seen a similar hang. Other
>>>>> hangs to be sure but not this one. I believe that this bug will never get resolved until someone with
>>>>> a good understanding of the ulp drivers makes an effort to find out where and why the hang is occurring.
>>>>>   From there it should be straight forward to fix the problem. I am continuing to investigate and am learning
>>>>> the device-manager/multipath/srp/scsi stack but I have a long ways to go.
>>>>>
>>>>> Bob
>>>>>
>>>>>
>>>>>>
>>>>>> Jason && Leon, please comment on this.
>>>>>>
>>>>>>
>>>>>> Best Regards,
>>>>>>
>>>>>> Zhu Yanjun
>>>>>>
>>>>>>> Bob


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-25 19:57                                                 ` Bob Pearson
  2023-09-25 20:33                                                   ` Bart Van Assche
@ 2023-09-26 15:36                                                   ` Rain River
  1 sibling, 0 replies; 87+ messages in thread
From: Rain River @ 2023-09-26 15:36 UTC (permalink / raw)
  To: Bob Pearson
  Cc: Bart Van Assche, Daisuke Matsuda (Fujitsu),
	Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On Tue, Sep 26, 2023 at 3:57 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>
> On 9/25/23 10:00, Bart Van Assche wrote:
> > On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
> >> As Bob wrote above, nobody has found any logical failure in rxe
> >> driver.
> >
> > That's wrong. In case you would not yet have noticed my latest email in
> > this thread, please take a look at
> > https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73. I think the report in that email is a 100% proof that there is a use-after-free issue in the rdma_rxe driver. Use-after-free issues have security implications and also can cause data corruption. I propose to revert the commit that introduced the rdma_rxe use-after-free unless someone comes up with a fix for the rdma_rxe driver.
> >
> > Bart.
>
> Bart,
>
> Having trouble following your recipe. The git repo you mention does not seem to be available. E.g.
>
> rpearson:src$ git clone git://git.kernel.org/pub/scm/linux/git/rafael/linux-pm
> Cloning into 'linux-pm'...
> fatal: remote error: access denied or repository not exported: /pub/scm/linux/git/rafael/linux-pm
>
> I am not sure how to obtain the tag if I cannot see the repo.
>
> If I just try to enable KASAN by setting CONFIG_KASAN=y in .config for the current linux-rdma repo
> and compile the kernel the kernel won't boot and is caught in some kind of SRSO hell. If I checkout
> Linus' v6.4 tag and add CONFIG_KASAN=y to a fresh .config file the kernel builds OK but when I
> try to boot it, it is unable to chroot to the root file system in boot.

Bob,

At the suggestion of a friend who is an expert in process scheduling and
workqueues, I made a test as below.
On each CPU, a CPU-intensive process runs at high priority. Then, running
rxe with the commit, rping can barely work.
Without this commit, rping works with rxe in the same scenario.
Please consider the above when you fix this problem.
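
[Editorial note: the starvation setup described above can be sketched roughly as follows. This is a hedged, unprivileged sketch: the original test presumably pinned each hog with taskset and raised its priority with chrt (which needs root); both are left out here, and the rping invocation is only indicated in a comment.]

```shell
# One short-lived CPU hog per CPU, so workqueue workers are starved of
# CPU time while the hogs run.
ncpu=$(nproc)
i=1
while [ "$i" -le "$ncpu" ]; do
	timeout 1 sh -c 'while :; do :; done' &
	i=$((i + 1))
done
# While the hogs run, an rping client against an rxe device would be the
# actual probe, e.g.: rping -c -a <server-ip> -C 10
wait
echo "spawned $ncpu busy loops"
```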

>
> Any hints would be appreciated.
>
> Bob
>

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-22 11:06 ` Linux regression tracking #adding (Thorsten Leemhuis)
@ 2023-10-13 12:51   ` Linux regression tracking #update (Thorsten Leemhuis)
  0 siblings, 0 replies; 87+ messages in thread
From: Linux regression tracking #update (Thorsten Leemhuis) @ 2023-10-13 12:51 UTC (permalink / raw)
  To: linux-rdma, linux-scsi; +Cc: Linux kernel regressions list

[TLDR: This mail is primarily relevant for Linux kernel regression
tracking. See link in footer if these mails annoy you.]

On 22.09.23 13:06, Linux regression tracking #adding (Thorsten Leemhuis)
wrote:
> On 21.08.23 08:46, Shinichiro Kawasaki wrote:
>> I observed a process hang at the blktests test case srp/002 occasionally, using
>> kernel v6.5-rcX. Kernel reported stall of many kworkers [1]. PID 2757 hanged at
>> inode_sleep_on_writeback(). Other kworkers hanged at __inode_wait_for_writeback.
>>
>> The hang is recreated in stable manner by repeating the test case srp/002 (from
>> 15 times to 30 times).
>>
>> I bisected and found the commit 9b4b7c1f9f54 ("RDMA/rxe: Add workqueue support
>> for rxe tasks") looks like the trigger commit. When I revert it from the kernel
>> v6.5-rc7, the hang symptom disappears. I'm not sure how the commit relates to
>> the hang. Comments will be welcomed.
>> […]
> 
> Thanks for the report. To be sure the issue doesn't fall through the
> cracks unnoticed, I'm adding it to regzbot, the Linux kernel regression
> tracking bot:
> 
> #regzbot ^introduced 9b4b7c1f9f54
> #regzbot title RDMA/rxe: occasionally process hang at the blktests test
> case srp/002
> #regzbot ignore-activity

#regzbot monitor:
https://lore.kernel.org/all/20230922163231.2237811-1-yanjun.zhu@intel.com/
#regzbot ignore-activity

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
That page also explains what to do if mails like this annoy you.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-09-26  1:17                                                 ` Daisuke Matsuda (Fujitsu)
@ 2023-10-17 17:09                                                   ` Bob Pearson
  2023-10-17 17:13                                                     ` Bart Van Assche
                                                                       ` (2 more replies)
  0 siblings, 3 replies; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 17:09 UTC (permalink / raw)
  To: Daisuke Matsuda (Fujitsu), 'Bart Van Assche',
	'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

[-- Attachment #1: Type: text/plain, Size: 2793 bytes --]

On 9/25/23 20:17, Daisuke Matsuda (Fujitsu) wrote:
> On Tue, Sep 26, 2023 12:01 AM Bart Van Assche:
>> On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
>>> As Bob wrote above, nobody has found any logical failure in rxe
>>> driver.
>>
>> That's wrong. In case you would not yet have noticed my latest email in
>> this thread, please take a look at
>> https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37b042ae4f8e9b024f1871a73.
>> I think the report in that email is a 100% proof that there is a
>> use-after-free issue in the rdma_rxe driver. Use-after-free issues have
>> security implications and also can cause data corruption. I propose to
>> revert the commit that introduced the rdma_rxe use-after-free unless
>> someone comes up with a fix for the rdma_rxe driver.
>>
>> Bart.
> 
> Thank you for the clarification. I see your intention.
> I hope the hang issue will be resolved by addressing this.
> 
> Thanks,
> Daisuke
> 

I have made some progress in understanding the cause of the srp/002 etc. hang.

The two attached files are traces of activity for two qp's, qp#151 and qp#167. In my runs of srp/002,
all the qp's before qp#167 pass and all qp's from qp#167 on fail; qp#167 is the first to fail.

It turns out that all the passing qp's call srp_post_send() some number of times and call
srp_send_done() the same number of times. Starting at qp#167, the last call to srp_send_done() does
not take place, leaving the srp driver waiting for the final completion and, I believe, causing the hang.

There are four cq's involved in each pair of qp's in the srp test: two in ib_srp and two in ib_srpt,
for the two qp's. Three of them execute completion processing in a soft-irq context, so the code in
core/cq.c gathers the completions and calls back into the srp drivers. The send-side cq in ib_srp uses
direct polling, which requires srp to call ib_process_cq_direct() in order to collect the completions.
This happens in __srp_get_tx_iu(), which is called in several places in the srp driver, but only as a
side effect, since the purpose of that routine is to get an iu to start a new command.

In the attached file for qp#151, the final call to srp_post_send() is followed by the rxe requester and
completer work queues processing the send packet and the ack, before a final call to __srp_get_tx_iu(),
which gathers the final send-side completion: success.

For qp#167, the call to srp_post_send() is followed by the rxe driver processing the send operation and
generating a work completion, which is posted to the send cq, but there is never a following call to
__srp_get_tx_iu(), so the cqe is never received by srp: failure.

I don't yet understand the logic of the srp driver well enough to fix this, but as far as I can tell
the problem is not in the rxe driver.

Bob

[-- Attachment #2: out151 --]
[-- Type: text/plain, Size: 16249 bytes --]

[  184.877132] qp#151: create_qp
[  184.892362] qp#151: modify_qp: INIT
[  184.892385] qp#151: modify_qp: RTR
[  184.892390] qp#151: modify_qp: RTS
[  184.892722] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  184.893208] ib_srp: qp#151: post-recv:
[  184.893212] ib_srp: qp#151: post-recv:
[  184.893215] ib_srp: qp#151: post-recv:
[  184.893218] ib_srp: qp#151: post-recv:
[  184.893220] ib_srp: qp#151: post-recv:
[  184.893223] ib_srp: qp#151: post-recv:
[  184.893226] ib_srp: qp#151: post-recv:
[  184.893228] ib_srp: qp#151: post-recv:
[  184.893231] ib_srp: qp#151: post-recv:
[  184.893234] ib_srp: qp#151: post-recv:
[  184.893236] ib_srp: qp#151: post-recv:
[  184.893239] ib_srp: qp#151: post-recv:
[  184.893242] ib_srp: qp#151: post-recv:
[  184.893244] ib_srp: qp#151: post-recv:
[  184.893247] ib_srp: qp#151: post-recv:
[  184.893249] ib_srp: qp#151: post-recv:
[  184.893252] ib_srp: qp#151: post-recv:
[  184.893255] ib_srp: qp#151: post-recv:
[  184.893257] ib_srp: qp#151: post-recv:
[  184.893260] ib_srp: qp#151: post-recv:
[  184.893263] ib_srp: qp#151: post-recv:
[  184.893265] ib_srp: qp#151: post-recv:
[  184.893268] ib_srp: qp#151: post-recv:
[  184.893270] ib_srp: qp#151: post-recv:
[  184.893273] ib_srp: qp#151: post-recv:
[  184.893276] ib_srp: qp#151: post-recv:
[  184.893278] ib_srp: qp#151: post-recv:
[  184.893281] ib_srp: qp#151: post-recv:
[  184.893284] ib_srp: qp#151: post-recv:
[  184.893286] ib_srp: qp#151: post-recv:
[  184.893289] ib_srp: qp#151: post-recv:
[  184.893291] ib_srp: qp#151: post-recv:
[  184.893294] ib_srp: qp#151: post-recv:
[  184.893297] ib_srp: qp#151: post-recv:
[  184.893299] ib_srp: qp#151: post-recv:
[  184.893302] ib_srp: qp#151: post-recv:
[  184.893304] ib_srp: qp#151: post-recv:
[  184.893307] ib_srp: qp#151: post-recv:
[  184.893310] ib_srp: qp#151: post-recv:
[  184.893312] ib_srp: qp#151: post-recv:
[  184.893315] ib_srp: qp#151: post-recv:
[  184.893318] ib_srp: qp#151: post-recv:
[  184.893320] ib_srp: qp#151: post-recv:
[  184.893323] ib_srp: qp#151: post-recv:
[  184.893325] ib_srp: qp#151: post-recv:
[  184.893328] ib_srp: qp#151: post-recv:
[  184.893331] ib_srp: qp#151: post-recv:
[  184.893333] ib_srp: qp#151: post-recv:
[  184.893336] ib_srp: qp#151: post-recv:
[  184.893339] ib_srp: qp#151: post-recv:
[  184.893341] ib_srp: qp#151: post-recv:
[  184.893344] ib_srp: qp#151: post-recv:
[  184.893346] ib_srp: qp#151: post-recv:
[  184.893349] ib_srp: qp#151: post-recv:
[  184.893352] ib_srp: qp#151: post-recv:
[  184.893354] ib_srp: qp#151: post-recv:
[  184.893357] ib_srp: qp#151: post-recv:
[  184.893360] ib_srp: qp#151: post-recv:
[  184.893362] ib_srp: qp#151: post-recv:
[  184.893365] ib_srp: qp#151: post-recv:
[  184.893367] ib_srp: qp#151: post-recv:
[  184.893370] ib_srp: qp#151: post-recv:
[  184.893373] ib_srp: qp#151: post-recv:
[  184.893375] ib_srp: qp#151: post-recv:
[  185.127720] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.127760] ib_srp: qp#151: post-reg_mr: 0x207820
[  185.127767] ib_srp: qp#151: post-send:
[  185.127792] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.127805] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.127984] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.127996] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.128182] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.128232] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.128241] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.128254] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.128302] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.128317] ib_srp: qp#151: post-inv_rkey: 0x207820
[  185.128323] ib_srp: qp#151: post-recv:
[  185.128336] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.128388] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.128409] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.128439] ib_srp: qp#151: post-reg_mr: 0x207821
[  185.128446] ib_srp: qp#151: post-send:
[  185.128446] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.128459] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.128548] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.128556] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.128692] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.128736] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.128745] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.128756] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.128769] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.128788] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.128810] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.128821] ib_srp: qp#151: post-reg_mr: 0x20798f
[  185.128821] ib_srp: qp#151: post-inv_rkey: 0x207821
[  185.128828] ib_srp: qp#151: post-send:
[  185.128844] ib_srp: qp#151: post-recv:
[  185.128868] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.128883] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.128891] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.128908] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.128917] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.128921] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.128932] ib_srp: qp#151: post-reg_mr: 0x207822
[  185.128939] ib_srp: qp#151: post-send:
[  185.128946] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.128960] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.129019] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.129026] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.129063] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.129070] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.129285] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.129332] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.129341] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.129352] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.129380] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.129399] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.129404] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.129438] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.129448] ib_srp: qp#151: post-inv_rkey: 0x20798f
[  185.129455] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.129461] ib_srp: qp#151: post-recv:
[  185.129465] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.129471] ib_srp: qp#151: post-inv_rkey: 0x207822
[  185.129475] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.129491] ib_srp: qp#151: post-recv:
[  185.129505] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.129530] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.129535] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.129560] ib_srp: qp#151: post-reg_mr: 0x207823
[  185.129568] ib_srp: qp#151: post-send:
[  185.129570] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.129587] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.129664] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.129672] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.129841] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.129908] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.129917] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.129929] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.129995] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.130010] ib_srp: qp#151: post-inv_rkey: 0x207823
[  185.130034] ib_srp: qp#151: post-recv:
[  185.130035] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.134544] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.134566] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.134593] ib_srp: qp#151: post-reg_mr: 0x207824
[  185.134598] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.134599] ib_srp: qp#151: post-send:
[  185.134618] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.134701] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.134712] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.134845] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.134882] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.134890] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.134898] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.134936] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.134947] ib_srp: qp#151: post-inv_rkey: 0x207824
[  185.134956] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.134961] ib_srp: qp#151: post-recv:
[  185.135496] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.135518] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.135545] ib_srp: qp#151: post-reg_mr: 0x207825
[  185.135552] ib_srp: qp#151: post-send:
[  185.135554] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.135573] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.135678] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.135688] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.135865] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.135913] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.135921] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.135933] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.135994] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.136010] ib_srp: qp#151: post-inv_rkey: 0x207825
[  185.136014] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.136028] ib_srp: qp#151: post-recv:
[  185.141233] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.141260] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.141292] ib_srp: qp#151: post-reg_mr: 0x207826
[  185.141301] ib_srp: qp#151: post-send:
[  185.141302] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.141319] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.141424] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.141428] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.141600] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.141648] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.141663] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.141675] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.141738] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.141754] ib_srp: qp#151: post-inv_rkey: 0x207826
[  185.141762] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.141772] ib_srp: qp#151: post-recv:
[  185.141820] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.141842] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.141882] ib_srp: qp#151: post-reg_mr: 0x207827
[  185.141889] ib_srp: qp#151: post-send:
[  185.141893] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.141909] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.142020] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.142027] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.142199] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.142246] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.142256] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.142271] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.142337] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.142354] ib_srp: qp#151: post-inv_rkey: 0x207827
[  185.142362] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.142370] ib_srp: qp#151: post-recv:
[  185.142463] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.142486] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.142514] ib_srp: qp#151: post-reg_mr: 0x207828
[  185.142523] ib_srp: qp#151: post-send:
[  185.142523] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.142541] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.142657] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.142664] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.142832] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.142880] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.142891] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.142902] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.142962] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.142978] ib_srp: qp#151: post-inv_rkey: 0x207828
[  185.142985] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.142992] ib_srp: qp#151: post-recv:
[  185.143041] ib_srp: qp#151: __srp_get_tx_iu to ib_process_cq_direct
[  185.143062] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  185.143087] ib_srp: qp#151: post-reg_mr: 0x207829
[  185.143093] ib_srp: qp#151: post-send:
[  185.143095] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  185.143111] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  185.143215] enp6s0_rxe: qp#151 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  185.143220] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 0, len: 64
[  185.143393] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  185.143441] enp6s0_rxe: qp#151 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  185.143450] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 128, status: 0, len: 36
[  185.143462] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified
[  185.143530] ib_srp: qp#151: recv-done: opcode: 128 status: 0: len: 36
[  185.143547] ib_srp: qp#151: post-inv_rkey: 0x207829
[  185.143556] enp6s0_rxe: qp#151 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  185.143566] ib_srp: qp#151: post-recv:
[  192.873115] ib_srp: qp#151: srp_destroy_qp to ib_process_cq_direct
[  192.873142] ib_srp: qp#151: send-done: opcode: 0 status: 0: len: 64
[  192.873226] enp6s0_rxe: qp#151 rxe_cq_post: cq#161 opcode: 0, status: 5, len: 0
[  192.973687] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 opcode: 0, status: 5, len: 0
[  192.973701] enp6s0_rxe: qp#151 rxe_cq_post: cq#160 notified

[-- Attachment #3: out167 --]
[-- Type: text/plain, Size: 13466 bytes --]

[  195.843870] qp#167: create_qp
[  195.858393] qp#167: modify_qp: INIT
[  195.858402] qp#167: modify_qp: RTR
[  195.858406] qp#167: modify_qp: RTS
[  195.858656] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  195.859199] ib_srp: qp#167: post-recv:
[  195.859203] ib_srp: qp#167: post-recv:
[  195.859205] ib_srp: qp#167: post-recv:
[  195.859208] ib_srp: qp#167: post-recv:
[  195.859210] ib_srp: qp#167: post-recv:
[  195.859213] ib_srp: qp#167: post-recv:
[  195.859216] ib_srp: qp#167: post-recv:
[  195.859218] ib_srp: qp#167: post-recv:
[  195.859221] ib_srp: qp#167: post-recv:
[  195.859224] ib_srp: qp#167: post-recv:
[  195.859226] ib_srp: qp#167: post-recv:
[  195.859229] ib_srp: qp#167: post-recv:
[  195.859231] ib_srp: qp#167: post-recv:
[  195.859234] ib_srp: qp#167: post-recv:
[  195.859237] ib_srp: qp#167: post-recv:
[  195.859239] ib_srp: qp#167: post-recv:
[  195.859242] ib_srp: qp#167: post-recv:
[  195.859244] ib_srp: qp#167: post-recv:
[  195.859247] ib_srp: qp#167: post-recv:
[  195.859250] ib_srp: qp#167: post-recv:
[  195.859253] ib_srp: qp#167: post-recv:
[  195.859255] ib_srp: qp#167: post-recv:
[  195.859258] ib_srp: qp#167: post-recv:
[  195.859260] ib_srp: qp#167: post-recv:
[  195.859263] ib_srp: qp#167: post-recv:
[  195.859266] ib_srp: qp#167: post-recv:
[  195.859268] ib_srp: qp#167: post-recv:
[  195.859271] ib_srp: qp#167: post-recv:
[  195.859274] ib_srp: qp#167: post-recv:
[  195.859276] ib_srp: qp#167: post-recv:
[  195.859279] ib_srp: qp#167: post-recv:
[  195.859281] ib_srp: qp#167: post-recv:
[  195.859284] ib_srp: qp#167: post-recv:
[  195.859287] ib_srp: qp#167: post-recv:
[  195.859289] ib_srp: qp#167: post-recv:
[  195.859292] ib_srp: qp#167: post-recv:
[  195.859294] ib_srp: qp#167: post-recv:
[  195.859297] ib_srp: qp#167: post-recv:
[  195.859300] ib_srp: qp#167: post-recv:
[  195.859303] ib_srp: qp#167: post-recv:
[  195.859306] ib_srp: qp#167: post-recv:
[  195.859308] ib_srp: qp#167: post-recv:
[  195.859311] ib_srp: qp#167: post-recv:
[  195.859313] ib_srp: qp#167: post-recv:
[  195.859316] ib_srp: qp#167: post-recv:
[  195.859319] ib_srp: qp#167: post-recv:
[  195.859321] ib_srp: qp#167: post-recv:
[  195.859324] ib_srp: qp#167: post-recv:
[  195.859326] ib_srp: qp#167: post-recv:
[  195.859329] ib_srp: qp#167: post-recv:
[  195.859332] ib_srp: qp#167: post-recv:
[  195.859334] ib_srp: qp#167: post-recv:
[  195.859337] ib_srp: qp#167: post-recv:
[  195.859339] ib_srp: qp#167: post-recv:
[  195.859342] ib_srp: qp#167: post-recv:
[  195.859345] ib_srp: qp#167: post-recv:
[  195.859348] ib_srp: qp#167: post-recv:
[  195.859350] ib_srp: qp#167: post-recv:
[  195.859353] ib_srp: qp#167: post-recv:
[  195.859356] ib_srp: qp#167: post-recv:
[  195.859358] ib_srp: qp#167: post-recv:
[  195.859361] ib_srp: qp#167: post-recv:
[  195.859364] ib_srp: qp#167: post-recv:
[  195.859366] ib_srp: qp#167: post-recv:
[  196.396284] ib_srp: qp#167: __srp_get_tx_iu to ib_process_cq_direct
[  196.396316] ib_srp: qp#167: post-reg_mr: 0x2458e7
[  196.396325] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  196.396327] ib_srp: qp#167: post-send:
[  196.396338] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  196.396360] ib_srp: qp#167: __srp_get_tx_iu to ib_process_cq_direct
[  196.396383] ib_srp: qp#167: post-reg_mr: 0x245923
[  196.396391] ib_srp: qp#167: post-send:
[  196.396455] ib_srp: qp#167: __srp_get_tx_iu to ib_process_cq_direct
[  196.396478] ib_srp: qp#167: post-reg_mr: 0x245a7d
[  196.396484] ib_srp: qp#167: post-send:
[  196.396590] ib_srp: qp#167: __srp_get_tx_iu to ib_process_cq_direct
[  196.396615] ib_srp: qp#167: post-reg_mr: 0x245b46
[  196.396621] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  196.396624] ib_srp: qp#167: post-send:
[  196.396629] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  196.396645] enp6s0_rxe: qp#167 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  196.396661] enp6s0_rxe: qp#167 rxe_cq_post: cq#177 opcode: 0, status: 0, len: 64
[  196.396662] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  196.396670] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  196.396694] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  196.396702] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_ONLY
[  196.396709] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  196.396733] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  196.396738] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 opcode: 128, status: 0, len: 36
[  196.396746] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 notified
[  196.396760] enp6s0_rxe: qp#167 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  196.396770] enp6s0_rxe: qp#167 rxe_cq_post: cq#177 opcode: 0, status: 0, len: 64
[  196.396783] ib_srp: qp#167: recv-done: opcode: 128 status: 0: len: 36
[  196.396796] ib_srp: qp#167: post-inv_rkey: 0x2458e7
[  196.396798] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  196.396798] enp6s0_rxe: qp#167 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  196.396804] ib_srp: qp#167: post-recv:
[  196.396813] ib_srp: qp#167: __srp_get_tx_iu to ib_process_cq_direct
[  196.396815] enp6s0_rxe: qp#167 rxe_cq_post: cq#177 opcode: 0, status: 0, len: 64
[  196.396845] ib_srp: qp#167: send-done: opcode: 0 status: 0: len: 64
[  196.396845] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  196.396850] ib_srp: qp#167: send-done: opcode: 0 status: 0: len: 64
[  196.396855] ib_srp: qp#167: send-done: opcode: 0 status: 0: len: 64
[  196.396860] enp6s0_rxe: qp#167 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  196.396867] enp6s0_rxe: qp#167 rxe_cq_post: cq#177 opcode: 0, status: 0, len: 64
[  196.396886] ib_srp: qp#167: post-reg_mr: 0x2458e8
[  196.396892] ib_srp: qp#167: post-send:
[  196.396898] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  196.396910] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_REG_MR, length: 0, resid: 0
[  196.396917] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_SEND, length: 64, resid: 64
[  196.397354] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_FIRST
[  196.397373] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_LAST
[  196.397418] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  196.397429] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 opcode: 128, status: 0, len: 36
[  196.397440] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 notified
[  196.397490] ib_srp: qp#167: recv-done: opcode: 128 status: 0: len: 36
[  196.397513] ib_srp: qp#167: post-inv_rkey: 0x245923
[  196.397534] ib_srp: qp#167: post-recv:
[  196.397576] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_FIRST
[  196.397597] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397615] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397633] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397653] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397668] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_LAST
[  196.397677] enp6s0_rxe: qp#167 rxe_completer: pkt: opcode = IB_OPCODE_RC_ACKNOWLEDGE
[  196.397685] enp6s0_rxe: qp#167 rxe_cq_post: cq#177 opcode: 0, status: 0, len: 64
[  196.397705] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  196.397714] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 opcode: 128, status: 0, len: 36
[  196.397720] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 notified
[  196.397754] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_FIRST
[  196.397771] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397789] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397806] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397827] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397845] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397864] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397883] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397903] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397921] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397925] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  196.397938] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397957] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397974] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.397991] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398029] ib_srp: qp#167: recv-done: opcode: 128 status: 0: len: 36
[  196.398031] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_LAST
[  196.398052] ib_srp: qp#167: post-inv_rkey: 0x245a7d
[  196.398077] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  196.398079] ib_srp: qp#167: post-recv:
[  196.398089] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 opcode: 128, status: 0, len: 36
[  196.398106] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  196.398113] ib_srp: qp#167: recv-done: opcode: 128 status: 0: len: 36
[  196.398121] ib_srp: qp#167: post-inv_rkey: 0x245b46
[  196.398122] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  196.398127] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_FIRST
[  196.398145] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398155] ib_srp: qp#167: post-recv:
[  196.398164] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398185] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398205] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398228] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398245] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398261] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398296] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398334] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398355] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398390] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398411] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398452] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398470] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398505] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398520] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398577] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398594] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398612] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398630] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398676] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398696] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398737] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398756] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398798] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398816] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398853] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398889] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398909] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_MIDDLE
[  196.398947] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_RDMA_WRITE_LAST
[  196.398994] enp6s0_rxe: qp#167 rxe_responder: pkt: opcode = IB_OPCODE_RC_SEND_ONLY
[  196.399001] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 opcode: 128, status: 0, len: 36
[  196.399014] enp6s0_rxe: qp#167 rxe_cq_post: cq#176 notified
[  196.399087] ib_srp: qp#167: recv-done: opcode: 128 status: 0: len: 36
[  196.399107] ib_srp: qp#167: post-inv_rkey: 0x2458e8
[  196.399114] enp6s0_rxe: qp#167 rxe_requester: wqe: IB_WR_LOCAL_INV, length: 0, resid: 0
[  196.399139] ib_srp: qp#167: post-recv:

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:09                                                   ` Bob Pearson
@ 2023-10-17 17:13                                                     ` Bart Van Assche
  2023-10-17 17:15                                                       ` Bob Pearson
  2023-10-17 17:19                                                       ` Bob Pearson
  2023-10-17 17:58                                                     ` Jason Gunthorpe
  2023-10-18  8:16                                                     ` Zhu Yanjun
  2 siblings, 2 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-10-17 17:13 UTC (permalink / raw)
  To: Bob Pearson, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi


On 10/17/23 10:09, Bob Pearson wrote:
> I don't yet understand the logic of the srp driver to fix this but
> the problem is not in the rxe driver as far as I can tell.
Is there any information available that supports this conclusion? I
think the KASAN output that I shared shows that there is an issue in
the RXE driver.

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:13                                                     ` Bart Van Assche
@ 2023-10-17 17:15                                                       ` Bob Pearson
  2023-10-17 17:19                                                       ` Bob Pearson
  1 sibling, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 17:15 UTC (permalink / raw)
  To: Bart Van Assche, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On 10/17/23 12:13, Bart Van Assche wrote:
> 
> On 10/17/23 10:09, Bob Pearson wrote:
>> I don't yet understand the logic of the srp driver to fix this but
>> the problem is not in the rxe driver as far as I can tell.
> Is there any information available that supports this conclusion? I
> think the KASAN output that I shared shows that there is an issue in
> the RXE driver.
> 
> Thanks,
> 
> Bart.
> 

Bart,

I have seen hundreds of hangs. I have never seen a KASAN warning, and KASAN is enabled in my kernel.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:13                                                     ` Bart Van Assche
  2023-10-17 17:15                                                       ` Bob Pearson
@ 2023-10-17 17:19                                                       ` Bob Pearson
  2023-10-17 17:34                                                         ` Bart Van Assche
  1 sibling, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 17:19 UTC (permalink / raw)
  To: Bart Van Assche, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On 10/17/23 12:13, Bart Van Assche wrote:
> 
> On 10/17/23 10:09, Bob Pearson wrote:
>> I don't yet understand the logic of the srp driver to fix this but
>> the problem is not in the rxe driver as far as I can tell.
> Is there any information available that supports this conclusion? I
> think the KASAN output that I shared shows that there is an issue in
> the RXE driver.
> 
> Thanks,
> 
> Bart.
> 

I should have mentioned that the last set of tests in srp/002 issues much
longer writes than the earlier ones, which require far more processing and
thus more time. My belief is that the completion logic in srp is faulty but
works when the underlying transport is fast, and not when it is slow.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:19                                                       ` Bob Pearson
@ 2023-10-17 17:34                                                         ` Bart Van Assche
  0 siblings, 0 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-10-17 17:34 UTC (permalink / raw)
  To: Bob Pearson, Daisuke Matsuda (Fujitsu), 'Rain River'
  Cc: Zhu Yanjun, Jason Gunthorpe, leon, Shinichiro Kawasaki,
	RDMA mailing list, linux-scsi

On 10/17/23 10:19, Bob Pearson wrote:
> I should have mentioned that the last set of tests in srp/002 issues much
> longer writes than the earlier ones, which require a lot more processing and
> thus time. My belief is that the completion logic in srp is faulty, but works
> if the underlying transport is fast and fails if it is slow.

There are no known issues in the SRP driver. If there would be any
issues in that driver, I think these would also show up in tests with
the siw (Soft-iWARP) driver.

Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:09                                                   ` Bob Pearson
  2023-10-17 17:13                                                     ` Bart Van Assche
@ 2023-10-17 17:58                                                     ` Jason Gunthorpe
  2023-10-17 18:44                                                       ` Bob Pearson
  2023-10-17 19:18                                                       ` Bart Van Assche
  2023-10-18  8:16                                                     ` Zhu Yanjun
  2 siblings, 2 replies; 87+ messages in thread
From: Jason Gunthorpe @ 2023-10-17 17:58 UTC (permalink / raw)
  To: Bob Pearson
  Cc: Daisuke Matsuda (Fujitsu), 'Bart Van Assche',
	'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On Tue, Oct 17, 2023 at 12:09:31PM -0500, Bob Pearson wrote:

 
> For qp#167 the call to srp_post_send() is followed by the rxe driver
> processing the send operation and generating a work completion which
> is posted to the send cq but there is never a following call to
> __srp_get_rx_iu() so the cqe is not received by srp and failure.

? I don't see this function in the kernel?  __srp_get_tx_iu ?
 
> I don't yet understand the logic of the srp driver to fix this but
> the problem is not in the rxe driver as far as I can tell.

It looks to me like __srp_get_tx_iu() is following the design pattern
where the send queue is only polled when it needs to allocate a new
send buffer - ie the send buffers are pre-allocated and cycle through
the queue.

So, it is not surprising this isn't being called if it is hung - the
hang is probably something that is preventing it from even wanting to
send, which is probably a receive side issue.

Following back up from that point to isolate the missing resource that
should trigger the send may bring some more clarity.

Alternatively if __srp_get_tx_iu() is failing then perhaps you've run
into an issue where it hit something rare and recovery does not work.

eg this kind of design pattern carries a subtle assumption that the rx
and send CQ are ordered together. Getting a rx CQ before a matching tx
CQ can trigger the unusual scenario where the send side runs out of
resources.

Jason

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:58                                                     ` Jason Gunthorpe
@ 2023-10-17 18:44                                                       ` Bob Pearson
  2023-10-17 18:51                                                         ` Jason Gunthorpe
  2023-10-17 19:18                                                       ` Bart Van Assche
  1 sibling, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 18:44 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Bart Van Assche',
	'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 12:58, Jason Gunthorpe wrote:
> On Tue, Oct 17, 2023 at 12:09:31PM -0500, Bob Pearson wrote:
> 
>  
>> For qp#167 the call to srp_post_send() is followed by the rxe driver
>> processing the send operation and generating a work completion which
>> is posted to the send cq but there is never a following call to
>> __srp_get_rx_iu() so the cqe is not received by srp and failure.
> 
> ? I don't see this function in the kernel?  __srp_get_tx_iu ?
>  
>> I don't yet understand the logic of the srp driver to fix this but
>> the problem is not in the rxe driver as far as I can tell.
> 
> It looks to me like __srp_get_tx_iu() is following the design pattern
> where the send queue is only polled when it needs to allocate a new
> send buffer - ie the send buffers are pre-allocated and cycle through
> the queue.
> 
> So, it is not surprising this isn't being called if it is hung - the
> hang is probably something that is preventing it from even wanting to
> send, which is probably a receive side issue.
> 
> Following back up from that point to isolate the missing resource that
> should trigger the send may bring some more clarity.
> 
> Alternatively if __srp_get_tx_iu() is failing then perhaps you've run
> into an issue where it hit something rare and recovery does not work.
> 
> eg this kind of design pattern carries a subtle assumption that the rx
> and send CQ are ordered together. Getting a rx CQ before a matching tx
> CQ can trigger the unusual scenario where the send side runs out of
> resources.
> 
> Jason

In all the traces I have looked at the hang only occurs once the final
send side completions are not received. This happens when the srp
driver doesn't poll (i.e. call ib_process_cq_direct). The rest is
my conjecture. Since there are several (e.g. qp#167 through qp#211 (odd))
qp's with missing completions there are 23 iu's tied up when srp hangs.
Your suggestion makes sense as to why the hang occurs. When the test
finishes the qp's are destroyed and the driver calls ib_process_cq_direct
again which cleans up the resources.

The problem is that there isn't any obvious way to find a thread related
to the missing cqe to poll for them. I think the best way to fix this is
to convert the send side cq handling to interrupt driven (as is the case
with the srpt driver.) The provider drivers have to run in any case to
convert cqe's to wc's so there isn't much penalty to call the cq
completion handler since there is already software running and then you
will get reliable delivery of completions.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 18:44                                                       ` Bob Pearson
@ 2023-10-17 18:51                                                         ` Jason Gunthorpe
  2023-10-17 19:55                                                           ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Jason Gunthorpe @ 2023-10-17 18:51 UTC (permalink / raw)
  To: Bob Pearson
  Cc: Daisuke Matsuda (Fujitsu), 'Bart Van Assche',
	'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On Tue, Oct 17, 2023 at 01:44:58PM -0500, Bob Pearson wrote:
> On 10/17/23 12:58, Jason Gunthorpe wrote:
> > On Tue, Oct 17, 2023 at 12:09:31PM -0500, Bob Pearson wrote:
> > 
> >  
> >> For qp#167 the call to srp_post_send() is followed by the rxe driver
> >> processing the send operation and generating a work completion which
> >> is posted to the send cq but there is never a following call to
> >> __srp_get_rx_iu() so the cqe is not received by srp and failure.
> > 
> > ? I don't see this function in the kernel?  __srp_get_tx_iu ?
> >  
> >> I don't yet understand the logic of the srp driver to fix this but
> >> the problem is not in the rxe driver as far as I can tell.
> > 
> > It looks to me like __srp_get_tx_iu() is following the design pattern
> > where the send queue is only polled when it needs to allocate a new
> > send buffer - ie the send buffers are pre-allocated and cycle through
> > the queue.
> > 
> > So, it is not surprising this isn't being called if it is hung - the
> > hang is probably something that is preventing it from even wanting to
> > send, which is probably a receive side issue.
> > 
> > Following back up from that point to isolate the missing resource that
> > should trigger the send may bring some more clarity.
> > 
> > Alternatively if __srp_get_tx_iu() is failing then perhaps you've run
> > into an issue where it hit something rare and recovery does not work.
> > 
> > eg this kind of design pattern carries a subtle assumption that the rx
> > and send CQ are ordered together. Getting a rx CQ before a matching tx
> > CQ can trigger the unusual scenario where the send side runs out of
> > resources.
> > 
> > Jason
> 
> In all the traces I have looked at the hang only occurs once the final
> send side completions are not received. This happens when the srp
> driver doesn't poll (i.e. call ib_process_cq_direct). The rest is
> my conjecture. Since there are several (e.g. qp#167 through qp#211 (odd))
> qp's with missing completions there are 23 iu's tied up when srp hangs.
> Your suggestion makes sense as to why the hang occurs. When the test
> finishes the qp's are destroyed and the driver calls ib_process_cq_direct
> again which cleans up the resources.
> 
> The problem is that there isn't any obvious way to find a thread related
> to the missing cqe to poll for them. I think the best way to fix this is
> to convert the send side cq handling to interrupt driven (as is the case
> with the srpt driver.) The provider drivers have to run in any case to
> convert cqe's to wc's so there isn't much penalty to call the cq
> completion handler since there is already software running and then you
> will get reliable delivery of completions.

Can you add tracing to show that SRP is running out of SQ resources,
ie __srp_get_tx_iu() fails and that is a precondition for the hang?

I am fully willing to believe that is never tested.

Otherwise if srp thinks it has SQ resources then the SQ is probably
not the cause of the hang.

Jason

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:58                                                     ` Jason Gunthorpe
  2023-10-17 18:44                                                       ` Bob Pearson
@ 2023-10-17 19:18                                                       ` Bart Van Assche
  1 sibling, 0 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-10-17 19:18 UTC (permalink / raw)
  To: Jason Gunthorpe, Bob Pearson
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 10:58, Jason Gunthorpe wrote:
> eg this kind of design pattern carries a subtle assumption that the rx
> and send CQ are ordered together. Getting a rx CQ before a matching tx
> CQ can trigger the unusual scenario where the send side runs out of
> resources.

If an rx CQ is received before the matching tx CQ by srp_queuecommand(),
then srp_queuecommand() will return SCSI_MLQUEUE_HOST_BUSY and the SCSI
core will retry srp_queuecommand() after a small delay. This is a
common approach in Linux kernel SCSI drivers.

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 18:51                                                         ` Jason Gunthorpe
@ 2023-10-17 19:55                                                           ` Bob Pearson
  2023-10-17 20:06                                                             ` Bart Van Assche
  0 siblings, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 19:55 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Bart Van Assche',
	'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 13:51, Jason Gunthorpe wrote:
> On Tue, Oct 17, 2023 at 01:44:58PM -0500, Bob Pearson wrote:
>> On 10/17/23 12:58, Jason Gunthorpe wrote:
>>> On Tue, Oct 17, 2023 at 12:09:31PM -0500, Bob Pearson wrote:
>>>
>>>  
>>>> For qp#167 the call to srp_post_send() is followed by the rxe driver
>>>> processing the send operation and generating a work completion which
>>>> is posted to the send cq but there is never a following call to
>>>> __srp_get_rx_iu() so the cqe is not received by srp and failure.
>>>
>>> ? I don't see this function in the kernel?  __srp_get_tx_iu ?
>>>  
>>>> I don't yet understand the logic of the srp driver to fix this but
>>>> the problem is not in the rxe driver as far as I can tell.
>>>
>>> It looks to me like __srp_get_tx_iu() is following the design pattern
>>> where the send queue is only polled when it needs to allocate a new
>>> send buffer - ie the send buffers are pre-allocated and cycle through
>>> the queue.
>>>
>>> So, it is not surprising this isn't being called if it is hung - the
>>> hang is probably something that is preventing it from even wanting to
>>> send, which is probably a receive side issue.
>>>
>>> Following back up from that point to isolate the missing resource that
>>> should trigger the send may bring some more clarity.
>>>
>>> Alternatively if __srp_get_tx_iu() is failing then perhaps you've run
>>> into an issue where it hit something rare and recovery does not work.
>>>
>>> eg this kind of design pattern carries a subtle assumption that the rx
>>> and send CQ are ordered together. Getting a rx CQ before a matching tx
>>> CQ can trigger the unusual scenario where the send side runs out of
>>> resources.
>>>
>>> Jason
>>
>> In all the traces I have looked at the hang only occurs once the final
>> send side completions are not received. This happens when the srp
>> driver doesn't poll (i.e. call ib_process_cq_direct). The rest is
>> my conjecture. Since there are several (e.g. qp#167 through qp#211 (odd))
>> qp's with missing completions there are 23 iu's tied up when srp hangs.
>> Your suggestion makes sense as to why the hang occurs. When the test
>> finishes the qp's are destroyed and the driver calls ib_process_cq_direct
>> again which cleans up the resources.
>>
>> The problem is that there isn't any obvious way to find a thread related
>> to the missing cqe to poll for them. I think the best way to fix this is
>> to convert the send side cq handling to interrupt driven (as is the case
>> with the srpt driver.) The provider drivers have to run in any case to
>> convert cqe's to wc's so there isn't much penalty to call the cq
>> completion handler since there is already software running and then you
>> will get reliable delivery of completions.
> 
> Can you add tracing to show that SRP is running out of SQ resources,
> ie __srp_get_tx_iu() fails and that is a precondition for the hang?
> 
> I am fully willing to believe that is never tested.
> 
> Otherwise if srp thinks it has SQ resources then the SQ is probably
> not the cause of the hang.
> 
> Jason

Well.... the extra tracing did *not* show srp running out of iu's.
So I converted cq handling to IB_POLL_SOFTIRQ from IB_POLL_DIRECT.
This required adding a spinlock around list_add(&iu->list, ...) in 
srp_send_done(). The test now runs with all the completions handled
correctly. But, it still hangs. So a red herring.

The hunt continues.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 19:55                                                           ` Bob Pearson
@ 2023-10-17 20:06                                                             ` Bart Van Assche
  2023-10-17 20:13                                                               ` Bob Pearson
  2023-10-17 21:14                                                               ` Bob Pearson
  0 siblings, 2 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-10-17 20:06 UTC (permalink / raw)
  To: Bob Pearson, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 12:55, Bob Pearson wrote:
> Well.... the extra tracing did *not* show srp running out of iu's.
> So I converted cq handling to IB_POLL_SOFTIRQ from IB_POLL_DIRECT.
> This required adding a spinlock around list_add(&iu->list, ...) in
> srp_send_done(). The test now runs with all the completions handled
> correctly. But, it still hangs. So a red herring.

iu->list manipulations are protected by ch->lock. See also the
lockdep_assert_held(&ch->lock) statements in the code that does
manipulate this list and that does not grab ch->lock directly.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 20:06                                                             ` Bart Van Assche
@ 2023-10-17 20:13                                                               ` Bob Pearson
  2023-10-17 21:14                                                               ` Bob Pearson
  1 sibling, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 20:13 UTC (permalink / raw)
  To: Bart Van Assche, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 15:06, Bart Van Assche wrote:
> On 10/17/23 12:55, Bob Pearson wrote:
>> Well.... the extra tracing did *not* show srp running out of iu's.
>> So I converted cq handling to IB_POLL_SOFTIRQ from IB_POLL_DIRECT.
>> This required adding a spinlock around list_add(&iu->list, ...) in
>> srp_send_done(). The test now runs with all the completions handled
>> correctly. But, it still hangs. So a red herring.
> 
> iu->list manipulations are protected by ch->lock. See also the
> lockdep_assert_held(&ch->lock) statements in the code that does
> manipulate this list and that does not grab ch->lock directly.
> 
> Thanks,
> 
> Bart.

Thanks. Saw that. I just added ch->lock'ing around the list_add. It
works if you don't call ib_process_cq_direct (which was inside
the lock) and use poll_softirq instead, which runs on its own thread.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 20:06                                                             ` Bart Van Assche
  2023-10-17 20:13                                                               ` Bob Pearson
@ 2023-10-17 21:14                                                               ` Bob Pearson
  2023-10-17 21:18                                                                 ` Bart Van Assche
  1 sibling, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 21:14 UTC (permalink / raw)
  To: Bart Van Assche, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 15:06, Bart Van Assche wrote:
> On 10/17/23 12:55, Bob Pearson wrote:
>> Well.... the extra tracing did *not* show srp running out of iu's.
>> So I converted cq handling to IB_POLL_SOFTIRQ from IB_POLL_DIRECT.
>> This required adding a spinlock around list_add(&iu->list, ...) in
>> srp_send_done(). The test now runs with all the completions handled
>> correctly. But, it still hangs. So a red herring.
> 
> iu->list manipulations are protected by ch->lock. See also the
> lockdep_assert_held(&ch->lock) statements in the code that does
> manipulate this list and that does not grab ch->lock directly.
> 
> Thanks,
> 
> Bart.

One more clue. When the test hangs, after 120 seconds there is a set
of hung task messages in the logs like:

[  408.844422] ib_srp:srp_parse_in: ib_srp: [fe80::b62e:99ff:fef9:fa2e] -> [fe80::b62e:99ff:fef9:fa2e]:0/11010381%0
[  408.844439] ib_srp:srp_parse_in: ib_srp: [fe80::b62e:99ff:fef9:fa2e]:5555 -> [fe80::b62e:99ff:fef9:fa2e]:5555/11010381%0
[  408.844474] ib_srp:srp_parse_in: ib_srp: [fe80::21bb:9ba3:7562:5fb8%2] -> [fe80::21bb:9ba3:7562:5fb8]:0/11010381%2
[  408.844491] ib_srp:srp_parse_in: ib_srp: [fe80::21bb:9ba3:7562:5fb8%2]:5555 -> [fe80::21bb:9ba3:7562:5fb8]:5555/11010381%2
[  408.844502] scsi host13: ib_srp: Already connected to target port with id_ext=b62e99fffef9fa2e;ioc_guid=b62e99fffef9fa2e;dest=fe80:0000:0000:0000:21bb:9ba3:7562:5fb8
[  605.106839] INFO: task kworker/1:0:25 blocked for more than 120 seconds.
[  605.106857]       Tainted: G    B      OE      6.6.0-rc3+ #10
[  605.106866] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  605.106872] task:kworker/1:0     state:D stack:0     pid:25    ppid:2      flags:0x00004000
[  605.106887] Workqueue: dio/dm-5 iomap_dio_complete_work
[  605.106904] Call Trace:
[  605.106909]  <TASK>
[  605.106917]  ? __schedule+0x996/0x2c80
[  605.106929]  __schedule+0x9f6/0x2c80
[  605.106945]  ? lock_release+0xc1/0x6f0
[  605.106955]  ? rcu_is_watching+0x23/0x50
[  605.106970]  ? io_schedule_timeout+0xc0/0xc0
[  605.106981]  ? lock_contended+0x740/0x740
[  605.106989]  ? do_raw_spin_lock+0x1c0/0x1c0
[  605.106999]  ? lock_contended+0x740/0x740
[  605.107011]  ? _raw_spin_unlock_irq+0x27/0x60
[  605.107023]  ? trace_hardirqs_on+0x22/0x100
[  605.107037]  ? _raw_spin_unlock_irq+0x27/0x60
[  605.107052]  schedule+0x96/0x150
[  605.107063]  bit_wait+0x1c/0xa0
[  605.107074]  __wait_on_bit+0x42/0x110
[  605.107084]  ? bit_wait_io+0xa0/0xa0
[  605.107099]  __inode_wait_for_writeback+0x11b/0x190
[  605.107112]  ? inode_prepare_wbs_switch+0x160/0x160
[  605.107127]  ? swake_up_one+0xb0/0xb0
[  605.107147]  writeback_single_inode+0xb8/0x250
[  605.107159]  sync_inode_metadata+0xa2/0xe0
[  605.107168]  ? write_inode_now+0x160/0x160
[  605.107186]  ? file_write_and_wait_range+0x54/0xe0
[  605.107199]  generic_buffers_fsync_noflush+0x135/0x160
[  605.107213]  ext4_sync_file+0x3b3/0x620
[  605.107227]  vfs_fsync_range+0x69/0x110
[  605.107237]  ? ext4_getfsmap+0x520/0x520
[  605.107249]  iomap_dio_complete+0x35c/0x3a0
[  605.107259]  ? __schedule+0x9fe/0x2c80
[  605.107272]  ? aio_fsync_work+0x190/0x190
[  605.107284]  iomap_dio_complete_work+0x36/0x50
[  605.107297]  process_one_work+0x46c/0x950


All the active threads are just the same and are all waiting for
an io to complete from scsi. No threads are active in rxe, srp(t)
or scsi. All activity appears to be dead.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 21:14                                                               ` Bob Pearson
@ 2023-10-17 21:18                                                                 ` Bart Van Assche
  2023-10-17 21:23                                                                   ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-10-17 21:18 UTC (permalink / raw)
  To: Bob Pearson, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi


On 10/17/23 14:14, Bob Pearson wrote:
> All the active threads are just the same and are all waiting for
> an io to complete from scsi. No threads are active in rxe, srp(t)
> or scsi. All activity appears to be dead.

Is this really a clue? I have seen such backtraces many times. All
such a backtrace tells us is that something got stuck in a layer
under the filesystem. It does not tell us which layer caused
command processing to get stuck.

Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 21:18                                                                 ` Bart Van Assche
@ 2023-10-17 21:23                                                                   ` Bob Pearson
  2023-10-17 21:30                                                                     ` Bart Van Assche
  0 siblings, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 21:23 UTC (permalink / raw)
  To: Bart Van Assche, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 16:18, Bart Van Assche wrote:
> 
> On 10/17/23 14:14, Bob Pearson wrote:
>> All the active threads are just the same and are all waiting for
>> an io to complete from scsi. No threads are active in rxe, srp(t)
>> or scsi. All activity appears to be dead.
> 
> Is this really a clue? I have seen such backtraces many times. All
> such a backtrace tells us is that something got stuck in a layer
> under the filesystem. It does not tell us which layer caused
> command processing to get stuck.
> 
> Bart.
> 

Not really, but stuck could mean it died (no threads active) or it is
in a loop or waiting to be scheduled. It looks dead. The lower layers are
waiting to get kicked into action by some event but it hasn't happened.
This is conjecture on my part though.

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 21:23                                                                   ` Bob Pearson
@ 2023-10-17 21:30                                                                     ` Bart Van Assche
  2023-10-17 21:39                                                                       ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-10-17 21:30 UTC (permalink / raw)
  To: Bob Pearson, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi


On 10/17/23 14:23, Bob Pearson wrote:
> Not really, but stuck could mean it died (no threads active) or it is
> in a loop or waiting to be scheduled. It looks dead. The lower layers are
> waiting to get kicked into action by some event but it hasn't happened.
> This is conjecture on my part though.

This call stack means that I/O has been submitted by the block layer and
that it did not get completed. Which I/O request got stuck can be
verified by e.g. running the list-pending-block-requests script that I
posted some time ago. See also
https://lore.kernel.org/all/55c0fe61-a091-b351-11b4-fa7f668e49d7@acm.org/.
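The linked script inspects blk-mq debugfs state. A rough sketch of the same idea follows; note the assumptions: it needs root, a mounted debugfs, and the per-hctx `busy` and per-cpu `rq_list` file names, which vary across kernel versions.

```shell
#!/bin/sh
# List requests that blk-mq still considers in flight, per hardware queue.
# Assumes debugfs is mounted at /sys/kernel/debug, e.g.:
#   mount -t debugfs none /sys/kernel/debug
dbg=/sys/kernel/debug/block
if [ ! -d "$dbg" ]; then
	echo "blk-mq debugfs not available at $dbg"
	exit 0
fi
# "busy" shows started-but-incomplete requests; rq_list shows queued ones.
for f in "$dbg"/*/hctx*/busy "$dbg"/*/hctx*/cpu*/rq_list; do
	[ -e "$f" ] || continue
	grep -aH . "$f" 2>/dev/null
done
exit 0
```

On a hung srp/002 run, any output here identifies the device and hardware queue holding the stuck request, which narrows the search but, as noted below, does not by itself name the driver at fault.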

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 21:30                                                                     ` Bart Van Assche
@ 2023-10-17 21:39                                                                       ` Bob Pearson
  2023-10-17 22:42                                                                         ` Bart Van Assche
  0 siblings, 1 reply; 87+ messages in thread
From: Bob Pearson @ 2023-10-17 21:39 UTC (permalink / raw)
  To: Bart Van Assche, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 16:30, Bart Van Assche wrote:
> 
> On 10/17/23 14:23, Bob Pearson wrote:
>> Not really, but stuck could mean it died (no threads active) or it is
>> in a loop or waiting to be scheduled. It looks dead. The lower layers are
>> waiting to get kicked into action by some event but it hasn't happened.
>> This is conjecture on my part though.
> 
> This call stack means that I/O has been submitted by the block layer and
> that it did not get completed. Which I/O request got stuck can be
> verified by e.g. running the list-pending-block-requests script that I
> posted some time ago. See also
> https://lore.kernel.org/all/55c0fe61-a091-b351-11b4-fa7f668e49d7@acm.org/.
> 
> Thanks,
> 
> Bart.

Thanks. Would this run on the side of a hung blktests or would I need to
set up an srp-srpt file system?

Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 21:39                                                                       ` Bob Pearson
@ 2023-10-17 22:42                                                                         ` Bart Van Assche
  2023-10-18 18:29                                                                           ` Bob Pearson
  0 siblings, 1 reply; 87+ messages in thread
From: Bart Van Assche @ 2023-10-17 22:42 UTC (permalink / raw)
  To: Bob Pearson, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 14:39, Bob Pearson wrote:
> On 10/17/23 16:30, Bart Van Assche wrote:
>>
>> On 10/17/23 14:23, Bob Pearson wrote:
>>> Not really, but stuck could mean it died (no threads active) or it is
>>> in a loop or waiting to be scheduled. It looks dead. The lower layers are
>>> waiting to get kicked into action by some event but it hasn't happened.
>>> This is conjecture on my part though.
>>
>> This call stack means that I/O has been submitted by the block layer and
>> that it did not get completed. Which I/O request got stuck can be
>> verified by e.g. running the list-pending-block-requests script that I
>> posted some time ago. See also
>> https://lore.kernel.org/all/55c0fe61-a091-b351-11b4-fa7f668e49d7@acm.org/.
> 
> Thanks. Would this run on the side of a hung blktests or would I need to
> set up an srp-srpt file system?

I propose to analyze the source code of the component(s) that you
suspect of causing the hang. The output of the list-pending-block-
requests script is not sufficient to reveal which of the following
drivers is causing the hang: ib_srp, rdma_rxe, ib_srpt, ...

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 17:09                                                   ` Bob Pearson
  2023-10-17 17:13                                                     ` Bart Van Assche
  2023-10-17 17:58                                                     ` Jason Gunthorpe
@ 2023-10-18  8:16                                                     ` Zhu Yanjun
  2 siblings, 0 replies; 87+ messages in thread
From: Zhu Yanjun @ 2023-10-18  8:16 UTC (permalink / raw)
  To: Bob Pearson, Daisuke Matsuda (Fujitsu), 'Bart Van Assche',
	'Rain River'
  Cc: Jason Gunthorpe, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi


On 2023/10/18 1:09, Bob Pearson wrote:
> On 9/25/23 20:17, Daisuke Matsuda (Fujitsu) wrote:
>> On Tue, Sep 26, 2023 12:01 AM Bart Van Assche:
>>> On 9/24/23 21:47, Daisuke Matsuda (Fujitsu) wrote:
>>>> As Bob wrote above, nobody has found any logical failure in rxe
>>>> driver.
>>> That's wrong. In case you would not yet have noticed my latest email in
>>> this thread, please take a look at
>>> https://lore.kernel.org/linux-rdma/e8b76fae-780a-470e-8ec4-c6b650793d10@leemhuis.info/T/#m0fd8ea8a4cbc27b37
>>> b042ae4f8e9b024f1871a73.
>>> I think the report in that email is a 100% proof that there is a
>>> use-after-free issue in the rdma_rxe driver. Use-after-free issues have
>>> security implications and also can cause data corruption. I propose to
>>> revert the commit that introduced the rdma_rxe use-after-free unless
>>> someone comes up with a fix for the rdma_rxe driver.
>>>
>>> Bart.
>> Thank you for the clarification. I see your intention.
>> I hope the hang issue will be resolved by addressing this.
>>
>> Thanks,
>> Daisuke
>>
> I have made some progress in understanding the cause of the srp/002 etc. hang.
>
> The two attached files are traces of activity for two qp's qp#151 and qp#167. In my runs of srp/002
> All the qp's pass before 167 and all fail after 167 which is the first to fail.
>
> It turns out that all the passing qp's call srp_post_send() some number of times and also call
> srp_send_done() the same number of times. Starting at qp#167 the last call to srp_send_done() does
> not take place leaving the srp driver waiting for the final completion and causing the hang I believe.

Thanks, Bob

I will delve into your findings and the source code to find the root cause.

BTW, what Linux distribution are you using to find this? Ubuntu, Fedora,
or Debian?

From the above, this problem is sometimes difficult to reproduce on
Ubuntu, but it can be reproduced on both Ubuntu and Debian.

So can you let me know which Linux distribution you are using?

Thanks

Zhu Yanjun

>
> There are four cq's involved in each pair of qp's in the srp test. Two in ib_srp and two in ib_srpt
> for the two qp's. Three of them execute completion processing in a soft irq context so the code in
> core/cq.c gathers the completions and calls back to the srp drivers. The send side cq in srp uses
> cq_direct which requires srp to call ib_process_cq_direct() in order to collect the completions. This
> happens in __srp_get_tx_iu() which is called in several places in the srp driver. But only as a side effect
> since the purpose of this routine is to get an iu to start a new command.
>
> In the attached files for qp#151 the final call to srp_post_send is followed by the rxe requester and
> completer work queues processing the send packet and the ack, before a final call to __srp_get_tx_iu()
> which gathers the final send side completion, and the qp succeeds.
>
> For qp#167 the call to srp_post_send() is followed by the rxe driver processing the send operation and
> generating a work completion which is posted to the send cq, but there is never a following call to
> __srp_get_tx_iu(), so the cqe is not received by srp and the qp fails.
>
> I don't yet understand the logic of the srp driver to fix this but the problem is not in the rxe driver
> as far as I can tell.
>
> Bob

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-17 22:42                                                                         ` Bart Van Assche
@ 2023-10-18 18:29                                                                           ` Bob Pearson
  2023-10-18 19:17                                                                             ` Jason Gunthorpe
  2023-10-18 19:38                                                                             ` Bart Van Assche
  0 siblings, 2 replies; 87+ messages in thread
From: Bob Pearson @ 2023-10-18 18:29 UTC (permalink / raw)
  To: Bart Van Assche, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/17/23 17:42, Bart Van Assche wrote:
> On 10/17/23 14:39, Bob Pearson wrote:
>> On 10/17/23 16:30, Bart Van Assche wrote:
>>>
>>> On 10/17/23 14:23, Bob Pearson wrote:
>>>> Not really, but stuck could mean it died (no threads active) or it is
>>>> in a loop or waiting to be scheduled. It looks dead. The lower layers are
>>>> waiting to get kicked into action by some event but it hasn't happened.
>>>> This is conjecture on my part though.
>>>
>>> This call stack means that I/O has been submitted by the block layer and
>>> that it did not get completed. Which I/O request got stuck can be
>>> verified by e.g. running the list-pending-block-requests script that I
>>> posted some time ago. See also
>>> https://lore.kernel.org/all/55c0fe61-a091-b351-11b4-fa7f668e49d7@acm.org/.
>>
>> Thanks. Would this run on the side of a hung blktests or would I need to
>> set up an srp-srpt file system?
> 
> I propose to analyze the source code of the component(s) that you
> suspect of causing the hang. The output of the list-pending-block-
> requests script is not sufficient to reveal which of the following
> drivers is causing the hang: ib_srp, rdma_rxe, ib_srpt, ...
> 
> Thanks,
> 
> Bart.
> 

Bart,

Another data point. I had seen (months ago) that both the rxe and siw drivers could cause blktests srp
hangs. More recently, when I configure my kernel to run lots of checks (lockdep, memory leaks, kasan, ubsan,
etc.), which definitely slows performance and adds delays, the percentage of srp/002 runs which hang on the rxe driver
has gone from roughly 10% to a solid 100%. This suggested retrying the siw driver on the debug kernel, since it
has the reputation of always running successfully. I now find that siw also hangs solidly on srp/002.
This is another hint that we are seeing a timing issue.

Bob 

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-18 18:29                                                                           ` Bob Pearson
@ 2023-10-18 19:17                                                                             ` Jason Gunthorpe
  2023-10-18 19:48                                                                               ` Bart Van Assche
  2023-10-18 19:38                                                                             ` Bart Van Assche
  1 sibling, 1 reply; 87+ messages in thread
From: Jason Gunthorpe @ 2023-10-18 19:17 UTC (permalink / raw)
  To: Bob Pearson
  Cc: Bart Van Assche, Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On Wed, Oct 18, 2023 at 01:29:16PM -0500, Bob Pearson wrote:
> On 10/17/23 17:42, Bart Van Assche wrote:
> > On 10/17/23 14:39, Bob Pearson wrote:
> >> On 10/17/23 16:30, Bart Van Assche wrote:
> >>>
> >>> On 10/17/23 14:23, Bob Pearson wrote:
> >>>> Not really, but stuck could mean it died (no threads active) or it is
> >>>> in a loop or waiting to be scheduled. It looks dead. The lower layers are
> >>>> waiting to get kicked into action by some event but it hasn't happened.
> >>>> This is conjecture on my part though.
> >>>
> >>> This call stack means that I/O has been submitted by the block layer and
> >>> that it did not get completed. Which I/O request got stuck can be
> >>> verified by e.g. running the list-pending-block-requests script that I
> >>> posted some time ago. See also
> >>> https://lore.kernel.org/all/55c0fe61-a091-b351-11b4-fa7f668e49d7@acm.org/.
> >>
> >> Thanks. Would this run on the side of a hung blktests or would I need to
> >> set up an srp-srpt file system?
> > 
> > I propose to analyze the source code of the component(s) that you
> > suspect of causing the hang. The output of the list-pending-block-
> > requests script is not sufficient to reveal which of the following
> > drivers is causing the hang: ib_srp, rdma_rxe, ib_srpt, ...
> > 
> > Thanks,
> > 
> > Bart.
> > 
> 
> Bart,
> 
> Another data point. I had seen (months ago) that both the rxe and
> siw drivers could cause blktests srp hangs. More recently when I
> configure my kernel to run lots of tests (lockdep, memory leaks,
> kasan, ubsan, etc.), which definitely slows performance and adds
> delays, the % of srp/002 runs which hang on the rxe driver has gone
> from 10%+- to a solid 100%. This suggested retrying the siw driver
> on the debug kernel since it has the reputation of always running
> successfully. I now find that siw also hangs solidly on srp/002.
> This is another hint that we are seeing a timing issue.

If siw hangs as well, I'm definitely comfortable continuing to debug and
leaving the work queues in-tree for now.

Jason

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-18 18:29                                                                           ` Bob Pearson
  2023-10-18 19:17                                                                             ` Jason Gunthorpe
@ 2023-10-18 19:38                                                                             ` Bart Van Assche
  1 sibling, 0 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-10-18 19:38 UTC (permalink / raw)
  To: Bob Pearson, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/18/23 11:29, Bob Pearson wrote:
> I now find that siw also hangs solidly on srp/002. This is another
> hint that we are seeing a timing issue.
I can't reproduce the srp/002 hang with the siw driver - neither with a 
production kernel nor with a debug kernel. Is anyone else able to 
reproduce the srp/002 hang with the siw driver?

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-18 19:17                                                                             ` Jason Gunthorpe
@ 2023-10-18 19:48                                                                               ` Bart Van Assche
  2023-10-18 20:03                                                                                 ` Bob Pearson
                                                                                                   ` (3 more replies)
  0 siblings, 4 replies; 87+ messages in thread
From: Bart Van Assche @ 2023-10-18 19:48 UTC (permalink / raw)
  To: Jason Gunthorpe, Bob Pearson
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi


On 10/18/23 12:17, Jason Gunthorpe wrote:
> If siw hangs as well, I'm definitely comfortable continuing to debug and
> leaving the work queues in-tree for now.

Regarding the KASAN complaint that I shared about one month ago, can 
that complaint have any other root cause than the patch "RDMA/rxe: Add
workqueue support for rxe tasks"? That report shows a use-after-free by
rxe code with a pointer to memory that was owned by the rxe driver and
that was freed by the rxe driver. That memory is an skbuff. The rxe
driver manages skbuffs. The SRP driver doesn't even know about these
skbuff objects. See also 
https://lore.kernel.org/linux-rdma/8ee2869b-3f51-4195-9883-015cd30b4241@acm.org/

Thanks,

Bart.


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-18 19:48                                                                               ` Bart Van Assche
@ 2023-10-18 20:03                                                                                 ` Bob Pearson
  2023-10-18 20:04                                                                                 ` Bob Pearson
                                                                                                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-10-18 20:03 UTC (permalink / raw)
  To: Bart Van Assche, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

On 10/18/23 14:48, Bart Van Assche wrote:
> 
> On 10/18/23 12:17, Jason Gunthorpe wrote:
>> If siw hangs as well, I'm definitely comfortable continuing to debug and
>> leaving the work queues in-tree for now.
> 
> Regarding the KASAN complaint that I shared about one month ago, can that complaint have any other root cause than the patch "RDMA/rxe: Add
> workqueue support for rxe tasks"? That report shows a use-after-free by
> rxe code with a pointer to memory that was owned by the rxe driver and
> that was freed by the rxe driver. That memory is an skbuff. The rxe
> driver manages skbuffs. The SRP driver doesn't even know about these
> skbuff objects. See also https://lore.kernel.org/linux-rdma/8ee2869b-3f51-4195-9883-015cd30b4241@acm.org/
> 
> Thanks,
> 
> Bart.
> 
Bart,

I agree with you that that is an rxe issue, but I haven't been able to reproduce it. However, I am able
to generate hangs without the KASAN bug, so it seems to me that they are unrelated. In addition to the
kernel debugging I have added tracing to ib_srp and ib_srpt, which may help by adding delays.

Bob


^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [bug report] blktests srp/002 hang
  2023-10-18 19:48                                                                               ` Bart Van Assche
  2023-10-18 20:03                                                                                 ` Bob Pearson
@ 2023-10-18 20:04                                                                                 ` Bob Pearson
  2023-10-18 20:14                                                                                 ` Bob Pearson
  2023-10-18 20:29                                                                                 ` Bob Pearson
  3 siblings, 0 replies; 87+ messages in thread
From: Bob Pearson @ 2023-10-18 20:04 UTC (permalink / raw)
  To: Bart Van Assche, Jason Gunthorpe
  Cc: Daisuke Matsuda (Fujitsu), 'Rain River',
	Zhu Yanjun, leon, Shinichiro Kawasaki, RDMA mailing list,
	linux-scsi

[-- Attachment #1: Type: text/plain, Size: 859 bytes --]

On 10/18/23 14:48, Bart Van Assche wrote:
> 
> On 10/18/23 12:17, Jason Gunthorpe wrote:
>> If siw hangs as well, I'm definitely comfortable continuing to debug and
>> leaving the work queues in-tree for now.
> 
> Regarding the KASAN complaint that I shared about one month ago, can that complaint have any other root cause than the patch "RDMA/rxe: Add
> workqueue support for rxe tasks"? That report shows a use-after-free by
> rxe code with a pointer to memory that was owned by the rxe driver and
> that was freed by the rxe driver. That memory is an skbuff. The rxe
> driver manages skbuffs. The SRP driver doesn't even know about these
> skbuff objects. See also https://lore.kernel.org/linux-rdma/8ee2869b-3f51-4195-9883-015cd30b4241@acm.org/
> 
> Thanks,
> 
> Bart.
> 

Here is the .config I am using, based on stock Ubuntu 23.04 plus make olddefconfig.

[-- Attachment #2: .config --]
[-- Type: text/plain, Size: 281233 bytes --]

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86 6.6.0-rc3 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc (Ubuntu 12.3.0-1ubuntu1~23.04) 12.3.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=120300
CONFIG_CLANG_VERSION=0
CONFIG_AS_IS_GNU=y
CONFIG_AS_VERSION=24000
CONFIG_LD_IS_BFD=y
CONFIG_LD_VERSION=24000
CONFIG_LLD_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
CONFIG_TOOLS_SUPPORT_RELR=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
CONFIG_PAHOLE_VERSION=125
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
# CONFIG_WERROR is not set
CONFIG_LOCALVERSION=""
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_HAVE_KERNEL_ZSTD=y
# CONFIG_KERNEL_GZIP is not set
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_KERNEL_ZSTD=y
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_WATCH_QUEUE=y
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_USELIB=y
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_GENERIC_IRQ_CHIP=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_SIM=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_IRQ_MSI_IOMMU=y
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem

CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_INIT=y
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_CONTEXT_TRACKING=y
CONFIG_CONTEXT_TRACKING_IDLE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US=100
# end of Timers subsystem

CONFIG_BPF=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y

#
# BPF subsystem
#
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_BPF_UNPRIV_DEFAULT_OFF=y
CONFIG_USERMODE_DRIVER=y
# CONFIG_BPF_PRELOAD is not set
CONFIG_BPF_LSM=y
# end of BPF subsystem

CONFIG_PREEMPT_BUILD=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPTION=y
CONFIG_PREEMPT_DYNAMIC=y
CONFIG_SCHED_CORE=y

#
# CPU/Task time and stats accounting
#
CONFIG_TICK_CPU_ACCOUNTING=y
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_PSI=y
# CONFIG_PSI_DEFAULT_DISABLED is not set
# end of CPU/Task time and stats accounting

CONFIG_CPU_ISOLATION=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
CONFIG_PREEMPT_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_TREE_SRCU=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_RCU=y
CONFIG_TASKS_RUDE_RCU=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
# end of RCU Subsystem

# CONFIG_IKCONFIG is not set
CONFIG_IKHEADERS=m
CONFIG_LOG_BUF_SHIFT=20
CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
# CONFIG_PRINTK_INDEX is not set
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y

#
# Scheduler features
#
CONFIG_UCLAMP_TASK=y
CONFIG_UCLAMP_BUCKETS_COUNT=5
# end of Scheduler features

CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_GCC11_NO_ARRAY_BOUNDS=y
CONFIG_CC_NO_ARRAY_BOUNDS=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
# CONFIG_CGROUP_FAVOR_DYNMODS is not set
CONFIG_MEMCG=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
# CONFIG_RT_GROUP_SCHED is not set
CONFIG_SCHED_MM_CID=y
CONFIG_UCLAMP_TASK_GROUP=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
CONFIG_CGROUP_MISC=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
CONFIG_RD_ZSTD=y
CONFIG_BOOT_CONFIG=y
# CONFIG_BOOT_CONFIG_FORCE is not set
# CONFIG_BOOT_CONFIG_EMBED is not set
CONFIG_INITRAMFS_PRESERVE_MTIME=y
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_LD_ORPHAN_WARN=y
CONFIG_LD_ORPHAN_WARN_LEVEL="warn"
CONFIG_SYSCTL=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_EXPERT=y
CONFIG_UID16=y
CONFIG_MULTIUSER=y
CONFIG_SGETMASK_SYSCALL=y
CONFIG_SYSFS_SYSCALL=y
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_MEMBARRIER=y
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_SELFTEST is not set
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_GUEST_PERF_EVENTS=y
CONFIG_PC104=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters

CONFIG_SYSTEM_DATA_VERIFICATION=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y

#
# Kexec and crash features
#
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_HAVE_IMA_KEXEC=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_KEXEC_SIG=y
# CONFIG_KEXEC_SIG_FORCE is not set
CONFIG_KEXEC_BZIMAGE_VERIFY_SIG=y
CONFIG_KEXEC_JUMP=y
CONFIG_CRASH_DUMP=y
CONFIG_CRASH_HOTPLUG=y
CONFIG_CRASH_MAX_MEMORY_RANGES=8192
# end of Kexec and crash features
# end of General setup

CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MIN=28
CONFIG_ARCH_MMAP_RND_BITS_MAX=32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_CSUM=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_AUDIT_ARCH=y
CONFIG_KASAN_SHADOW_OFFSET=0xdffffc0000000000
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_DYNAMIC_PHYSICAL_MASK=y
CONFIG_PGTABLE_LEVELS=5
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

#
# Processor type and features
#
CONFIG_SMP=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_GOLDFISH is not set
CONFIG_X86_CPU_RESCTRL=y
CONFIG_X86_EXTENDED_PLATFORM=y
CONFIG_X86_NUMACHIP=y
# CONFIG_X86_VSMP is not set
CONFIG_X86_UV=y
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
CONFIG_X86_INTEL_LPSS=y
CONFIG_X86_AMD_PLATFORM_DEVICE=y
CONFIG_IOSF_MBI=y
CONFIG_IOSF_MBI_DEBUG=y
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_XXL=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
CONFIG_XEN=y
CONFIG_XEN_PV=y
CONFIG_XEN_512GB=y
CONFIG_XEN_PV_SMP=y
CONFIG_XEN_PV_DOM0=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_PVHVM_SMP=y
CONFIG_XEN_PVHVM_GUEST=y
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
CONFIG_XEN_PVH=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PV_MSR_SAFE=y
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
CONFIG_PVH=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_PARAVIRT_CLOCK=y
CONFIG_JAILHOUSE_GUEST=y
CONFIG_ACRN_GUEST=y
CONFIG_INTEL_TDX_GUEST=y
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_CPU_SUP_ZHAOXIN=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
CONFIG_BOOT_VESA_SUPPORT=y
CONFIG_MAXSMP=y
CONFIG_NR_CPUS_RANGE_BEGIN=8192
CONFIG_NR_CPUS_RANGE_END=8192
CONFIG_NR_CPUS_DEFAULT=8192
CONFIG_NR_CPUS=8192
CONFIG_SCHED_CLUSTER=y
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCELOG_LEGACY=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=m

#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=m
CONFIG_PERF_EVENTS_INTEL_CSTATE=m
# CONFIG_PERF_EVENTS_AMD_POWER is not set
CONFIG_PERF_EVENTS_AMD_UNCORE=m
CONFIG_PERF_EVENTS_AMD_BRS=y
# end of Performance monitoring

CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
CONFIG_MICROCODE=y
# CONFIG_MICROCODE_LATE_LOADING is not set
CONFIG_X86_MSR=m
CONFIG_X86_CPUID=m
CONFIG_X86_5LEVEL=y
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
CONFIG_X86_MEM_ENCRYPT=y
CONFIG_AMD_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT is not set
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=10
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=y
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_X86_UMIP=y
CONFIG_CC_HAS_IBT=y
# CONFIG_X86_KERNEL_IBT is not set
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_ON is not set
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_X86_SGX=y
# CONFIG_X86_USER_SHADOW_STACK is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_HANDOVER_PROTOCOL=y
CONFIG_EFI_MIXED=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
CONFIG_ARCH_SUPPORTS_KEXEC=y
CONFIG_ARCH_SUPPORTS_KEXEC_FILE=y
CONFIG_ARCH_SELECTS_KEXEC_FILE=y
CONFIG_ARCH_SUPPORTS_KEXEC_PURGATORY=y
CONFIG_ARCH_SUPPORTS_KEXEC_SIG=y
CONFIG_ARCH_SUPPORTS_KEXEC_SIG_FORCE=y
CONFIG_ARCH_SUPPORTS_KEXEC_BZIMAGE_VERIFY_SIG=y
CONFIG_ARCH_SUPPORTS_KEXEC_JUMP=y
CONFIG_ARCH_SUPPORTS_CRASH_DUMP=y
CONFIG_ARCH_SUPPORTS_CRASH_HOTPLUG=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_X86_NEED_RELOCS=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_RANDOMIZE_MEMORY=y
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
# CONFIG_ADDRESS_MASKING is not set
CONFIG_HOTPLUG_CPU=y
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_XONLY=y
# CONFIG_LEGACY_VSYSCALL_NONE is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
# CONFIG_STRICT_SIGALTSTACK_SIZE is not set
CONFIG_HAVE_LIVEPATCH=y
CONFIG_LIVEPATCH=y
# end of Processor type and features

CONFIG_CC_HAS_SLS=y
CONFIG_CC_HAS_RETURN_THUNK=y
CONFIG_CC_HAS_ENTRY_PADDING=y
CONFIG_FUNCTION_PADDING_CFI=11
CONFIG_FUNCTION_PADDING_BYTES=16
# CONFIG_SPECULATION_MITIGATIONS is not set
CONFIG_ARCH_HAS_ADD_PAGES=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
# CONFIG_SUSPEND_SKIP_SYNC is not set
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_USERSPACE_AUTOSLEEP is not set
CONFIG_PM_WAKELOCKS=y
CONFIG_PM_WAKELOCKS_LIMIT=100
CONFIG_PM_WAKELOCKS_GC=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_PM_CLK=y
CONFIG_PM_GENERIC_DOMAINS=y
CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
CONFIG_PM_GENERIC_DOMAINS_SLEEP=y
CONFIG_ENERGY_MODEL=y
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
CONFIG_ACPI_TABLE_LIB=y
CONFIG_ACPI_DEBUGGER=y
CONFIG_ACPI_DEBUGGER_USER=y
CONFIG_ACPI_SPCR_TABLE=y
CONFIG_ACPI_FPDT=y
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
CONFIG_ACPI_EC_DEBUGFS=m
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_TAD=m
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_PLATFORM_PROFILE=m
CONFIG_ACPI_CUSTOM_DSDT_FILE=""
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_DEBUG=y
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
CONFIG_ACPI_SBS=m
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
CONFIG_ACPI_BGRT=y
# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set
CONFIG_ACPI_NFIT=m
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ACPI_NUMA=y
CONFIG_ACPI_HMAT=y
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_ACPI_APEI_EINJ=m
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
CONFIG_ACPI_DPTF=y
CONFIG_DPTF_POWER=m
CONFIG_DPTF_PCH_FIVR=m
CONFIG_ACPI_WATCHDOG=y
CONFIG_ACPI_EXTLOG=m
CONFIG_ACPI_ADXL=y
CONFIG_ACPI_CONFIGFS=m
CONFIG_ACPI_PFRUT=m
CONFIG_ACPI_PCC=y
CONFIG_ACPI_FFH=y
CONFIG_PMIC_OPREGION=y
CONFIG_BYTCRC_PMIC_OPREGION=y
CONFIG_CHTCRC_PMIC_OPREGION=y
CONFIG_XPOWER_PMIC_OPREGION=y
CONFIG_BXT_WC_PMIC_OPREGION=y
CONFIG_CHT_WC_PMIC_OPREGION=y
CONFIG_CHT_DC_TI_PMIC_OPREGION=y
CONFIG_TPS68470_PMIC_OPREGION=y
CONFIG_ACPI_VIOT=y
CONFIG_ACPI_PRMT=y
CONFIG_X86_PM_TIMER=y

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y

#
# CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
CONFIG_X86_PCC_CPUFREQ=y
CONFIG_X86_AMD_PSTATE=y
CONFIG_X86_AMD_PSTATE_DEFAULT_MODE=3
# CONFIG_X86_AMD_PSTATE_UT is not set
CONFIG_X86_ACPI_CPUFREQ=y
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=y
CONFIG_X86_AMD_FREQ_SENSITIVITY=m
CONFIG_X86_SPEEDSTEP_CENTRINO=y
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
# end of CPU Frequency scaling

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
CONFIG_CPU_IDLE_GOV_TEO=y
CONFIG_CPU_IDLE_GOV_HALTPOLL=y
CONFIG_HALTPOLL_CPUIDLE=m
# end of CPU Idle

CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options

#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_MMCONF_FAM10H=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
CONFIG_ISA_BUS=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# end of Bus options (PCI etc.)

#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
# CONFIG_X86_X32_ABI is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
# end of Binary Emulations

CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_PFNCACHE=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_DIRTY_RING=y
CONFIG_HAVE_KVM_DIRTY_RING_TSO=y
CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_HAVE_KVM_PM_NOTIFIER=y
CONFIG_KVM_GENERIC_HARDWARE_ENABLING=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_WERROR=y
CONFIG_KVM_INTEL=m
CONFIG_X86_SGX_KVM=y
CONFIG_KVM_AMD=m
CONFIG_KVM_AMD_SEV=y
CONFIG_KVM_SMM=y
CONFIG_KVM_XEN=y
# CONFIG_KVM_PROVE_MMU is not set
CONFIG_KVM_EXTERNAL_WRITE_TRACKING=y
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y
CONFIG_AS_GFNI=y
CONFIG_AS_WRUSS=y

#
# General architecture-dependent options
#
CONFIG_HOTPLUG_SMT=y
CONFIG_HOTPLUG_CORE_SYNC=y
CONFIG_HOTPLUG_CORE_SYNC_DEAD=y
CONFIG_HOTPLUG_CORE_SYNC_FULL=y
CONFIG_HOTPLUG_SPLIT_STARTUP=y
CONFIG_HOTPLUG_PARALLEL=y
CONFIG_GENERIC_ENTRY=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_KRETPROBE_ON_RETHOOK=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_ARCH_HAS_CPU_FINALIZE_INIT=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_ARCH_WANTS_NO_INSTR=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_RUST=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_MMU_GATHER_MERGE_VMAS=y
CONFIG_MMU_LAZY_TLB_REFCOUNT=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_ARCH_HAS_NMI_SAFE_THIS_CPU_OPS=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
# CONFIG_SECCOMP_CACHE_DEBUG is not set
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y
CONFIG_LTO_NONE=y
CONFIG_ARCH_SUPPORTS_CFI_CLANG=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING_USER=y
CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PUD=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_HAVE_ARCH_HUGE_VMALLOC=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_WANT_PMD_MKWRITE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y
CONFIG_SOFTIRQ_ON_OWN_STACK=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_PAGE_SIZE_LESS_THAN_64KB=y
CONFIG_PAGE_SIZE_LESS_THAN_256KB=y
CONFIG_HAVE_OBJTOOL=y
CONFIG_HAVE_JUMP_LABEL_HACK=y
CONFIG_HAVE_NOINSTR_HACK=y
CONFIG_HAVE_NOINSTR_VALIDATION=y
CONFIG_HAVE_UACCESS_VALIDATION=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_ISA_BUS_API=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_ARCH_HAS_CC_PLATFORM=y
CONFIG_HAVE_STATIC_CALL=y
CONFIG_HAVE_STATIC_CALL_INLINE=y
CONFIG_HAVE_PREEMPT_DYNAMIC=y
CONFIG_HAVE_PREEMPT_DYNAMIC_CALL=y
CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y
CONFIG_ARCH_HAS_ELFCORE_COMPAT=y
CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y
CONFIG_DYNAMIC_SIGFRAME=y
CONFIG_HAVE_ARCH_NODE_DEV_GROUP=y
CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling

CONFIG_HAVE_GCC_PLUGINS=y
CONFIG_FUNCTION_ALIGNMENT_4B=y
CONFIG_FUNCTION_ALIGNMENT_16B=y
CONFIG_FUNCTION_ALIGNMENT=16
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
# CONFIG_MODULE_DEBUG is not set
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set
CONFIG_MODVERSIONS=y
CONFIG_ASM_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
# CONFIG_MODULE_SIG_SHA1 is not set
# CONFIG_MODULE_SIG_SHA224 is not set
# CONFIG_MODULE_SIG_SHA256 is not set
# CONFIG_MODULE_SIG_SHA384 is not set
CONFIG_MODULE_SIG_SHA512=y
CONFIG_MODULE_SIG_HASH="sha512"
CONFIG_MODULE_COMPRESS_NONE=y
# CONFIG_MODULE_COMPRESS_GZIP is not set
# CONFIG_MODULE_COMPRESS_XZ is not set
# CONFIG_MODULE_COMPRESS_ZSTD is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
CONFIG_MODPROBE_PATH="/sbin/modprobe"
# CONFIG_TRIM_UNUSED_KSYMS is not set
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLOCK_LEGACY_AUTOLOAD=y
CONFIG_BLK_RQ_ALLOC_TIME=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_CGROUP_PUNT_BIO=y
CONFIG_BLK_DEV_BSG_COMMON=y
CONFIG_BLK_ICQ=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=y
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
CONFIG_BLK_WBT=y
CONFIG_BLK_WBT_MQ=y
# CONFIG_BLK_CGROUP_IOLATENCY is not set
CONFIG_BLK_CGROUP_FC_APPID=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_CGROUP_IOPRIO=y
CONFIG_BLK_DEBUG_FS=y
CONFIG_BLK_DEBUG_FS_ZONED=y
CONFIG_BLK_SED_OPAL=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
CONFIG_AIX_PARTITION=y
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
CONFIG_ATARI_PARTITION=y
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
# CONFIG_LDM_DEBUG is not set
CONFIG_SGI_PARTITION=y
CONFIG_ULTRIX_PARTITION=y
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSV68_PARTITION=y
CONFIG_CMDLINE_PARTITION=y
# end of Partition Types

CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y
CONFIG_BLOCK_HOLDER_DEPRECATED=y
CONFIG_BLK_MQ_STACKING=y

#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
CONFIG_MQ_IOSCHED_KYBER=m
CONFIG_IOSCHED_BFQ=m
CONFIG_BFQ_GROUP_IOSCHED=y
# CONFIG_BFQ_CGROUP_DEBUG is not set
# end of IO Schedulers

CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y

#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
# end of Executable file formats

#
# Memory Management options
#
CONFIG_ZPOOL=y
CONFIG_SWAP=y
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_DEFAULT_ON is not set
# CONFIG_ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
CONFIG_ZBUD=y
CONFIG_Z3FOLD=m
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_STAT is not set
CONFIG_ZSMALLOC_CHAIN_SIZE=8

#
# SLAB allocator options
#
# CONFIG_SLAB_DEPRECATED is not set
CONFIG_SLUB=y
# CONFIG_SLUB_TINY is not set
CONFIG_SLAB_MERGE_DEFAULT=y
CONFIG_SLAB_FREELIST_RANDOM=y
CONFIG_SLAB_FREELIST_HARDENED=y
# CONFIG_SLUB_STATS is not set
CONFIG_SLUB_CPU_PARTIAL=y
# CONFIG_RANDOM_KMALLOC_CACHES is not set
# end of SLAB allocator options

CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP=y
CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_EXCLUSIVE_SYSTEM_RAM=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_MHP_MEMMAP_ON_MEMORY=y
CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_MEMORY_BALLOON=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_DEVICE_MIGRATION=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_HWPOISON_INJECT=m
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_TRANSPARENT_HUGEPAGE=y
# CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
CONFIG_THP_SWAP=y
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
# CONFIG_CMA is not set
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_GENERIC_EARLY_IOREMAP=y
# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set
CONFIG_PAGE_IDLE_FLAG=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y
CONFIG_ARCH_HAS_PTE_DEVMAP=y
CONFIG_ARCH_HAS_ZONE_DMA_SET=y
CONFIG_ZONE_DMA=y
CONFIG_ZONE_DMA32=y
CONFIG_ZONE_DEVICE=y
CONFIG_HMM_MIRROR=y
CONFIG_GET_FREE_REGION=y
CONFIG_DEVICE_PRIVATE=y
CONFIG_VMAP_PFN=y
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_PERCPU_STATS is not set
# CONFIG_GUP_TEST is not set
# CONFIG_DMAPOOL_TEST is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_MAPPING_DIRTY_HELPERS=y
CONFIG_MEMFD_CREATE=y
CONFIG_SECRETMEM=y
CONFIG_ANON_VMA_NAME=y
CONFIG_USERFAULTFD=y
CONFIG_HAVE_ARCH_USERFAULTFD_WP=y
CONFIG_HAVE_ARCH_USERFAULTFD_MINOR=y
CONFIG_PTE_MARKER_UFFD_WP=y
CONFIG_LRU_GEN=y
# CONFIG_LRU_GEN_ENABLED is not set
# CONFIG_LRU_GEN_STATS is not set
CONFIG_ARCH_SUPPORTS_PER_VMA_LOCK=y
CONFIG_PER_VMA_LOCK=y
CONFIG_LOCK_MM_AND_FIND_VMA=y

#
# Data Access Monitoring
#
# CONFIG_DAMON is not set
# end of Data Access Monitoring
# end of Memory Management options

CONFIG_NET=y
CONFIG_WANT_COMPAT_NETLINK_MESSAGES=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_NET_XGRESS=y
CONFIG_NET_REDIRECT=y
CONFIG_SKB_EXTENSIONS=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_SCM=y
CONFIG_AF_UNIX_OOB=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
CONFIG_TLS_DEVICE=y
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_OFFLOAD=y
CONFIG_XFRM_ALGO=m
CONFIG_XFRM_USER=m
CONFIG_XFRM_USER_COMPAT=m
CONFIG_XFRM_INTERFACE=m
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_AH=m
CONFIG_XFRM_ESP=m
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
# CONFIG_NET_KEY_MIGRATE is not set
CONFIG_XFRM_ESPINTCP=y
CONFIG_SMC=m
CONFIG_SMC_DIAG=m
CONFIG_XDP_SOCKETS=y
CONFIG_XDP_SOCKETS_DIAG=m
CONFIG_NET_HANDSHAKE=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
# CONFIG_IP_PNP is not set
CONFIG_NET_IPIP=m
CONFIG_NET_IPGRE_DEMUX=m
CONFIG_NET_IP_TUNNEL=m
CONFIG_NET_IPGRE=m
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=m
CONFIG_NET_UDP_TUNNEL=m
CONFIG_NET_FOU=m
CONFIG_NET_FOU_IP_TUNNELS=y
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_ESP_OFFLOAD=m
CONFIG_INET_ESPINTCP=y
CONFIG_INET_IPCOMP=m
CONFIG_INET_TABLE_PERTURB_ORDER=16
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
CONFIG_INET_RAW_DIAG=m
CONFIG_INET_DIAG_DESTROY=y
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_NV=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
CONFIG_TCP_CONG_CDG=m
CONFIG_TCP_CONG_BBR=m
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_ESP_OFFLOAD=m
CONFIG_INET6_ESPINTCP=y
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
CONFIG_IPV6_ILA=m
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
CONFIG_IPV6_VTI=m
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_GRE=m
CONFIG_IPV6_FOU=m
CONFIG_IPV6_FOU_TUNNEL=m
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_IPV6_SEG6_LWTUNNEL=y
CONFIG_IPV6_SEG6_HMAC=y
CONFIG_IPV6_SEG6_BPF=y
# CONFIG_IPV6_RPL_LWTUNNEL is not set
CONFIG_IPV6_IOAM6_LWTUNNEL=y
CONFIG_NETLABEL=y
CONFIG_MPTCP=y
CONFIG_INET_MPTCP_DIAG=m
CONFIG_MPTCP_IPV6=y
CONFIG_NETWORK_SECMARK=y
CONFIG_NET_PTP_CLASSIFY=y
CONFIG_NETWORK_PHY_TIMESTAMPING=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=m

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_EGRESS=y
CONFIG_NETFILTER_SKIP_EGRESS=y
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
CONFIG_NETFILTER_BPF_LINK=y
CONFIG_NETFILTER_NETLINK_HOOK=m
CONFIG_NETFILTER_NETLINK_ACCT=m
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NETFILTER_NETLINK_OSF=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_LOG_SYSLOG=m
CONFIG_NETFILTER_CONNCOUNT=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
# CONFIG_NF_CONNTRACK_PROCFS is not set
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CONNTRACK_OVS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
CONFIG_NF_CT_NETLINK_HELPER=m
CONFIG_NETFILTER_NETLINK_GLUE_CT=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NF_NAT_OVS=y
CONFIG_NETFILTER_SYNPROXY=m
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
CONFIG_NFT_MASQ=m
CONFIG_NFT_REDIR=m
CONFIG_NFT_NAT=m
CONFIG_NFT_TUNNEL=m
CONFIG_NFT_QUEUE=m
CONFIG_NFT_QUOTA=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_REJECT_INET=m
CONFIG_NFT_COMPAT=m
CONFIG_NFT_HASH=m
CONFIG_NFT_FIB=m
CONFIG_NFT_FIB_INET=m
CONFIG_NFT_XFRM=m
CONFIG_NFT_SOCKET=m
CONFIG_NFT_OSF=m
CONFIG_NFT_TPROXY=m
CONFIG_NFT_SYNPROXY=m
CONFIG_NF_DUP_NETDEV=m
CONFIG_NFT_DUP_NETDEV=m
CONFIG_NFT_FWD_NETDEV=m
CONFIG_NFT_FIB_NETDEV=m
CONFIG_NFT_REJECT_NETDEV=m
CONFIG_NF_FLOW_TABLE_INET=m
CONFIG_NF_FLOW_TABLE=m
# CONFIG_NF_FLOW_TABLE_PROCFS is not set
CONFIG_NETFILTER_XTABLES=m
CONFIG_NETFILTER_XTABLES_COMPAT=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LED=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
CONFIG_NETFILTER_XT_MATCH_IPCOMP=m
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_NETFILTER_XT_MATCH_L2TP=m
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_NFACCT=m
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
CONFIG_NETFILTER_XT_MATCH_TIME=m
CONFIG_NETFILTER_XT_MATCH_U32=m
# end of Core Netfilter Configuration

CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_FO=m
CONFIG_IP_VS_OVF=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
CONFIG_IP_VS_MH=m
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m
CONFIG_IP_VS_TWOS=m

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_DUP_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_SYNPROXY=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
CONFIG_IP6_NF_MATCH_SRH=m
CONFIG_IP6_NF_TARGET_HL=m
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_TARGET_SYNPROXY=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_TABLES_BRIDGE=m
CONFIG_NFT_BRIDGE_META=m
CONFIG_NFT_BRIDGE_REJECT=m
CONFIG_NF_CONNTRACK_BRIDGE=m
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
CONFIG_BPFILTER=y
CONFIG_BPFILTER_UMH=m
CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m

#
# DCCP CCIDs Configuration
#
# CONFIG_IP_DCCP_CCID2_DEBUG is not set
# CONFIG_IP_DCCP_CCID3 is not set
# end of DCCP CCIDs Configuration

#
# DCCP Kernel Hacking
#
# CONFIG_IP_DCCP_DEBUG is not set
# end of DCCP Kernel Hacking

CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
CONFIG_RDS=m
CONFIG_RDS_RDMA=m
CONFIG_RDS_TCP=m
# CONFIG_RDS_DEBUG is not set
CONFIG_TIPC=m
CONFIG_TIPC_MEDIA_IB=y
CONFIG_TIPC_MEDIA_UDP=y
CONFIG_TIPC_CRYPTO=y
CONFIG_TIPC_DIAG=m
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
CONFIG_ATM_MPOA=m
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=m
CONFIG_GARP=m
CONFIG_MRP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
CONFIG_BRIDGE_MRP=y
CONFIG_BRIDGE_CFM=y
CONFIG_NET_DSA=m
CONFIG_NET_DSA_TAG_NONE=m
CONFIG_NET_DSA_TAG_AR9331=m
CONFIG_NET_DSA_TAG_BRCM_COMMON=m
CONFIG_NET_DSA_TAG_BRCM=m
CONFIG_NET_DSA_TAG_BRCM_LEGACY=m
CONFIG_NET_DSA_TAG_BRCM_PREPEND=m
CONFIG_NET_DSA_TAG_HELLCREEK=m
CONFIG_NET_DSA_TAG_GSWIP=m
CONFIG_NET_DSA_TAG_DSA_COMMON=m
CONFIG_NET_DSA_TAG_DSA=m
CONFIG_NET_DSA_TAG_EDSA=m
CONFIG_NET_DSA_TAG_MTK=m
CONFIG_NET_DSA_TAG_KSZ=m
CONFIG_NET_DSA_TAG_OCELOT=m
CONFIG_NET_DSA_TAG_OCELOT_8021Q=m
CONFIG_NET_DSA_TAG_QCA=m
CONFIG_NET_DSA_TAG_RTL4_A=m
CONFIG_NET_DSA_TAG_RTL8_4=m
CONFIG_NET_DSA_TAG_RZN1_A5PSW=m
CONFIG_NET_DSA_TAG_LAN9303=m
CONFIG_NET_DSA_TAG_SJA1105=m
CONFIG_NET_DSA_TAG_TRAILER=m
CONFIG_NET_DSA_TAG_XRS700X=m
CONFIG_VLAN_8021Q=m
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
CONFIG_LLC=m
CONFIG_LLC2=m
CONFIG_ATALK=m
CONFIG_DEV_APPLETALK=m
# CONFIG_IPDDP is not set
CONFIG_X25=m
CONFIG_LAPB=m
CONFIG_PHONET=m
CONFIG_6LOWPAN=m
# CONFIG_6LOWPAN_DEBUGFS is not set
CONFIG_6LOWPAN_NHC=m
CONFIG_6LOWPAN_NHC_DEST=m
CONFIG_6LOWPAN_NHC_FRAGMENT=m
CONFIG_6LOWPAN_NHC_HOP=m
CONFIG_6LOWPAN_NHC_IPV6=m
CONFIG_6LOWPAN_NHC_MOBILITY=m
CONFIG_6LOWPAN_NHC_ROUTING=m
CONFIG_6LOWPAN_NHC_UDP=m
# CONFIG_6LOWPAN_GHC_EXT_HDR_HOP is not set
# CONFIG_6LOWPAN_GHC_UDP is not set
# CONFIG_6LOWPAN_GHC_ICMPV6 is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_DEST is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE is not set
CONFIG_IEEE802154=m
# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
CONFIG_IEEE802154_SOCKET=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
CONFIG_NET_SCH_CBS=m
CONFIG_NET_SCH_ETF=m
CONFIG_NET_SCH_MQPRIO_LIB=m
CONFIG_NET_SCH_TAPRIO=m
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
CONFIG_NET_SCH_SKBPRIO=m
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_CAKE=m
CONFIG_NET_SCH_FQ=m
CONFIG_NET_SCH_HHF=m
CONFIG_NET_SCH_PIE=m
CONFIG_NET_SCH_FQ_PIE=m
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
CONFIG_NET_SCH_ETS=m
# CONFIG_NET_SCH_DEFAULT is not set

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
# CONFIG_CLS_U32_PERF is not set
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=m
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
CONFIG_NET_EMATCH_CANID=m
CONFIG_NET_EMATCH_IPSET=m
CONFIG_NET_EMATCH_IPT=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
CONFIG_NET_ACT_IPT=m
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
CONFIG_NET_ACT_MPLS=m
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_ACT_BPF=m
CONFIG_NET_ACT_CONNMARK=m
CONFIG_NET_ACT_CTINFO=m
CONFIG_NET_ACT_SKBMOD=m
# CONFIG_NET_ACT_IFE is not set
CONFIG_NET_ACT_TUNNEL_KEY=m
CONFIG_NET_ACT_CT=m
CONFIG_NET_ACT_GATE=m
CONFIG_NET_TC_SKB_EXT=y
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
# CONFIG_BATMAN_ADV_BATMAN_V is not set
CONFIG_BATMAN_ADV_BLA=y
CONFIG_BATMAN_ADV_DAT=y
CONFIG_BATMAN_ADV_NC=y
CONFIG_BATMAN_ADV_MCAST=y
# CONFIG_BATMAN_ADV_DEBUG is not set
# CONFIG_BATMAN_ADV_TRACING is not set
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_GRE=m
CONFIG_OPENVSWITCH_VXLAN=m
CONFIG_OPENVSWITCH_GENEVE=m
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_VMWARE_VMCI_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_HYPERV_VSOCKETS=m
CONFIG_NETLINK_DIAG=m
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=m
CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_HSR=m
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
CONFIG_QRTR=m
CONFIG_QRTR_SMD=m
CONFIG_QRTR_TUN=m
CONFIG_QRTR_MHI=m
CONFIG_NET_NCSI=y
CONFIG_NCSI_OEM_CMD_GET_MAC=y
# CONFIG_NCSI_OEM_CMD_KEEP_PHY is not set
CONFIG_PCPU_DEV_REFCNT=y
CONFIG_MAX_SKB_FRAGS=17
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_SOCK_RX_QUEUE_MAPPING=y
CONFIG_XPS=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options

CONFIG_HAMRADIO=y

#
# Packet Radio protocols
#
CONFIG_AX25=m
CONFIG_AX25_DAMA_SLAVE=y
CONFIG_NETROM=m
CONFIG_ROSE=m

#
# AX.25 network device drivers
#
CONFIG_MKISS=m
CONFIG_6PACK=m
CONFIG_BPQETHER=m
CONFIG_BAYCOM_SER_FDX=m
CONFIG_BAYCOM_SER_HDX=m
CONFIG_BAYCOM_PAR=m
CONFIG_YAM=m
# end of AX.25 network device drivers

CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
CONFIG_CAN_J1939=m
CONFIG_CAN_ISOTP=m
CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_CMTP=m
CONFIG_BT_HIDP=m
CONFIG_BT_HS=y
CONFIG_BT_LE=y
CONFIG_BT_LE_L2CAP_ECRED=y
CONFIG_BT_6LOWPAN=m
CONFIG_BT_LEDS=y
CONFIG_BT_MSFTEXT=y
CONFIG_BT_AOSPEXT=y
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set

#
# Bluetooth device drivers
#
CONFIG_BT_INTEL=m
CONFIG_BT_BCM=m
CONFIG_BT_RTL=m
CONFIG_BT_QCA=m
CONFIG_BT_MTK=m
CONFIG_BT_HCIBTUSB=m
CONFIG_BT_HCIBTUSB_AUTOSUSPEND=y
CONFIG_BT_HCIBTUSB_POLL_SYNC=y
CONFIG_BT_HCIBTUSB_BCM=y
CONFIG_BT_HCIBTUSB_MTK=y
CONFIG_BT_HCIBTUSB_RTL=y
CONFIG_BT_HCIBTSDIO=m
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_SERDEV=y
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_NOKIA=m
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
CONFIG_BT_HCIUART_LL=y
CONFIG_BT_HCIUART_3WIRE=y
CONFIG_BT_HCIUART_INTEL=y
CONFIG_BT_HCIUART_BCM=y
CONFIG_BT_HCIUART_RTL=y
CONFIG_BT_HCIUART_QCA=y
CONFIG_BT_HCIUART_AG6XX=y
CONFIG_BT_HCIUART_MRVL=y
CONFIG_BT_HCIBCM203X=m
CONFIG_BT_HCIBCM4377=m
CONFIG_BT_HCIBPA10X=m
CONFIG_BT_HCIBFUSB=m
CONFIG_BT_HCIDTL1=m
CONFIG_BT_HCIBT3C=m
CONFIG_BT_HCIBLUECARD=m
CONFIG_BT_HCIVHCI=m
CONFIG_BT_MRVL=m
CONFIG_BT_MRVL_SDIO=m
CONFIG_BT_ATH3K=m
CONFIG_BT_MTKSDIO=m
CONFIG_BT_MTKUART=m
CONFIG_BT_HCIRSI=m
CONFIG_BT_VIRTIO=m
# CONFIG_BT_NXPUART is not set
# end of Bluetooth device drivers

CONFIG_AF_RXRPC=m
CONFIG_AF_RXRPC_IPV6=y
# CONFIG_AF_RXRPC_INJECT_LOSS is not set
# CONFIG_AF_RXRPC_INJECT_RX_DELAY is not set
# CONFIG_AF_RXRPC_DEBUG is not set
CONFIG_RXKAD=y
CONFIG_RXPERF=m
CONFIG_AF_KCM=m
CONFIG_STREAM_PARSER=y
CONFIG_MCTP=y
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_WIRELESS_EXT=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_WEXT_SPY=y
CONFIG_WEXT_PRIV=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
CONFIG_CFG80211_DEBUGFS=y
CONFIG_CFG80211_CRDA_SUPPORT=y
CONFIG_CFG80211_WEXT=y
CONFIG_CFG80211_WEXT_EXPORT=y
CONFIG_LIB80211=m
CONFIG_LIB80211_CRYPT_WEP=m
CONFIG_LIB80211_CRYPT_CCMP=m
CONFIG_LIB80211_CRYPT_TKIP=m
# CONFIG_LIB80211_DEBUG is not set
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_MESH=y
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
CONFIG_MAC80211_MESSAGE_TRACING=y
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
CONFIG_RFKILL=y
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
CONFIG_RFKILL_GPIO=m
CONFIG_NET_9P=m
CONFIG_NET_9P_FD=m
CONFIG_NET_9P_VIRTIO=m
CONFIG_NET_9P_XEN=m
CONFIG_NET_9P_RDMA=m
# CONFIG_NET_9P_DEBUG is not set
CONFIG_CAIF=m
# CONFIG_CAIF_DEBUG is not set
CONFIG_CAIF_NETDEV=m
CONFIG_CAIF_USB=m
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
CONFIG_NFC=m
CONFIG_NFC_DIGITAL=m
CONFIG_NFC_NCI=m
CONFIG_NFC_NCI_SPI=m
CONFIG_NFC_NCI_UART=m
CONFIG_NFC_HCI=m
CONFIG_NFC_SHDLC=y

#
# Near Field Communication (NFC) devices
#
CONFIG_NFC_TRF7970A=m
CONFIG_NFC_MEI_PHY=m
CONFIG_NFC_SIM=m
CONFIG_NFC_PORT100=m
CONFIG_NFC_VIRTUAL_NCI=m
CONFIG_NFC_FDP=m
CONFIG_NFC_FDP_I2C=m
CONFIG_NFC_PN544=m
CONFIG_NFC_PN544_I2C=m
CONFIG_NFC_PN544_MEI=m
CONFIG_NFC_PN533=m
CONFIG_NFC_PN533_USB=m
CONFIG_NFC_PN533_I2C=m
CONFIG_NFC_PN532_UART=m
CONFIG_NFC_MICROREAD=m
CONFIG_NFC_MICROREAD_I2C=m
CONFIG_NFC_MICROREAD_MEI=m
CONFIG_NFC_MRVL=m
CONFIG_NFC_MRVL_USB=m
CONFIG_NFC_MRVL_UART=m
CONFIG_NFC_MRVL_I2C=m
CONFIG_NFC_MRVL_SPI=m
CONFIG_NFC_ST21NFCA=m
CONFIG_NFC_ST21NFCA_I2C=m
CONFIG_NFC_ST_NCI=m
CONFIG_NFC_ST_NCI_I2C=m
CONFIG_NFC_ST_NCI_SPI=m
CONFIG_NFC_NXP_NCI=m
CONFIG_NFC_NXP_NCI_I2C=m
CONFIG_NFC_S3FWRN5=m
CONFIG_NFC_S3FWRN5_I2C=m
CONFIG_NFC_S3FWRN82_UART=m
CONFIG_NFC_ST95HF=m
# end of Near Field Communication (NFC) devices

CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SELFTESTS=y
CONFIG_NET_SOCK_MSG=y
CONFIG_NET_DEVLINK=y
CONFIG_PAGE_POOL=y
# CONFIG_PAGE_POOL_STATS is not set
CONFIG_FAILOVER=m
CONFIG_ETHTOOL_NETLINK=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=y
CONFIG_EISA=y
CONFIG_EISA_VLB_PRIMING=y
CONFIG_EISA_PCI_EISA=y
CONFIG_EISA_VIRTUAL_ROOT=y
CONFIG_EISA_NAMES=y
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
# CONFIG_PCIEAER_INJECT is not set
# CONFIG_PCIE_ECRC is not set
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_PCIE_DPC=y
CONFIG_PCIE_PTM=y
CONFIG_PCIE_EDR=y
CONFIG_PCI_MSI=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
CONFIG_PCI_REALLOC_ENABLE_AUTO=y
CONFIG_PCI_STUB=m
CONFIG_PCI_PF_STUB=m
CONFIG_XEN_PCIDEV_FRONTEND=m
CONFIG_PCI_ATS=y
CONFIG_PCI_DOE=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
CONFIG_PCI_P2PDMA=y
CONFIG_PCI_LABEL=y
CONFIG_PCI_HYPERV=m
# CONFIG_PCIE_BUS_TUNE_OFF is not set
CONFIG_PCIE_BUS_DEFAULT=y
# CONFIG_PCIE_BUS_SAFE is not set
# CONFIG_PCIE_BUS_PERFORMANCE is not set
# CONFIG_PCIE_BUS_PEER2PEER is not set
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=m
CONFIG_HOTPLUG_PCI_CPCI=y
CONFIG_HOTPLUG_PCI_CPCI_ZT5550=m
CONFIG_HOTPLUG_PCI_CPCI_GENERIC=m
CONFIG_HOTPLUG_PCI_SHPC=y

#
# PCI controller drivers
#
CONFIG_VMD=m
CONFIG_PCI_HYPERV_INTERFACE=m

#
# Cadence-based PCIe controllers
#
# end of Cadence-based PCIe controllers

#
# DesignWare-based PCIe controllers
#
CONFIG_PCIE_DW=y
CONFIG_PCIE_DW_HOST=y
CONFIG_PCIE_DW_EP=y
# CONFIG_PCI_MESON is not set
CONFIG_PCIE_DW_PLAT=y
CONFIG_PCIE_DW_PLAT_HOST=y
CONFIG_PCIE_DW_PLAT_EP=y
# end of DesignWare-based PCIe controllers

#
# Mobiveil-based PCIe controllers
#
# end of Mobiveil-based PCIe controllers
# end of PCI controller drivers

#
# PCI Endpoint
#
CONFIG_PCI_ENDPOINT=y
CONFIG_PCI_ENDPOINT_CONFIGFS=y
# CONFIG_PCI_EPF_TEST is not set
CONFIG_PCI_EPF_NTB=m
CONFIG_PCI_EPF_VNTB=m
# CONFIG_PCI_EPF_MHI is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
CONFIG_PCI_SW_SWITCHTEC=m
# end of PCI switch controller drivers

CONFIG_CXL_BUS=m
CONFIG_CXL_PCI=m
# CONFIG_CXL_MEM_RAW_COMMANDS is not set
CONFIG_CXL_ACPI=m
CONFIG_CXL_PMEM=m
CONFIG_CXL_MEM=m
CONFIG_CXL_PORT=m
CONFIG_CXL_SUSPEND=y
CONFIG_CXL_REGION=y
# CONFIG_CXL_REGION_INVALIDATION_TEST is not set
CONFIG_CXL_PMU=m
CONFIG_PCCARD=m
CONFIG_PCMCIA=m
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=m
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
CONFIG_PD6729=m
CONFIG_I82092=m
CONFIG_PCCARD_NONSTATIC=y
CONFIG_RAPIDIO=y
CONFIG_RAPIDIO_TSI721=m
CONFIG_RAPIDIO_DISC_TIMEOUT=30
# CONFIG_RAPIDIO_ENABLE_RX_TX_PORTS is not set
CONFIG_RAPIDIO_DMA_ENGINE=y
# CONFIG_RAPIDIO_DEBUG is not set
CONFIG_RAPIDIO_ENUM_BASIC=m
CONFIG_RAPIDIO_CHMAN=m
CONFIG_RAPIDIO_MPORT_CDEV=m

#
# RapidIO Switch drivers
#
CONFIG_RAPIDIO_CPS_XX=m
CONFIG_RAPIDIO_CPS_GEN2=m
CONFIG_RAPIDIO_RXS_GEN3=m
# end of RapidIO Switch drivers

#
# Generic Driver Options
#
CONFIG_AUXILIARY_BUS=y
CONFIG_UEVENT_HELPER=y
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DEVTMPFS_SAFE=y
# CONFIG_STANDALONE is not set
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_DEBUG=y
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_FW_LOADER_SYSFS=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
CONFIG_FW_LOADER_COMPRESS=y
CONFIG_FW_LOADER_COMPRESS_XZ=y
CONFIG_FW_LOADER_COMPRESS_ZSTD=y
CONFIG_FW_CACHE=y
CONFIG_FW_UPLOAD=y
# end of Firmware loader

CONFIG_WANT_DEV_COREDUMP=y
CONFIG_ALLOW_DEV_COREDUMP=y
CONFIG_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
CONFIG_HMEM_REPORTING=y
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=y
CONFIG_REGMAP_SLIMBUS=m
CONFIG_REGMAP_SPI=y
CONFIG_REGMAP_SPMI=m
CONFIG_REGMAP_W1=m
CONFIG_REGMAP_MMIO=y
CONFIG_REGMAP_IRQ=y
CONFIG_REGMAP_SOUNDWIRE=m
CONFIG_REGMAP_SOUNDWIRE_MBQ=m
CONFIG_REGMAP_SCCB=m
CONFIG_REGMAP_I3C=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# CONFIG_FW_DEVLINK_SYNC_STATE_TIMEOUT is not set
# end of Generic Driver Options

#
# Bus devices
#
CONFIG_MHI_BUS=m
# CONFIG_MHI_BUS_DEBUG is not set
CONFIG_MHI_BUS_PCI_GENERIC=m
CONFIG_MHI_BUS_EP=m
# end of Bus devices

#
# Cache Drivers
#
# end of Cache Drivers

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y

#
# Firmware Drivers
#

#
# ARM System Control and Management Interface Protocol
#
# end of ARM System Control and Management Interface Protocol

CONFIG_EDD=y
CONFIG_EDD_OFF=y
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=m
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
CONFIG_FW_CFG_SYSFS=m
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
CONFIG_SYSFB=y
# CONFIG_SYSFB_SIMPLEFB is not set
CONFIG_FW_CS_DSP=m
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_ESRT=y
CONFIG_EFI_VARS_PSTORE=m
# CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE is not set
CONFIG_EFI_SOFT_RESERVE=y
CONFIG_EFI_DXE_MEM_ATTRIBUTES=y
CONFIG_EFI_RUNTIME_WRAPPERS=y
CONFIG_EFI_BOOTLOADER_CONTROL=m
CONFIG_EFI_CAPSULE_LOADER=m
CONFIG_EFI_TEST=m
CONFIG_EFI_DEV_PATH_PARSER=y
CONFIG_APPLE_PROPERTIES=y
CONFIG_RESET_ATTACK_MITIGATION=y
CONFIG_EFI_RCI2_TABLE=y
# CONFIG_EFI_DISABLE_PCI_DMA is not set
CONFIG_EFI_EARLYCON=y
CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
# CONFIG_EFI_DISABLE_RUNTIME is not set
CONFIG_EFI_COCO_SECRET=y
CONFIG_UNACCEPTED_MEMORY=y
CONFIG_EFI_EMBEDDED_FIRMWARE=y
# end of EFI (Extensible Firmware Interface) Support

CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y

#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers

CONFIG_GNSS=m
CONFIG_GNSS_SERIAL=m
CONFIG_GNSS_MTK_SERIAL=m
CONFIG_GNSS_SIRF_SERIAL=m
CONFIG_GNSS_UBX_SERIAL=m
CONFIG_GNSS_USB=m
CONFIG_MTD=m
# CONFIG_MTD_TESTS is not set

#
# Partition parsers
#
CONFIG_MTD_AR7_PARTS=m
CONFIG_MTD_CMDLINE_PARTS=m
CONFIG_MTD_REDBOOT_PARTS=m
CONFIG_MTD_REDBOOT_DIRECTORY_BLOCK=-1
# CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED is not set
# CONFIG_MTD_REDBOOT_PARTS_READONLY is not set
# end of Partition parsers

#
# User Modules And Translation Layers
#
CONFIG_MTD_BLKDEVS=m
CONFIG_MTD_BLOCK=m
CONFIG_MTD_BLOCK_RO=m

#
# Note that in some cases UBI block is preferred. See MTD_UBI_BLOCK.
#
CONFIG_FTL=m
CONFIG_NFTL=m
CONFIG_NFTL_RW=y
CONFIG_INFTL=m
CONFIG_RFD_FTL=m
CONFIG_SSFDC=m
CONFIG_SM_FTL=m
CONFIG_MTD_OOPS=m
CONFIG_MTD_PSTORE=m
CONFIG_MTD_SWAP=m
# CONFIG_MTD_PARTITIONED_MASTER is not set

#
# RAM/ROM/Flash chip drivers
#
CONFIG_MTD_CFI=m
CONFIG_MTD_JEDECPROBE=m
CONFIG_MTD_GEN_PROBE=m
# CONFIG_MTD_CFI_ADV_OPTIONS is not set
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
CONFIG_MTD_CFI_INTELEXT=m
CONFIG_MTD_CFI_AMDSTD=m
CONFIG_MTD_CFI_STAA=m
CONFIG_MTD_CFI_UTIL=m
CONFIG_MTD_RAM=m
CONFIG_MTD_ROM=m
CONFIG_MTD_ABSENT=m
# end of RAM/ROM/Flash chip drivers

#
# Mapping drivers for chip access
#
CONFIG_MTD_COMPLEX_MAPPINGS=y
CONFIG_MTD_PHYSMAP=m
# CONFIG_MTD_PHYSMAP_COMPAT is not set
CONFIG_MTD_PHYSMAP_GPIO_ADDR=y
CONFIG_MTD_SBC_GXX=m
CONFIG_MTD_AMD76XROM=m
CONFIG_MTD_ICHXROM=m
CONFIG_MTD_ESB2ROM=m
CONFIG_MTD_CK804XROM=m
CONFIG_MTD_SCB2_FLASH=m
CONFIG_MTD_NETtel=m
CONFIG_MTD_L440GX=m
CONFIG_MTD_PCI=m
CONFIG_MTD_PCMCIA=m
# CONFIG_MTD_PCMCIA_ANONYMOUS is not set
CONFIG_MTD_INTEL_VR_NOR=m
CONFIG_MTD_PLATRAM=m
# end of Mapping drivers for chip access

#
# Self-contained MTD device drivers
#
CONFIG_MTD_PMC551=m
# CONFIG_MTD_PMC551_BUGFIX is not set
# CONFIG_MTD_PMC551_DEBUG is not set
CONFIG_MTD_DATAFLASH=m
# CONFIG_MTD_DATAFLASH_WRITE_VERIFY is not set
CONFIG_MTD_DATAFLASH_OTP=y
CONFIG_MTD_MCHP23K256=m
CONFIG_MTD_MCHP48L640=m
CONFIG_MTD_SST25L=m
CONFIG_MTD_SLRAM=m
CONFIG_MTD_PHRAM=m
CONFIG_MTD_MTDRAM=m
CONFIG_MTDRAM_TOTAL_SIZE=4096
CONFIG_MTDRAM_ERASE_SIZE=128
CONFIG_MTD_BLOCK2MTD=m

#
# Disk-On-Chip Device Drivers
#
# CONFIG_MTD_DOCG3 is not set
# end of Self-contained MTD device drivers

#
# NAND
#
CONFIG_MTD_NAND_CORE=m
CONFIG_MTD_ONENAND=m
CONFIG_MTD_ONENAND_VERIFY_WRITE=y
CONFIG_MTD_ONENAND_GENERIC=m
# CONFIG_MTD_ONENAND_OTP is not set
CONFIG_MTD_ONENAND_2X_PROGRAM=y
CONFIG_MTD_RAW_NAND=m

#
# Raw/parallel NAND flash controllers
#
CONFIG_MTD_NAND_DENALI=m
CONFIG_MTD_NAND_DENALI_PCI=m
CONFIG_MTD_NAND_CAFE=m
CONFIG_MTD_NAND_MXIC=m
CONFIG_MTD_NAND_GPIO=m
CONFIG_MTD_NAND_PLATFORM=m
CONFIG_MTD_NAND_ARASAN=m

#
# Misc
#
CONFIG_MTD_SM_COMMON=m
CONFIG_MTD_NAND_NANDSIM=m
CONFIG_MTD_NAND_RICOH=m
CONFIG_MTD_NAND_DISKONCHIP=m
# CONFIG_MTD_NAND_DISKONCHIP_PROBE_ADVANCED is not set
CONFIG_MTD_NAND_DISKONCHIP_PROBE_ADDRESS=0
# CONFIG_MTD_NAND_DISKONCHIP_BBTWRITE is not set
CONFIG_MTD_SPI_NAND=m

#
# ECC engine support
#
CONFIG_MTD_NAND_ECC=y
CONFIG_MTD_NAND_ECC_SW_HAMMING=y
# CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC is not set
CONFIG_MTD_NAND_ECC_SW_BCH=y
CONFIG_MTD_NAND_ECC_MXIC=y
# end of ECC engine support
# end of NAND

#
# LPDDR & LPDDR2 PCM memory drivers
#
CONFIG_MTD_LPDDR=m
CONFIG_MTD_QINFO_PROBE=m
# end of LPDDR & LPDDR2 PCM memory drivers

CONFIG_MTD_SPI_NOR=m
CONFIG_MTD_SPI_NOR_USE_4K_SECTORS=y
# CONFIG_MTD_SPI_NOR_SWP_DISABLE is not set
CONFIG_MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE=y
# CONFIG_MTD_SPI_NOR_SWP_KEEP is not set
CONFIG_MTD_UBI=m
CONFIG_MTD_UBI_WL_THRESHOLD=4096
CONFIG_MTD_UBI_BEB_LIMIT=20
CONFIG_MTD_UBI_FASTMAP=y
CONFIG_MTD_UBI_GLUEBI=m
CONFIG_MTD_UBI_BLOCK=y
CONFIG_MTD_HYPERBUS=m
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_SERIAL=m
CONFIG_PARPORT_PC_FIFO=y
# CONFIG_PARPORT_PC_SUPERIO is not set
CONFIG_PARPORT_PC_PCMCIA=m
CONFIG_PARPORT_1284=y
CONFIG_PARPORT_NOT_PC=y
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_FD=m
# CONFIG_BLK_DEV_FD_RAWCMD is not set
CONFIG_CDROM=y
CONFIG_BLK_DEV_PCIESSD_MTIP32XX=m
CONFIG_ZRAM=m
CONFIG_ZRAM_DEF_COMP_LZORLE=y
# CONFIG_ZRAM_DEF_COMP_ZSTD is not set
# CONFIG_ZRAM_DEF_COMP_LZ4 is not set
# CONFIG_ZRAM_DEF_COMP_LZO is not set
# CONFIG_ZRAM_DEF_COMP_LZ4HC is not set
# CONFIG_ZRAM_DEF_COMP_842 is not set
CONFIG_ZRAM_DEF_COMP="lzo-rle"
CONFIG_ZRAM_WRITEBACK=y
CONFIG_ZRAM_MEMORY_TRACKING=y
# CONFIG_ZRAM_MULTI_COMP is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
CONFIG_BLK_DEV_DRBD=m
# CONFIG_DRBD_FAULT_INJECTION is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
# CONFIG_CDROM_PKTCDVD is not set
CONFIG_ATA_OVER_ETH=m
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=m
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_RBD=m
# CONFIG_BLK_DEV_UBLK is not set
CONFIG_BLK_DEV_RNBD=y
CONFIG_BLK_DEV_RNBD_CLIENT=m
CONFIG_BLK_DEV_RNBD_SERVER=m

#
# NVME Support
#
CONFIG_NVME_COMMON=m
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_VERBOSE_ERRORS is not set
CONFIG_NVME_HWMON=y
CONFIG_NVME_FABRICS=m
CONFIG_NVME_RDMA=m
CONFIG_NVME_FC=m
CONFIG_NVME_TCP=m
CONFIG_NVME_AUTH=y
CONFIG_NVME_TARGET=m
CONFIG_NVME_TARGET_PASSTHRU=y
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_RDMA=m
CONFIG_NVME_TARGET_FC=m
# CONFIG_NVME_TARGET_FCLOOP is not set
CONFIG_NVME_TARGET_TCP=m
CONFIG_NVME_TARGET_AUTH=y
# end of NVME Support

#
# Misc devices
#
CONFIG_SENSORS_LIS3LV02D=m
CONFIG_AD525X_DPOT=m
CONFIG_AD525X_DPOT_I2C=m
CONFIG_AD525X_DPOT_SPI=m
CONFIG_DUMMY_IRQ=m
CONFIG_IBM_ASM=m
CONFIG_PHANTOM=m
CONFIG_TIFM_CORE=m
CONFIG_TIFM_7XX1=m
CONFIG_ICS932S401=m
CONFIG_ENCLOSURE_SERVICES=m
CONFIG_SGI_XP=m
CONFIG_SMPRO_ERRMON=m
CONFIG_SMPRO_MISC=m
CONFIG_HP_ILO=m
CONFIG_SGI_GRU=m
# CONFIG_SGI_GRU_DEBUG is not set
CONFIG_APDS9802ALS=m
CONFIG_ISL29003=m
CONFIG_ISL29020=m
CONFIG_SENSORS_TSL2550=m
CONFIG_SENSORS_BH1770=m
CONFIG_SENSORS_APDS990X=m
CONFIG_HMC6352=m
CONFIG_DS1682=m
CONFIG_VMWARE_BALLOON=m
CONFIG_LATTICE_ECP3_CONFIG=m
CONFIG_SRAM=y
CONFIG_DW_XDATA_PCIE=m
# CONFIG_PCI_ENDPOINT_TEST is not set
CONFIG_XILINX_SDFEC=m
CONFIG_MISC_RTSX=m
CONFIG_C2PORT=m
CONFIG_C2PORT_DURAMAR_2150=m

#
# EEPROM support
#
CONFIG_EEPROM_AT24=m
CONFIG_EEPROM_AT25=m
CONFIG_EEPROM_LEGACY=m
CONFIG_EEPROM_MAX6875=m
CONFIG_EEPROM_93CX6=m
CONFIG_EEPROM_93XX46=m
CONFIG_EEPROM_IDT_89HPESX=m
CONFIG_EEPROM_EE1004=m
# end of EEPROM support

CONFIG_CB710_CORE=m
# CONFIG_CB710_DEBUG is not set
CONFIG_CB710_DEBUG_ASSUMPTIONS=y

#
# Texas Instruments shared transport line discipline
#
CONFIG_TI_ST=m
# end of Texas Instruments shared transport line discipline

CONFIG_SENSORS_LIS3_I2C=m
CONFIG_ALTERA_STAPL=m
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
CONFIG_INTEL_MEI_TXE=m
CONFIG_INTEL_MEI_GSC=m
CONFIG_INTEL_MEI_HDCP=m
CONFIG_INTEL_MEI_PXP=m
# CONFIG_INTEL_MEI_GSC_PROXY is not set
CONFIG_VMWARE_VMCI=m
CONFIG_GENWQE=m
CONFIG_GENWQE_PLATFORM_ERROR_RECOVERY=0
CONFIG_ECHO=m
CONFIG_BCM_VK=m
CONFIG_BCM_VK_TTY=y
CONFIG_MISC_ALCOR_PCI=m
CONFIG_MISC_RTSX_PCI=m
CONFIG_MISC_RTSX_USB=m
CONFIG_UACCE=m
CONFIG_PVPANIC=y
CONFIG_PVPANIC_MMIO=m
CONFIG_PVPANIC_PCI=m
CONFIG_GP_PCI1XXXX=m
# end of Misc devices

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m
CONFIG_SCSI_COMMON=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=y
CONFIG_BLK_DEV_BSG=y
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SRP_ATTRS=m
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
CONFIG_SCSI_CXGB3_ISCSI=m
CONFIG_SCSI_CXGB4_ISCSI=m
CONFIG_SCSI_BNX2_ISCSI=m
CONFIG_SCSI_BNX2X_FCOE=m
CONFIG_BE2ISCSI=m
CONFIG_BLK_DEV_3W_XXXX_RAID=m
CONFIG_SCSI_HPSA=m
CONFIG_SCSI_3W_9XXX=m
CONFIG_SCSI_3W_SAS=m
CONFIG_SCSI_ACARD=m
CONFIG_SCSI_AHA1740=m
CONFIG_SCSI_AACRAID=m
CONFIG_SCSI_AIC7XXX=m
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
CONFIG_AIC7XXX_RESET_DELAY_MS=5000
# CONFIG_AIC7XXX_DEBUG_ENABLE is not set
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=5000
# CONFIG_AIC79XX_DEBUG_ENABLE is not set
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=m
# CONFIG_AIC94XX_DEBUG is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
# CONFIG_SCSI_MVSAS_TASKLET is not set
CONFIG_SCSI_MVUMI=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
CONFIG_SCSI_ESAS2R=m
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPI3MR=m
CONFIG_SCSI_SMARTPQI=m
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
CONFIG_SCSI_FLASHPOINT=y
CONFIG_SCSI_MYRB=m
CONFIG_SCSI_MYRS=m
CONFIG_VMWARE_PVSCSI=m
CONFIG_XEN_SCSI_FRONTEND=m
CONFIG_HYPERV_STORAGE=m
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
CONFIG_FCOE_FNIC=m
CONFIG_SCSI_SNIC=m
# CONFIG_SCSI_SNIC_DEBUG_FS is not set
CONFIG_SCSI_DMX3191D=m
CONFIG_SCSI_FDOMAIN=m
CONFIG_SCSI_FDOMAIN_PCI=m
CONFIG_SCSI_ISCI=m
CONFIG_SCSI_IPS=m
CONFIG_SCSI_INITIO=m
CONFIG_SCSI_INIA100=m
CONFIG_SCSI_PPA=m
CONFIG_SCSI_IMM=m
# CONFIG_SCSI_IZIP_EPP16 is not set
# CONFIG_SCSI_IZIP_SLOW_CTR is not set
CONFIG_SCSI_STEX=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=m
CONFIG_SCSI_IPR_TRACE=y
CONFIG_SCSI_IPR_DUMP=y
CONFIG_SCSI_QLOGIC_1280=m
CONFIG_SCSI_QLA_FC=m
CONFIG_TCM_QLA2XXX=m
# CONFIG_TCM_QLA2XXX_DEBUG is not set
CONFIG_SCSI_QLA_ISCSI=m
CONFIG_QEDI=m
CONFIG_QEDF=m
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
CONFIG_SCSI_EFCT=m
CONFIG_SCSI_SIM710=m
CONFIG_SCSI_DC395x=m
CONFIG_SCSI_AM53C974=m
CONFIG_SCSI_WD719X=m
CONFIG_SCSI_DEBUG=m
CONFIG_SCSI_PMCRAID=m
CONFIG_SCSI_PM8001=m
CONFIG_SCSI_BFA_FC=m
CONFIG_SCSI_VIRTIO=y
CONFIG_SCSI_CHELSIO_FCOE=m
CONFIG_SCSI_LOWLEVEL_PCMCIA=y
CONFIG_PCMCIA_AHA152X=m
CONFIG_PCMCIA_FDOMAIN=m
CONFIG_PCMCIA_QLOGIC=m
CONFIG_PCMCIA_SYM53C500=m
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=m
CONFIG_SCSI_DH_HP_SW=m
CONFIG_SCSI_DH_EMC=m
CONFIG_SCSI_DH_ALUA=m
# end of SCSI device support

CONFIG_ATA=y
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
CONFIG_SATA_ZPODD=y
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=3
CONFIG_SATA_AHCI_PLATFORM=m
CONFIG_AHCI_DWC=m
CONFIG_SATA_INIC162X=m
CONFIG_SATA_ACARD_AHCI=m
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=y
CONFIG_SATA_DWC=m
CONFIG_SATA_DWC_OLD_DMA=y
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
CONFIG_PATA_ALI=m
CONFIG_PATA_AMD=m
CONFIG_PATA_ARTOP=m
CONFIG_PATA_ATIIXP=m
CONFIG_PATA_ATP867X=m
CONFIG_PATA_CMD64X=m
CONFIG_PATA_CYPRESS=m
CONFIG_PATA_EFAR=m
CONFIG_PATA_HPT366=m
CONFIG_PATA_HPT37X=m
CONFIG_PATA_HPT3X2N=m
CONFIG_PATA_HPT3X3=m
# CONFIG_PATA_HPT3X3_DMA is not set
CONFIG_PATA_IT8213=m
CONFIG_PATA_IT821X=m
CONFIG_PATA_JMICRON=m
CONFIG_PATA_MARVELL=m
CONFIG_PATA_NETCELL=m
CONFIG_PATA_NINJA32=m
CONFIG_PATA_NS87415=m
CONFIG_PATA_OLDPIIX=m
CONFIG_PATA_OPTIDMA=m
CONFIG_PATA_PDC2027X=m
CONFIG_PATA_PDC_OLD=m
CONFIG_PATA_RADISYS=m
CONFIG_PATA_RDC=m
CONFIG_PATA_SCH=m
CONFIG_PATA_SERVERWORKS=m
CONFIG_PATA_SIL680=m
CONFIG_PATA_SIS=y
CONFIG_PATA_TOSHIBA=m
CONFIG_PATA_TRIFLEX=m
CONFIG_PATA_VIA=m
CONFIG_PATA_WINBOND=m

#
# PIO-only SFF controllers
#
CONFIG_PATA_CMD640_PCI=m
CONFIG_PATA_MPIIX=m
CONFIG_PATA_NS87410=m
CONFIG_PATA_OPTI=m
CONFIG_PATA_PCMCIA=m
CONFIG_PATA_RZ1000=m
# CONFIG_PATA_PARPORT is not set

#
# Generic fallback / legacy drivers
#
CONFIG_PATA_ACPI=m
CONFIG_ATA_GENERIC=y
CONFIG_PATA_LEGACY=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_BITMAP_FILE=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_MD_CLUSTER=m
CONFIG_BCACHE=m
# CONFIG_BCACHE_DEBUG is not set
# CONFIG_BCACHE_CLOSURES_DEBUG is not set
CONFIG_BCACHE_ASYNC_REGISTRATION=y
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=y
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=m
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
CONFIG_DM_UNSTRIPED=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_SMQ=m
CONFIG_DM_WRITECACHE=m
CONFIG_DM_EBS=m
CONFIG_DM_ERA=m
CONFIG_DM_CLONE=m
CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
CONFIG_DM_MULTIPATH_HST=m
CONFIG_DM_MULTIPATH_IOA=m
CONFIG_DM_DELAY=m
# CONFIG_DM_DUST is not set
CONFIG_DM_INIT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_SECONDARY_KEYRING=y
# CONFIG_DM_VERITY_FEC is not set
CONFIG_DM_SWITCH=m
CONFIG_DM_LOG_WRITES=m
CONFIG_DM_INTEGRITY=m
CONFIG_DM_ZONED=m
CONFIG_DM_AUDIT=y
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
CONFIG_ISCSI_TARGET_CXGB4=m
CONFIG_SBP_TARGET=m
# CONFIG_REMOTE_TARGET is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=128
CONFIG_FUSION_CTL=m
CONFIG_FUSION_LAN=m
CONFIG_FUSION_LOGGING=y

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
CONFIG_FIREWIRE_NOSY=m
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=m
CONFIG_NETDEVICES=y
CONFIG_MII=m
CONFIG_NET_CORE=y
CONFIG_BONDING=m
CONFIG_DUMMY=m
CONFIG_WIREGUARD=m
# CONFIG_WIREGUARD_DEBUG is not set
CONFIG_EQUALIZER=m
CONFIG_NET_FC=y
CONFIG_IFB=m
CONFIG_NET_TEAM=m
CONFIG_NET_TEAM_MODE_BROADCAST=m
CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
CONFIG_NET_TEAM_MODE_RANDOM=m
CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
CONFIG_NET_TEAM_MODE_LOADBALANCE=m
CONFIG_MACVLAN=m
CONFIG_MACVTAP=m
CONFIG_IPVLAN_L3S=y
CONFIG_IPVLAN=m
CONFIG_IPVTAP=m
CONFIG_VXLAN=m
CONFIG_GENEVE=m
CONFIG_BAREUDP=m
CONFIG_GTP=m
CONFIG_AMT=m
CONFIG_MACSEC=m
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
# CONFIG_NETCONSOLE_EXTENDED_LOG is not set
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_NTB_NETDEV=m
CONFIG_RIONET=m
CONFIG_RIONET_TX_SIZE=128
CONFIG_RIONET_RX_SIZE=128
CONFIG_TUN=y
CONFIG_TAP=m
# CONFIG_TUN_VNET_CROSS_LE is not set
CONFIG_VETH=m
CONFIG_VIRTIO_NET=m
CONFIG_NLMON=m
CONFIG_NET_VRF=m
CONFIG_VSOCKMON=m
CONFIG_MHI_NET=m
CONFIG_SUNGEM_PHY=m
CONFIG_ARCNET=m
CONFIG_ARCNET_1201=m
CONFIG_ARCNET_1051=m
CONFIG_ARCNET_RAW=m
CONFIG_ARCNET_CAP=m
CONFIG_ARCNET_COM90xx=m
CONFIG_ARCNET_COM90xxIO=m
CONFIG_ARCNET_RIM_I=m
CONFIG_ARCNET_COM20020=m
CONFIG_ARCNET_COM20020_PCI=m
CONFIG_ARCNET_COM20020_CS=m
CONFIG_ATM_DRIVERS=y
CONFIG_ATM_DUMMY=m
CONFIG_ATM_TCP=m
CONFIG_ATM_LANAI=m
CONFIG_ATM_ENI=m
# CONFIG_ATM_ENI_DEBUG is not set
# CONFIG_ATM_ENI_TUNE_BURST is not set
CONFIG_ATM_NICSTAR=m
# CONFIG_ATM_NICSTAR_USE_SUNI is not set
# CONFIG_ATM_NICSTAR_USE_IDT77105 is not set
CONFIG_ATM_IDT77252=m
# CONFIG_ATM_IDT77252_DEBUG is not set
# CONFIG_ATM_IDT77252_RCV_ALL is not set
CONFIG_ATM_IDT77252_USE_SUNI=y
CONFIG_ATM_IA=m
# CONFIG_ATM_IA_DEBUG is not set
CONFIG_ATM_FORE200E=m
# CONFIG_ATM_FORE200E_USE_TASKLET is not set
CONFIG_ATM_FORE200E_TX_RETRY=16
CONFIG_ATM_FORE200E_DEBUG=0
CONFIG_ATM_HE=m
CONFIG_ATM_HE_USE_SUNI=y
CONFIG_ATM_SOLOS=m
CONFIG_CAIF_DRIVERS=y
CONFIG_CAIF_TTY=m
CONFIG_CAIF_VIRTIO=m

#
# Distributed Switch Architecture drivers
#
CONFIG_B53=m
CONFIG_B53_SPI_DRIVER=m
CONFIG_B53_MDIO_DRIVER=m
CONFIG_B53_MMAP_DRIVER=m
CONFIG_B53_SRAB_DRIVER=m
CONFIG_B53_SERDES=m
CONFIG_NET_DSA_BCM_SF2=m
# CONFIG_NET_DSA_LOOP is not set
CONFIG_NET_DSA_HIRSCHMANN_HELLCREEK=m
CONFIG_NET_DSA_LANTIQ_GSWIP=m
CONFIG_NET_DSA_MT7530=m
CONFIG_NET_DSA_MT7530_MDIO=m
CONFIG_NET_DSA_MT7530_MMIO=m
CONFIG_NET_DSA_MV88E6060=m
CONFIG_NET_DSA_MICROCHIP_KSZ_COMMON=m
CONFIG_NET_DSA_MICROCHIP_KSZ9477_I2C=m
CONFIG_NET_DSA_MICROCHIP_KSZ_SPI=m
# CONFIG_NET_DSA_MICROCHIP_KSZ_PTP is not set
CONFIG_NET_DSA_MICROCHIP_KSZ8863_SMI=m
CONFIG_NET_DSA_MV88E6XXX=m
CONFIG_NET_DSA_MV88E6XXX_PTP=y
CONFIG_NET_DSA_MSCC_FELIX_DSA_LIB=m
# CONFIG_NET_DSA_MSCC_OCELOT_EXT is not set
CONFIG_NET_DSA_MSCC_SEVILLE=m
CONFIG_NET_DSA_AR9331=m
CONFIG_NET_DSA_QCA8K=m
# CONFIG_NET_DSA_QCA8K_LEDS_SUPPORT is not set
CONFIG_NET_DSA_SJA1105=m
CONFIG_NET_DSA_SJA1105_PTP=y
CONFIG_NET_DSA_SJA1105_TAS=y
CONFIG_NET_DSA_SJA1105_VL=y
CONFIG_NET_DSA_XRS700X=m
CONFIG_NET_DSA_XRS700X_I2C=m
CONFIG_NET_DSA_XRS700X_MDIO=m
CONFIG_NET_DSA_REALTEK=m
# CONFIG_NET_DSA_REALTEK_MDIO is not set
# CONFIG_NET_DSA_REALTEK_SMI is not set
CONFIG_NET_DSA_REALTEK_RTL8365MB=m
CONFIG_NET_DSA_REALTEK_RTL8366RB=m
CONFIG_NET_DSA_SMSC_LAN9303=m
CONFIG_NET_DSA_SMSC_LAN9303_I2C=m
CONFIG_NET_DSA_SMSC_LAN9303_MDIO=m
CONFIG_NET_DSA_VITESSE_VSC73XX=m
CONFIG_NET_DSA_VITESSE_VSC73XX_SPI=m
CONFIG_NET_DSA_VITESSE_VSC73XX_PLATFORM=m
# end of Distributed Switch Architecture drivers

CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_VENDOR_3COM=y
CONFIG_EL3=m
CONFIG_PCMCIA_3C574=m
CONFIG_PCMCIA_3C589=m
CONFIG_VORTEX=m
CONFIG_TYPHOON=m
CONFIG_NET_VENDOR_ADAPTEC=y
CONFIG_ADAPTEC_STARFIRE=m
CONFIG_NET_VENDOR_AGERE=y
CONFIG_ET131X=m
CONFIG_NET_VENDOR_ALACRITECH=y
CONFIG_SLICOSS=m
CONFIG_NET_VENDOR_ALTEON=y
CONFIG_ACENIC=m
# CONFIG_ACENIC_OMIT_TIGON_I is not set
CONFIG_ALTERA_TSE=m
CONFIG_NET_VENDOR_AMAZON=y
CONFIG_ENA_ETHERNET=m
CONFIG_NET_VENDOR_AMD=y
CONFIG_AMD8111_ETH=m
CONFIG_PCNET32=m
CONFIG_PCMCIA_NMCLAN=m
CONFIG_AMD_XGBE=m
CONFIG_AMD_XGBE_DCB=y
CONFIG_AMD_XGBE_HAVE_ECC=y
# CONFIG_PDS_CORE is not set
CONFIG_NET_VENDOR_AQUANTIA=y
CONFIG_AQTION=m
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ASIX=y
CONFIG_SPI_AX88796C=m
# CONFIG_SPI_AX88796C_COMPRESSION is not set
CONFIG_NET_VENDOR_ATHEROS=y
CONFIG_ATL2=m
CONFIG_ATL1=m
CONFIG_ATL1E=m
CONFIG_ATL1C=m
CONFIG_ALX=m
CONFIG_CX_ECAT=m
CONFIG_NET_VENDOR_BROADCOM=y
CONFIG_B44=m
CONFIG_B44_PCI_AUTOSELECT=y
CONFIG_B44_PCICORE_AUTOSELECT=y
CONFIG_B44_PCI=y
CONFIG_BCMGENET=m
CONFIG_BNX2=m
CONFIG_CNIC=m
CONFIG_TIGON3=m
CONFIG_TIGON3_HWMON=y
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
CONFIG_SYSTEMPORT=m
CONFIG_BNXT=m
CONFIG_BNXT_SRIOV=y
CONFIG_BNXT_FLOWER_OFFLOAD=y
CONFIG_BNXT_DCB=y
CONFIG_BNXT_HWMON=y
CONFIG_NET_VENDOR_CADENCE=y
CONFIG_MACB=m
CONFIG_MACB_USE_HWSTAMP=y
CONFIG_MACB_PCI=m
CONFIG_NET_VENDOR_CAVIUM=y
CONFIG_THUNDER_NIC_PF=m
CONFIG_THUNDER_NIC_VF=m
CONFIG_THUNDER_NIC_BGX=m
CONFIG_THUNDER_NIC_RGX=m
CONFIG_CAVIUM_PTP=m
CONFIG_LIQUIDIO_CORE=m
CONFIG_LIQUIDIO=m
CONFIG_LIQUIDIO_VF=m
CONFIG_NET_VENDOR_CHELSIO=y
CONFIG_CHELSIO_T1=m
CONFIG_CHELSIO_T1_1G=y
CONFIG_CHELSIO_T3=m
CONFIG_CHELSIO_T4=m
CONFIG_CHELSIO_T4_DCB=y
CONFIG_CHELSIO_T4_FCOE=y
CONFIG_CHELSIO_T4VF=m
CONFIG_CHELSIO_LIB=m
CONFIG_CHELSIO_INLINE_CRYPTO=y
CONFIG_CHELSIO_IPSEC_INLINE=m
CONFIG_CHELSIO_TLS_DEVICE=m
CONFIG_NET_VENDOR_CIRRUS=y
CONFIG_NET_VENDOR_CISCO=y
CONFIG_ENIC=m
CONFIG_NET_VENDOR_CORTINA=y
CONFIG_NET_VENDOR_DAVICOM=y
CONFIG_DM9051=m
CONFIG_DNET=m
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
CONFIG_DE2104X=m
CONFIG_DE2104X_DSL=0
CONFIG_TULIP=m
# CONFIG_TULIP_MWI is not set
# CONFIG_TULIP_MMIO is not set
# CONFIG_TULIP_NAPI is not set
CONFIG_WINBOND_840=m
CONFIG_DM9102=m
CONFIG_ULI526X=m
CONFIG_PCMCIA_XIRCOM=m
CONFIG_NET_VENDOR_DLINK=y
CONFIG_DL2K=m
CONFIG_SUNDANCE=m
# CONFIG_SUNDANCE_MMIO is not set
CONFIG_NET_VENDOR_EMULEX=y
CONFIG_BE2NET=m
CONFIG_BE2NET_HWMON=y
CONFIG_BE2NET_BE2=y
CONFIG_BE2NET_BE3=y
CONFIG_BE2NET_LANCER=y
CONFIG_BE2NET_SKYHAWK=y
CONFIG_NET_VENDOR_ENGLEDER=y
CONFIG_TSNEP=m
# CONFIG_TSNEP_SELFTESTS is not set
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_FUJITSU=y
CONFIG_PCMCIA_FMVJ18X=m
CONFIG_NET_VENDOR_FUNGIBLE=y
CONFIG_FUN_CORE=m
CONFIG_FUN_ETH=m
CONFIG_NET_VENDOR_GOOGLE=y
CONFIG_GVE=m
CONFIG_NET_VENDOR_HUAWEI=y
CONFIG_HINIC=m
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_E1000E_HWTS=y
CONFIG_IGB=m
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
CONFIG_IGBVF=m
CONFIG_IXGBE=m
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBE_IPSEC=y
CONFIG_IXGBEVF=m
CONFIG_IXGBEVF_IPSEC=y
CONFIG_I40E=m
CONFIG_I40E_DCB=y
CONFIG_IAVF=m
CONFIG_I40EVF=m
CONFIG_ICE=m
CONFIG_ICE_SWITCHDEV=y
CONFIG_ICE_HWTS=y
CONFIG_FM10K=m
CONFIG_IGC=m
CONFIG_JME=m
CONFIG_NET_VENDOR_ADI=y
CONFIG_ADIN1110=m
CONFIG_NET_VENDOR_LITEX=y
CONFIG_NET_VENDOR_MARVELL=y
CONFIG_MVMDIO=m
CONFIG_SKGE=m
# CONFIG_SKGE_DEBUG is not set
CONFIG_SKGE_GENESIS=y
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_OCTEON_EP=m
CONFIG_PRESTERA=m
CONFIG_PRESTERA_PCI=m
CONFIG_NET_VENDOR_MELLANOX=y
CONFIG_MLX4_EN=m
CONFIG_MLX4_EN_DCB=y
CONFIG_MLX4_CORE=m
CONFIG_MLX4_DEBUG=y
CONFIG_MLX4_CORE_GEN2=y
CONFIG_MLX5_CORE=m
CONFIG_MLX5_FPGA=y
CONFIG_MLX5_CORE_EN=y
CONFIG_MLX5_EN_ARFS=y
CONFIG_MLX5_EN_RXNFC=y
CONFIG_MLX5_MPFS=y
CONFIG_MLX5_ESWITCH=y
CONFIG_MLX5_BRIDGE=y
CONFIG_MLX5_CLS_ACT=y
CONFIG_MLX5_TC_CT=y
CONFIG_MLX5_TC_SAMPLE=y
CONFIG_MLX5_CORE_EN_DCB=y
CONFIG_MLX5_CORE_IPOIB=y
# CONFIG_MLX5_MACSEC is not set
CONFIG_MLX5_EN_IPSEC=y
CONFIG_MLX5_EN_TLS=y
CONFIG_MLX5_SW_STEERING=y
CONFIG_MLX5_SF=y
CONFIG_MLX5_SF_MANAGER=y
CONFIG_MLXSW_CORE=m
CONFIG_MLXSW_CORE_HWMON=y
CONFIG_MLXSW_CORE_THERMAL=y
CONFIG_MLXSW_PCI=m
CONFIG_MLXSW_I2C=m
CONFIG_MLXSW_SPECTRUM=m
CONFIG_MLXSW_SPECTRUM_DCB=y
CONFIG_MLXSW_MINIMAL=m
CONFIG_MLXFW=m
CONFIG_NET_VENDOR_MICREL=y
CONFIG_KS8842=m
CONFIG_KS8851=m
CONFIG_KS8851_MLL=m
CONFIG_KSZ884X_PCI=m
CONFIG_NET_VENDOR_MICROCHIP=y
CONFIG_ENC28J60=m
# CONFIG_ENC28J60_WRITEVERIFY is not set
CONFIG_ENCX24J600=m
CONFIG_LAN743X=m
CONFIG_VCAP=y
CONFIG_NET_VENDOR_MICROSEMI=y
CONFIG_MSCC_OCELOT_SWITCH_LIB=m
CONFIG_NET_VENDOR_MICROSOFT=y
CONFIG_MICROSOFT_MANA=m
CONFIG_NET_VENDOR_MYRI=y
CONFIG_MYRI10GE=m
CONFIG_MYRI10GE_DCA=y
CONFIG_FEALNX=m
CONFIG_NET_VENDOR_NI=y
CONFIG_NI_XGE_MANAGEMENT_ENET=m
CONFIG_NET_VENDOR_NATSEMI=y
CONFIG_NATSEMI=m
CONFIG_NS83820=m
CONFIG_NET_VENDOR_NETERION=y
CONFIG_S2IO=m
CONFIG_NET_VENDOR_NETRONOME=y
CONFIG_NFP=m
CONFIG_NFP_APP_FLOWER=y
CONFIG_NFP_APP_ABM_NIC=y
CONFIG_NFP_NET_IPSEC=y
# CONFIG_NFP_DEBUG is not set
CONFIG_NET_VENDOR_8390=y
CONFIG_PCMCIA_AXNET=m
CONFIG_NE2K_PCI=m
CONFIG_PCMCIA_PCNET=m
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=m
CONFIG_NET_VENDOR_OKI=y
CONFIG_ETHOC=m
CONFIG_NET_VENDOR_PACKET_ENGINES=y
CONFIG_HAMACHI=m
CONFIG_YELLOWFIN=m
CONFIG_NET_VENDOR_PENSANDO=y
CONFIG_IONIC=m
CONFIG_NET_VENDOR_QLOGIC=y
CONFIG_QLA3XXX=m
CONFIG_QLCNIC=m
CONFIG_QLCNIC_SRIOV=y
CONFIG_QLCNIC_DCB=y
CONFIG_QLCNIC_HWMON=y
CONFIG_NETXEN_NIC=m
CONFIG_QED=m
CONFIG_QED_LL2=y
CONFIG_QED_SRIOV=y
CONFIG_QEDE=m
CONFIG_QED_RDMA=y
CONFIG_QED_ISCSI=y
CONFIG_QED_FCOE=y
CONFIG_QED_OOO=y
CONFIG_NET_VENDOR_BROCADE=y
CONFIG_BNA=m
CONFIG_NET_VENDOR_QUALCOMM=y
CONFIG_QCOM_EMAC=m
CONFIG_RMNET=m
CONFIG_NET_VENDOR_RDC=y
CONFIG_R6040=m
CONFIG_NET_VENDOR_REALTEK=y
CONFIG_ATP=m
CONFIG_8139CP=m
CONFIG_8139TOO=m
CONFIG_8139TOO_PIO=y
# CONFIG_8139TOO_TUNE_TWISTER is not set
CONFIG_8139TOO_8129=y
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=m
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
CONFIG_ROCKER=m
CONFIG_NET_VENDOR_SAMSUNG=y
CONFIG_SXGBE_ETH=m
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SILAN=y
CONFIG_SC92031=m
CONFIG_NET_VENDOR_SIS=y
CONFIG_SIS900=m
CONFIG_SIS190=m
CONFIG_NET_VENDOR_SOLARFLARE=y
CONFIG_SFC=m
CONFIG_SFC_MTD=y
CONFIG_SFC_MCDI_MON=y
CONFIG_SFC_SRIOV=y
CONFIG_SFC_MCDI_LOGGING=y
CONFIG_SFC_FALCON=m
CONFIG_SFC_FALCON_MTD=y
CONFIG_SFC_SIENA=m
CONFIG_SFC_SIENA_MTD=y
CONFIG_SFC_SIENA_MCDI_MON=y
CONFIG_SFC_SIENA_SRIOV=y
CONFIG_SFC_SIENA_MCDI_LOGGING=y
CONFIG_NET_VENDOR_SMSC=y
CONFIG_PCMCIA_SMC91C92=m
CONFIG_EPIC100=m
CONFIG_SMSC911X=m
CONFIG_SMSC9420=m
CONFIG_NET_VENDOR_SOCIONEXT=y
CONFIG_NET_VENDOR_STMICRO=y
CONFIG_STMMAC_ETH=m
# CONFIG_STMMAC_SELFTESTS is not set
CONFIG_STMMAC_PLATFORM=m
CONFIG_DWMAC_GENERIC=m
CONFIG_DWMAC_INTEL=m
CONFIG_DWMAC_LOONGSON=m
CONFIG_STMMAC_PCI=m
CONFIG_NET_VENDOR_SUN=y
CONFIG_HAPPYMEAL=m
CONFIG_SUNGEM=m
CONFIG_CASSINI=m
CONFIG_NIU=m
CONFIG_NET_VENDOR_SYNOPSYS=y
CONFIG_DWC_XLGMAC=m
CONFIG_DWC_XLGMAC_PCI=m
CONFIG_NET_VENDOR_TEHUTI=y
CONFIG_TEHUTI=m
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
CONFIG_TLAN=m
CONFIG_NET_VENDOR_VERTEXCOM=y
CONFIG_MSE102X=m
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=m
CONFIG_VIA_RHINE_MMIO=y
CONFIG_VIA_VELOCITY=m
CONFIG_NET_VENDOR_WANGXUN=y
CONFIG_LIBWX=m
CONFIG_NGBE=m
CONFIG_TXGBE=m
CONFIG_NET_VENDOR_WIZNET=y
CONFIG_WIZNET_W5100=m
CONFIG_WIZNET_W5300=m
# CONFIG_WIZNET_BUS_DIRECT is not set
# CONFIG_WIZNET_BUS_INDIRECT is not set
CONFIG_WIZNET_BUS_ANY=y
CONFIG_WIZNET_W5100_SPI=m
CONFIG_NET_VENDOR_XILINX=y
CONFIG_XILINX_EMACLITE=m
CONFIG_XILINX_AXI_EMAC=m
CONFIG_XILINX_LL_TEMAC=m
CONFIG_NET_VENDOR_XIRCOM=y
CONFIG_PCMCIA_XIRC2PS=m
CONFIG_FDDI=y
CONFIG_DEFXX=m
CONFIG_SKFP=m
# CONFIG_HIPPI is not set
CONFIG_NET_SB1000=m
CONFIG_PHYLINK=m
CONFIG_PHYLIB=y
CONFIG_SWPHY=y
CONFIG_LED_TRIGGER_PHY=y
CONFIG_FIXED_PHY=y
CONFIG_SFP=m

#
# MII PHY device drivers
#
CONFIG_AMD_PHY=m
CONFIG_ADIN_PHY=m
CONFIG_ADIN1100_PHY=m
CONFIG_AQUANTIA_PHY=m
CONFIG_AX88796B_PHY=m
CONFIG_BROADCOM_PHY=m
CONFIG_BCM54140_PHY=m
CONFIG_BCM7XXX_PHY=m
CONFIG_BCM84881_PHY=y
CONFIG_BCM87XX_PHY=m
CONFIG_BCM_NET_PHYLIB=m
CONFIG_BCM_NET_PHYPTP=m
CONFIG_CICADA_PHY=m
CONFIG_CORTINA_PHY=m
CONFIG_DAVICOM_PHY=m
CONFIG_ICPLUS_PHY=m
CONFIG_LXT_PHY=m
CONFIG_INTEL_XWAY_PHY=m
CONFIG_LSI_ET1011C_PHY=m
CONFIG_MARVELL_PHY=m
CONFIG_MARVELL_10G_PHY=m
# CONFIG_MARVELL_88Q2XXX_PHY is not set
CONFIG_MARVELL_88X2222_PHY=m
CONFIG_MAXLINEAR_GPHY=m
CONFIG_MEDIATEK_GE_PHY=m
# CONFIG_MEDIATEK_GE_SOC_PHY is not set
CONFIG_MICREL_PHY=m
# CONFIG_MICROCHIP_T1S_PHY is not set
CONFIG_MICROCHIP_PHY=m
CONFIG_MICROCHIP_T1_PHY=m
CONFIG_MICROSEMI_PHY=m
CONFIG_MOTORCOMM_PHY=m
CONFIG_NATIONAL_PHY=m
# CONFIG_NXP_CBTX_PHY is not set
CONFIG_NXP_C45_TJA11XX_PHY=m
CONFIG_NXP_TJA11XX_PHY=m
# CONFIG_NCN26000_PHY is not set
CONFIG_AT803X_PHY=m
CONFIG_QSEMI_PHY=m
CONFIG_REALTEK_PHY=m
CONFIG_RENESAS_PHY=m
CONFIG_ROCKCHIP_PHY=m
CONFIG_SMSC_PHY=m
CONFIG_STE10XP=m
CONFIG_TERANETICS_PHY=m
CONFIG_DP83822_PHY=m
CONFIG_DP83TC811_PHY=m
CONFIG_DP83848_PHY=m
CONFIG_DP83867_PHY=m
CONFIG_DP83869_PHY=m
CONFIG_DP83TD510_PHY=m
CONFIG_VITESSE_PHY=m
CONFIG_XILINX_GMII2RGMII=m
CONFIG_MICREL_KS8995MA=m
CONFIG_PSE_CONTROLLER=y
CONFIG_PSE_REGULATOR=m
CONFIG_CAN_DEV=m
CONFIG_CAN_VCAN=m
CONFIG_CAN_VXCAN=m
CONFIG_CAN_NETLINK=y
CONFIG_CAN_CALC_BITTIMING=y
CONFIG_CAN_RX_OFFLOAD=y
CONFIG_CAN_CAN327=m
CONFIG_CAN_JANZ_ICAN3=m
CONFIG_CAN_KVASER_PCIEFD=m
CONFIG_CAN_SLCAN=m
CONFIG_CAN_C_CAN=m
CONFIG_CAN_C_CAN_PLATFORM=m
CONFIG_CAN_C_CAN_PCI=m
CONFIG_CAN_CC770=m
CONFIG_CAN_CC770_ISA=m
CONFIG_CAN_CC770_PLATFORM=m
CONFIG_CAN_CTUCANFD=m
CONFIG_CAN_CTUCANFD_PCI=m
CONFIG_CAN_IFI_CANFD=m
CONFIG_CAN_M_CAN=m
CONFIG_CAN_M_CAN_PCI=m
CONFIG_CAN_M_CAN_PLATFORM=m
CONFIG_CAN_M_CAN_TCAN4X5X=m
CONFIG_CAN_PEAK_PCIEFD=m
CONFIG_CAN_SJA1000=m
CONFIG_CAN_EMS_PCI=m
CONFIG_CAN_EMS_PCMCIA=m
CONFIG_CAN_F81601=m
CONFIG_CAN_KVASER_PCI=m
CONFIG_CAN_PEAK_PCI=m
CONFIG_CAN_PEAK_PCIEC=y
CONFIG_CAN_PEAK_PCMCIA=m
CONFIG_CAN_PLX_PCI=m
CONFIG_CAN_SJA1000_ISA=m
CONFIG_CAN_SJA1000_PLATFORM=m
CONFIG_CAN_SOFTING=m
CONFIG_CAN_SOFTING_CS=m

#
# CAN SPI interfaces
#
CONFIG_CAN_HI311X=m
CONFIG_CAN_MCP251X=m
CONFIG_CAN_MCP251XFD=m
# CONFIG_CAN_MCP251XFD_SANITY is not set
# end of CAN SPI interfaces

#
# CAN USB interfaces
#
CONFIG_CAN_8DEV_USB=m
CONFIG_CAN_EMS_USB=m
CONFIG_CAN_ESD_USB=m
CONFIG_CAN_ETAS_ES58X=m
# CONFIG_CAN_F81604 is not set
CONFIG_CAN_GS_USB=m
CONFIG_CAN_KVASER_USB=m
CONFIG_CAN_MCBA_USB=m
CONFIG_CAN_PEAK_USB=m
CONFIG_CAN_UCAN=m
# end of CAN USB interfaces

# CONFIG_CAN_DEBUG_DEVICES is not set

#
# MCTP Device Drivers
#
CONFIG_MCTP_SERIAL=m
# end of MCTP Device Drivers

CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_FWNODE_MDIO=y
CONFIG_ACPI_MDIO=y
CONFIG_MDIO_DEVRES=y
CONFIG_MDIO_BITBANG=m
CONFIG_MDIO_BCM_UNIMAC=m
CONFIG_MDIO_CAVIUM=m
CONFIG_MDIO_GPIO=m
CONFIG_MDIO_I2C=m
CONFIG_MDIO_MVUSB=m
CONFIG_MDIO_MSCC_MIIM=m
CONFIG_MDIO_REGMAP=m
CONFIG_MDIO_THUNDER=m

#
# MDIO Multiplexers
#

#
# PCS device drivers
#
CONFIG_PCS_XPCS=m
CONFIG_PCS_LYNX=m
CONFIG_PCS_MTK_LYNXI=m
# end of PCS device drivers

CONFIG_PLIP=m
CONFIG_PPP=y
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
CONFIG_PPP_FILTER=y
CONFIG_PPP_MPPE=m
CONFIG_PPP_MULTILINK=y
CONFIG_PPPOATM=m
CONFIG_PPPOE=m
# CONFIG_PPPOE_HASH_BITS_1 is not set
# CONFIG_PPPOE_HASH_BITS_2 is not set
CONFIG_PPPOE_HASH_BITS_4=y
# CONFIG_PPPOE_HASH_BITS_8 is not set
CONFIG_PPPOE_HASH_BITS=4
CONFIG_PPTP=m
CONFIG_PPPOL2TP=m
CONFIG_PPP_ASYNC=m
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=y
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLIP_SMART=y
CONFIG_SLIP_MODE_SLIP6=y
CONFIG_USB_NET_DRIVERS=m
CONFIG_USB_CATC=m
CONFIG_USB_KAWETH=m
CONFIG_USB_PEGASUS=m
CONFIG_USB_RTL8150=m
CONFIG_USB_RTL8152=m
CONFIG_USB_LAN78XX=m
CONFIG_USB_USBNET=m
CONFIG_USB_NET_AX8817X=m
CONFIG_USB_NET_AX88179_178A=m
CONFIG_USB_NET_CDCETHER=m
CONFIG_USB_NET_CDC_EEM=m
CONFIG_USB_NET_CDC_NCM=m
CONFIG_USB_NET_HUAWEI_CDC_NCM=m
CONFIG_USB_NET_CDC_MBIM=m
CONFIG_USB_NET_DM9601=m
CONFIG_USB_NET_SR9700=m
CONFIG_USB_NET_SR9800=m
CONFIG_USB_NET_SMSC75XX=m
CONFIG_USB_NET_SMSC95XX=m
CONFIG_USB_NET_GL620A=m
CONFIG_USB_NET_NET1080=m
CONFIG_USB_NET_PLUSB=m
CONFIG_USB_NET_MCS7830=m
CONFIG_USB_NET_RNDIS_HOST=m
CONFIG_USB_NET_CDC_SUBSET_ENABLE=m
CONFIG_USB_NET_CDC_SUBSET=m
CONFIG_USB_ALI_M5632=y
CONFIG_USB_AN2720=y
CONFIG_USB_BELKIN=y
CONFIG_USB_ARMLINUX=y
CONFIG_USB_EPSON2888=y
CONFIG_USB_KC2190=y
CONFIG_USB_NET_ZAURUS=m
CONFIG_USB_NET_CX82310_ETH=m
CONFIG_USB_NET_KALMIA=m
CONFIG_USB_NET_QMI_WWAN=m
CONFIG_USB_HSO=m
CONFIG_USB_NET_INT51X1=m
CONFIG_USB_CDC_PHONET=m
CONFIG_USB_IPHETH=m
CONFIG_USB_SIERRA_NET=m
CONFIG_USB_VL600=m
CONFIG_USB_NET_CH9200=m
CONFIG_USB_NET_AQC111=m
CONFIG_USB_RTL8153_ECM=m
CONFIG_WLAN=y
CONFIG_WLAN_VENDOR_ADMTEK=y
CONFIG_ADM8211=m
CONFIG_ATH_COMMON=m
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
CONFIG_ATH5K=m
# CONFIG_ATH5K_DEBUG is not set
# CONFIG_ATH5K_TRACER is not set
CONFIG_ATH5K_PCI=y
CONFIG_ATH9K_HW=m
CONFIG_ATH9K_COMMON=m
CONFIG_ATH9K_COMMON_DEBUG=y
CONFIG_ATH9K_BTCOEX_SUPPORT=y
CONFIG_ATH9K=m
CONFIG_ATH9K_PCI=y
CONFIG_ATH9K_AHB=y
CONFIG_ATH9K_DEBUGFS=y
CONFIG_ATH9K_STATION_STATISTICS=y
# CONFIG_ATH9K_DYNACK is not set
CONFIG_ATH9K_WOW=y
CONFIG_ATH9K_RFKILL=y
CONFIG_ATH9K_CHANNEL_CONTEXT=y
CONFIG_ATH9K_PCOEM=y
CONFIG_ATH9K_PCI_NO_EEPROM=m
CONFIG_ATH9K_HTC=m
CONFIG_ATH9K_HTC_DEBUGFS=y
CONFIG_ATH9K_HWRNG=y
CONFIG_ATH9K_COMMON_SPECTRAL=y
CONFIG_CARL9170=m
CONFIG_CARL9170_LEDS=y
# CONFIG_CARL9170_DEBUGFS is not set
CONFIG_CARL9170_WPC=y
CONFIG_CARL9170_HWRNG=y
CONFIG_ATH6KL=m
CONFIG_ATH6KL_SDIO=m
CONFIG_ATH6KL_USB=m
# CONFIG_ATH6KL_DEBUG is not set
# CONFIG_ATH6KL_TRACING is not set
CONFIG_AR5523=m
CONFIG_WIL6210=m
CONFIG_WIL6210_ISR_COR=y
CONFIG_WIL6210_TRACING=y
CONFIG_WIL6210_DEBUGFS=y
CONFIG_ATH10K=m
CONFIG_ATH10K_CE=y
CONFIG_ATH10K_PCI=m
CONFIG_ATH10K_SDIO=m
CONFIG_ATH10K_USB=m
# CONFIG_ATH10K_DEBUG is not set
CONFIG_ATH10K_DEBUGFS=y
CONFIG_ATH10K_SPECTRAL=y
CONFIG_ATH10K_TRACING=y
CONFIG_WCN36XX=m
# CONFIG_WCN36XX_DEBUGFS is not set
CONFIG_ATH11K=m
CONFIG_ATH11K_AHB=m
CONFIG_ATH11K_PCI=m
# CONFIG_ATH11K_DEBUG is not set
CONFIG_ATH11K_DEBUGFS=y
CONFIG_ATH11K_TRACING=y
CONFIG_ATH11K_SPECTRAL=y
# CONFIG_ATH12K is not set
CONFIG_WLAN_VENDOR_ATMEL=y
CONFIG_ATMEL=m
CONFIG_PCI_ATMEL=m
CONFIG_PCMCIA_ATMEL=m
CONFIG_AT76C50X_USB=m
CONFIG_WLAN_VENDOR_BROADCOM=y
CONFIG_B43=m
CONFIG_B43_BCMA=y
CONFIG_B43_SSB=y
CONFIG_B43_BUSES_BCMA_AND_SSB=y
# CONFIG_B43_BUSES_BCMA is not set
# CONFIG_B43_BUSES_SSB is not set
CONFIG_B43_PCI_AUTOSELECT=y
CONFIG_B43_PCICORE_AUTOSELECT=y
# CONFIG_B43_SDIO is not set
CONFIG_B43_BCMA_PIO=y
CONFIG_B43_PIO=y
CONFIG_B43_PHY_G=y
CONFIG_B43_PHY_N=y
CONFIG_B43_PHY_LP=y
CONFIG_B43_PHY_HT=y
CONFIG_B43_LEDS=y
CONFIG_B43_HWRNG=y
# CONFIG_B43_DEBUG is not set
CONFIG_B43LEGACY=m
CONFIG_B43LEGACY_PCI_AUTOSELECT=y
CONFIG_B43LEGACY_PCICORE_AUTOSELECT=y
CONFIG_B43LEGACY_LEDS=y
CONFIG_B43LEGACY_HWRNG=y
# CONFIG_B43LEGACY_DEBUG is not set
CONFIG_B43LEGACY_DMA=y
CONFIG_B43LEGACY_PIO=y
CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_DMA_MODE is not set
# CONFIG_B43LEGACY_PIO_MODE is not set
CONFIG_BRCMUTIL=m
CONFIG_BRCMSMAC=m
CONFIG_BRCMSMAC_LEDS=y
CONFIG_BRCMFMAC=m
CONFIG_BRCMFMAC_PROTO_BCDC=y
CONFIG_BRCMFMAC_PROTO_MSGBUF=y
CONFIG_BRCMFMAC_SDIO=y
CONFIG_BRCMFMAC_USB=y
CONFIG_BRCMFMAC_PCIE=y
CONFIG_BRCM_TRACING=y
# CONFIG_BRCMDBG is not set
CONFIG_WLAN_VENDOR_CISCO=y
CONFIG_AIRO=m
CONFIG_AIRO_CS=m
CONFIG_WLAN_VENDOR_INTEL=y
CONFIG_IPW2100=m
CONFIG_IPW2100_MONITOR=y
# CONFIG_IPW2100_DEBUG is not set
CONFIG_IPW2200=m
CONFIG_IPW2200_MONITOR=y
CONFIG_IPW2200_RADIOTAP=y
CONFIG_IPW2200_PROMISCUOUS=y
CONFIG_IPW2200_QOS=y
# CONFIG_IPW2200_DEBUG is not set
CONFIG_LIBIPW=m
# CONFIG_LIBIPW_DEBUG is not set
CONFIG_IWLEGACY=m
CONFIG_IWL4965=m
CONFIG_IWL3945=m

#
# iwl3945 / iwl4965 Debugging Options
#
# CONFIG_IWLEGACY_DEBUG is not set
CONFIG_IWLEGACY_DEBUGFS=y
# end of iwl3945 / iwl4965 Debugging Options

CONFIG_IWLWIFI=m
CONFIG_IWLWIFI_LEDS=y
CONFIG_IWLDVM=m
CONFIG_IWLMVM=m
CONFIG_IWLWIFI_OPMODE_MODULAR=y

#
# Debugging Options
#
# CONFIG_IWLWIFI_DEBUG is not set
CONFIG_IWLWIFI_DEBUGFS=y
CONFIG_IWLWIFI_DEVICE_TRACING=y
# end of Debugging Options

CONFIG_WLAN_VENDOR_INTERSIL=y
CONFIG_HOSTAP=m
CONFIG_HOSTAP_FIRMWARE=y
CONFIG_HOSTAP_FIRMWARE_NVRAM=y
CONFIG_HOSTAP_PLX=m
CONFIG_HOSTAP_PCI=m
CONFIG_HOSTAP_CS=m
CONFIG_HERMES=m
# CONFIG_HERMES_PRISM is not set
CONFIG_HERMES_CACHE_FW_ON_INIT=y
CONFIG_PLX_HERMES=m
CONFIG_TMD_HERMES=m
CONFIG_NORTEL_HERMES=m
CONFIG_PCMCIA_HERMES=m
CONFIG_PCMCIA_SPECTRUM=m
CONFIG_ORINOCO_USB=m
CONFIG_P54_COMMON=m
CONFIG_P54_USB=m
CONFIG_P54_PCI=m
CONFIG_P54_SPI=m
# CONFIG_P54_SPI_DEFAULT_EEPROM is not set
CONFIG_P54_LEDS=y
CONFIG_WLAN_VENDOR_MARVELL=y
CONFIG_LIBERTAS=m
CONFIG_LIBERTAS_USB=m
CONFIG_LIBERTAS_CS=m
CONFIG_LIBERTAS_SDIO=m
CONFIG_LIBERTAS_SPI=m
# CONFIG_LIBERTAS_DEBUG is not set
CONFIG_LIBERTAS_MESH=y
CONFIG_LIBERTAS_THINFIRM=m
# CONFIG_LIBERTAS_THINFIRM_DEBUG is not set
CONFIG_LIBERTAS_THINFIRM_USB=m
CONFIG_MWIFIEX=m
CONFIG_MWIFIEX_SDIO=m
CONFIG_MWIFIEX_PCIE=m
CONFIG_MWIFIEX_USB=m
CONFIG_MWL8K=m
CONFIG_WLAN_VENDOR_MEDIATEK=y
CONFIG_MT7601U=m
CONFIG_MT76_CORE=m
CONFIG_MT76_LEDS=y
CONFIG_MT76_USB=m
CONFIG_MT76_SDIO=m
CONFIG_MT76x02_LIB=m
CONFIG_MT76x02_USB=m
CONFIG_MT76_CONNAC_LIB=m
CONFIG_MT792x_LIB=m
CONFIG_MT792x_USB=m
CONFIG_MT76x0_COMMON=m
CONFIG_MT76x0U=m
CONFIG_MT76x0E=m
CONFIG_MT76x2_COMMON=m
CONFIG_MT76x2E=m
CONFIG_MT76x2U=m
CONFIG_MT7603E=m
CONFIG_MT7615_COMMON=m
CONFIG_MT7615E=m
CONFIG_MT7663_USB_SDIO_COMMON=m
CONFIG_MT7663U=m
CONFIG_MT7663S=m
CONFIG_MT7915E=m
CONFIG_MT7921_COMMON=m
CONFIG_MT7921E=m
CONFIG_MT7921S=m
CONFIG_MT7921U=m
CONFIG_MT7996E=m
CONFIG_WLAN_VENDOR_MICROCHIP=y
CONFIG_WILC1000=m
CONFIG_WILC1000_SDIO=m
CONFIG_WILC1000_SPI=m
CONFIG_WILC1000_HW_OOB_INTR=y
CONFIG_WLAN_VENDOR_PURELIFI=y
CONFIG_PLFXLC=m
CONFIG_WLAN_VENDOR_RALINK=y
CONFIG_RT2X00=m
CONFIG_RT2400PCI=m
CONFIG_RT2500PCI=m
CONFIG_RT61PCI=m
CONFIG_RT2800PCI=m
CONFIG_RT2800PCI_RT33XX=y
CONFIG_RT2800PCI_RT35XX=y
CONFIG_RT2800PCI_RT53XX=y
CONFIG_RT2800PCI_RT3290=y
CONFIG_RT2500USB=m
CONFIG_RT73USB=m
CONFIG_RT2800USB=m
CONFIG_RT2800USB_RT33XX=y
CONFIG_RT2800USB_RT35XX=y
CONFIG_RT2800USB_RT3573=y
CONFIG_RT2800USB_RT53XX=y
CONFIG_RT2800USB_RT55XX=y
CONFIG_RT2800USB_UNKNOWN=y
CONFIG_RT2800_LIB=m
CONFIG_RT2800_LIB_MMIO=m
CONFIG_RT2X00_LIB_MMIO=m
CONFIG_RT2X00_LIB_PCI=m
CONFIG_RT2X00_LIB_USB=m
CONFIG_RT2X00_LIB=m
CONFIG_RT2X00_LIB_FIRMWARE=y
CONFIG_RT2X00_LIB_CRYPTO=y
CONFIG_RT2X00_LIB_LEDS=y
# CONFIG_RT2X00_LIB_DEBUGFS is not set
# CONFIG_RT2X00_DEBUG is not set
CONFIG_WLAN_VENDOR_REALTEK=y
CONFIG_RTL8180=m
CONFIG_RTL8187=m
CONFIG_RTL8187_LEDS=y
CONFIG_RTL_CARDS=m
CONFIG_RTL8192CE=m
CONFIG_RTL8192SE=m
CONFIG_RTL8192DE=m
CONFIG_RTL8723AE=m
CONFIG_RTL8723BE=m
CONFIG_RTL8188EE=m
CONFIG_RTL8192EE=m
CONFIG_RTL8821AE=m
CONFIG_RTL8192CU=m
CONFIG_RTLWIFI=m
CONFIG_RTLWIFI_PCI=m
CONFIG_RTLWIFI_USB=m
# CONFIG_RTLWIFI_DEBUG is not set
CONFIG_RTL8192C_COMMON=m
CONFIG_RTL8723_COMMON=m
CONFIG_RTLBTCOEXIST=m
CONFIG_RTL8XXXU=m
CONFIG_RTL8XXXU_UNTESTED=y
CONFIG_RTW88=m
CONFIG_RTW88_CORE=m
CONFIG_RTW88_PCI=m
CONFIG_RTW88_USB=m
CONFIG_RTW88_8822B=m
CONFIG_RTW88_8822C=m
CONFIG_RTW88_8723D=m
CONFIG_RTW88_8821C=m
CONFIG_RTW88_8822BE=m
# CONFIG_RTW88_8822BS is not set
CONFIG_RTW88_8822BU=m
CONFIG_RTW88_8822CE=m
# CONFIG_RTW88_8822CS is not set
CONFIG_RTW88_8822CU=m
CONFIG_RTW88_8723DE=m
# CONFIG_RTW88_8723DS is not set
CONFIG_RTW88_8723DU=m
CONFIG_RTW88_8821CE=m
# CONFIG_RTW88_8821CS is not set
CONFIG_RTW88_8821CU=m
CONFIG_RTW88_DEBUG=y
CONFIG_RTW88_DEBUGFS=y
CONFIG_RTW89=m
CONFIG_RTW89_CORE=m
CONFIG_RTW89_PCI=m
CONFIG_RTW89_8852A=m
CONFIG_RTW89_8852B=m
CONFIG_RTW89_8852C=m
# CONFIG_RTW89_8851BE is not set
CONFIG_RTW89_8852AE=m
CONFIG_RTW89_8852BE=m
CONFIG_RTW89_8852CE=m
CONFIG_RTW89_DEBUG=y
CONFIG_RTW89_DEBUGMSG=y
CONFIG_RTW89_DEBUGFS=y
CONFIG_WLAN_VENDOR_RSI=y
CONFIG_RSI_91X=m
# CONFIG_RSI_DEBUGFS is not set
CONFIG_RSI_SDIO=m
CONFIG_RSI_USB=m
CONFIG_RSI_COEX=y
CONFIG_WLAN_VENDOR_SILABS=y
CONFIG_WFX=m
CONFIG_WLAN_VENDOR_ST=y
CONFIG_CW1200=m
CONFIG_CW1200_WLAN_SDIO=m
CONFIG_CW1200_WLAN_SPI=m
CONFIG_WLAN_VENDOR_TI=y
CONFIG_WL1251=m
CONFIG_WL1251_SPI=m
CONFIG_WL1251_SDIO=m
CONFIG_WL12XX=m
CONFIG_WL18XX=m
CONFIG_WLCORE=m
CONFIG_WLCORE_SDIO=m
CONFIG_WLAN_VENDOR_ZYDAS=y
CONFIG_USB_ZD1201=m
CONFIG_ZD1211RW=m
# CONFIG_ZD1211RW_DEBUG is not set
CONFIG_WLAN_VENDOR_QUANTENNA=y
CONFIG_QTNFMAC=m
CONFIG_QTNFMAC_PCIE=m
CONFIG_PCMCIA_RAYCS=m
CONFIG_PCMCIA_WL3501=m
CONFIG_USB_NET_RNDIS_WLAN=m
CONFIG_MAC80211_HWSIM=m
CONFIG_VIRT_WIFI=m
CONFIG_WAN=y
CONFIG_HDLC=m
CONFIG_HDLC_RAW=m
CONFIG_HDLC_RAW_ETH=m
CONFIG_HDLC_CISCO=m
CONFIG_HDLC_FR=m
CONFIG_HDLC_PPP=m
CONFIG_HDLC_X25=m
CONFIG_PCI200SYN=m
CONFIG_WANXL=m
CONFIG_PC300TOO=m
CONFIG_FARSYNC=m
CONFIG_LAPBETHER=m
CONFIG_IEEE802154_DRIVERS=m
CONFIG_IEEE802154_FAKELB=m
CONFIG_IEEE802154_AT86RF230=m
CONFIG_IEEE802154_MRF24J40=m
CONFIG_IEEE802154_CC2520=m
CONFIG_IEEE802154_ATUSB=m
CONFIG_IEEE802154_ADF7242=m
CONFIG_IEEE802154_CA8210=m
CONFIG_IEEE802154_CA8210_DEBUGFS=y
CONFIG_IEEE802154_MCR20A=m
CONFIG_IEEE802154_HWSIM=m

#
# Wireless WAN
#
CONFIG_WWAN=y
CONFIG_WWAN_DEBUGFS=y
CONFIG_WWAN_HWSIM=m
CONFIG_MHI_WWAN_CTRL=m
CONFIG_MHI_WWAN_MBIM=m
CONFIG_RPMSG_WWAN_CTRL=m
CONFIG_IOSM=m
CONFIG_MTK_T7XX=m
# end of Wireless WAN

CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_XEN_NETDEV_BACKEND=m
CONFIG_VMXNET3=m
CONFIG_FUJITSU_ES=m
CONFIG_USB4_NET=m
CONFIG_HYPERV_NET=m
CONFIG_NETDEVSIM=m
CONFIG_NET_FAILOVER=m
CONFIG_ISDN=y
CONFIG_ISDN_CAPI=y
CONFIG_CAPI_TRACE=y
CONFIG_ISDN_CAPI_MIDDLEWARE=y
CONFIG_MISDN=m
CONFIG_MISDN_DSP=m
CONFIG_MISDN_L1OIP=m

#
# mISDN hardware drivers
#
CONFIG_MISDN_HFCPCI=m
CONFIG_MISDN_HFCMULTI=m
CONFIG_MISDN_HFCUSB=m
CONFIG_MISDN_AVMFRITZ=m
CONFIG_MISDN_SPEEDFAX=m
CONFIG_MISDN_INFINEON=m
CONFIG_MISDN_W6692=m
CONFIG_MISDN_NETJET=m
CONFIG_MISDN_HDLC=m
CONFIG_MISDN_IPAC=m
CONFIG_MISDN_ISAR=m

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=m
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_SPARSEKMAP=m
CONFIG_INPUT_MATRIXKMAP=m
CONFIG_INPUT_VIVALDIFMAP=y

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_EVBUG=m

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ADC=m
CONFIG_KEYBOARD_ADP5520=m
CONFIG_KEYBOARD_ADP5588=m
CONFIG_KEYBOARD_ADP5589=m
CONFIG_KEYBOARD_APPLESPI=m
CONFIG_KEYBOARD_ATKBD=y
CONFIG_KEYBOARD_QT1050=m
CONFIG_KEYBOARD_QT1070=m
CONFIG_KEYBOARD_QT2160=m
CONFIG_KEYBOARD_DLINK_DIR685=m
CONFIG_KEYBOARD_LKKBD=m
CONFIG_KEYBOARD_GPIO=m
CONFIG_KEYBOARD_GPIO_POLLED=m
CONFIG_KEYBOARD_TCA6416=m
CONFIG_KEYBOARD_TCA8418=m
CONFIG_KEYBOARD_MATRIX=m
CONFIG_KEYBOARD_LM8323=m
CONFIG_KEYBOARD_LM8333=m
CONFIG_KEYBOARD_MAX7359=m
CONFIG_KEYBOARD_MCS=m
CONFIG_KEYBOARD_MPR121=m
CONFIG_KEYBOARD_NEWTON=m
CONFIG_KEYBOARD_OPENCORES=m
CONFIG_KEYBOARD_PINEPHONE=m
CONFIG_KEYBOARD_SAMSUNG=m
CONFIG_KEYBOARD_STOWAWAY=m
CONFIG_KEYBOARD_SUNKBD=m
CONFIG_KEYBOARD_IQS62X=m
CONFIG_KEYBOARD_TM2_TOUCHKEY=m
CONFIG_KEYBOARD_TWL4030=m
CONFIG_KEYBOARD_XTKBD=m
CONFIG_KEYBOARD_CROS_EC=m
CONFIG_KEYBOARD_MTK_PMIC=m
CONFIG_KEYBOARD_CYPRESS_SF=m
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=m
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
CONFIG_MOUSE_PS2_SENTELIC=y
CONFIG_MOUSE_PS2_TOUCHKIT=y
CONFIG_MOUSE_PS2_FOCALTECH=y
CONFIG_MOUSE_PS2_VMMOUSE=y
CONFIG_MOUSE_PS2_SMBUS=y
CONFIG_MOUSE_SERIAL=m
CONFIG_MOUSE_APPLETOUCH=m
CONFIG_MOUSE_BCM5974=m
CONFIG_MOUSE_CYAPA=m
CONFIG_MOUSE_ELAN_I2C=m
CONFIG_MOUSE_ELAN_I2C_I2C=y
CONFIG_MOUSE_ELAN_I2C_SMBUS=y
CONFIG_MOUSE_VSXXXAA=m
CONFIG_MOUSE_GPIO=m
CONFIG_MOUSE_SYNAPTICS_I2C=m
CONFIG_MOUSE_SYNAPTICS_USB=m
CONFIG_INPUT_JOYSTICK=y
CONFIG_JOYSTICK_ANALOG=m
CONFIG_JOYSTICK_A3D=m
CONFIG_JOYSTICK_ADC=m
CONFIG_JOYSTICK_ADI=m
CONFIG_JOYSTICK_COBRA=m
CONFIG_JOYSTICK_GF2K=m
CONFIG_JOYSTICK_GRIP=m
CONFIG_JOYSTICK_GRIP_MP=m
CONFIG_JOYSTICK_GUILLEMOT=m
CONFIG_JOYSTICK_INTERACT=m
CONFIG_JOYSTICK_SIDEWINDER=m
CONFIG_JOYSTICK_TMDC=m
CONFIG_JOYSTICK_IFORCE=m
CONFIG_JOYSTICK_IFORCE_USB=m
CONFIG_JOYSTICK_IFORCE_232=m
CONFIG_JOYSTICK_WARRIOR=m
CONFIG_JOYSTICK_MAGELLAN=m
CONFIG_JOYSTICK_SPACEORB=m
CONFIG_JOYSTICK_SPACEBALL=m
CONFIG_JOYSTICK_STINGER=m
CONFIG_JOYSTICK_TWIDJOY=m
CONFIG_JOYSTICK_ZHENHUA=m
CONFIG_JOYSTICK_DB9=m
CONFIG_JOYSTICK_GAMECON=m
CONFIG_JOYSTICK_TURBOGRAFX=m
CONFIG_JOYSTICK_AS5011=m
CONFIG_JOYSTICK_JOYDUMP=m
CONFIG_JOYSTICK_XPAD=m
CONFIG_JOYSTICK_XPAD_FF=y
CONFIG_JOYSTICK_XPAD_LEDS=y
CONFIG_JOYSTICK_WALKERA0701=m
CONFIG_JOYSTICK_PSXPAD_SPI=m
CONFIG_JOYSTICK_PSXPAD_SPI_FF=y
CONFIG_JOYSTICK_PXRC=m
CONFIG_JOYSTICK_QWIIC=m
CONFIG_JOYSTICK_FSIA6B=m
CONFIG_JOYSTICK_SENSEHAT=m
CONFIG_INPUT_TABLET=y
CONFIG_TABLET_USB_ACECAD=m
CONFIG_TABLET_USB_AIPTEK=m
CONFIG_TABLET_USB_HANWANG=m
CONFIG_TABLET_USB_KBTAB=m
CONFIG_TABLET_USB_PEGASUS=m
CONFIG_TABLET_SERIAL_WACOM4=m
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_88PM860X=m
CONFIG_TOUCHSCREEN_ADS7846=m
CONFIG_TOUCHSCREEN_AD7877=m
CONFIG_TOUCHSCREEN_AD7879=m
CONFIG_TOUCHSCREEN_AD7879_I2C=m
CONFIG_TOUCHSCREEN_AD7879_SPI=m
CONFIG_TOUCHSCREEN_ADC=m
CONFIG_TOUCHSCREEN_ATMEL_MXT=m
CONFIG_TOUCHSCREEN_ATMEL_MXT_T37=y
CONFIG_TOUCHSCREEN_AUO_PIXCIR=m
CONFIG_TOUCHSCREEN_BU21013=m
CONFIG_TOUCHSCREEN_BU21029=m
CONFIG_TOUCHSCREEN_CHIPONE_ICN8505=m
CONFIG_TOUCHSCREEN_CY8CTMA140=m
CONFIG_TOUCHSCREEN_CY8CTMG110=m
CONFIG_TOUCHSCREEN_CYTTSP_CORE=m
CONFIG_TOUCHSCREEN_CYTTSP_I2C=m
CONFIG_TOUCHSCREEN_CYTTSP_SPI=m
CONFIG_TOUCHSCREEN_CYTTSP4_CORE=m
CONFIG_TOUCHSCREEN_CYTTSP4_I2C=m
CONFIG_TOUCHSCREEN_CYTTSP4_SPI=m
CONFIG_TOUCHSCREEN_CYTTSP5=m
CONFIG_TOUCHSCREEN_DA9034=m
CONFIG_TOUCHSCREEN_DA9052=m
CONFIG_TOUCHSCREEN_DYNAPRO=m
CONFIG_TOUCHSCREEN_HAMPSHIRE=m
CONFIG_TOUCHSCREEN_EETI=m
CONFIG_TOUCHSCREEN_EGALAX_SERIAL=m
CONFIG_TOUCHSCREEN_EXC3000=m
CONFIG_TOUCHSCREEN_FUJITSU=m
CONFIG_TOUCHSCREEN_GOODIX=m
CONFIG_TOUCHSCREEN_HIDEEP=m
CONFIG_TOUCHSCREEN_HYCON_HY46XX=m
CONFIG_TOUCHSCREEN_HYNITRON_CSTXXX=m
CONFIG_TOUCHSCREEN_ILI210X=m
CONFIG_TOUCHSCREEN_ILITEK=m
CONFIG_TOUCHSCREEN_S6SY761=m
CONFIG_TOUCHSCREEN_GUNZE=m
CONFIG_TOUCHSCREEN_EKTF2127=m
CONFIG_TOUCHSCREEN_ELAN=y
CONFIG_TOUCHSCREEN_ELO=m
CONFIG_TOUCHSCREEN_WACOM_W8001=m
CONFIG_TOUCHSCREEN_WACOM_I2C=m
CONFIG_TOUCHSCREEN_MAX11801=m
CONFIG_TOUCHSCREEN_MCS5000=m
CONFIG_TOUCHSCREEN_MMS114=m
CONFIG_TOUCHSCREEN_MELFAS_MIP4=m
CONFIG_TOUCHSCREEN_MSG2638=m
CONFIG_TOUCHSCREEN_MTOUCH=m
# CONFIG_TOUCHSCREEN_NOVATEK_NVT_TS is not set
CONFIG_TOUCHSCREEN_IMAGIS=m
CONFIG_TOUCHSCREEN_INEXIO=m
CONFIG_TOUCHSCREEN_PENMOUNT=m
CONFIG_TOUCHSCREEN_EDT_FT5X06=m
CONFIG_TOUCHSCREEN_TOUCHRIGHT=m
CONFIG_TOUCHSCREEN_TOUCHWIN=m
CONFIG_TOUCHSCREEN_TI_AM335X_TSC=m
CONFIG_TOUCHSCREEN_PIXCIR=m
CONFIG_TOUCHSCREEN_WDT87XX_I2C=m
CONFIG_TOUCHSCREEN_WM831X=m
CONFIG_TOUCHSCREEN_WM97XX=m
CONFIG_TOUCHSCREEN_WM9705=y
CONFIG_TOUCHSCREEN_WM9712=y
CONFIG_TOUCHSCREEN_WM9713=y
CONFIG_TOUCHSCREEN_USB_COMPOSITE=m
CONFIG_TOUCHSCREEN_MC13783=m
CONFIG_TOUCHSCREEN_USB_EGALAX=y
CONFIG_TOUCHSCREEN_USB_PANJIT=y
CONFIG_TOUCHSCREEN_USB_3M=y
CONFIG_TOUCHSCREEN_USB_ITM=y
CONFIG_TOUCHSCREEN_USB_ETURBO=y
CONFIG_TOUCHSCREEN_USB_GUNZE=y
CONFIG_TOUCHSCREEN_USB_DMC_TSC10=y
CONFIG_TOUCHSCREEN_USB_IRTOUCH=y
CONFIG_TOUCHSCREEN_USB_IDEALTEK=y
CONFIG_TOUCHSCREEN_USB_GENERAL_TOUCH=y
CONFIG_TOUCHSCREEN_USB_GOTOP=y
CONFIG_TOUCHSCREEN_USB_JASTEC=y
CONFIG_TOUCHSCREEN_USB_ELO=y
CONFIG_TOUCHSCREEN_USB_E2I=y
CONFIG_TOUCHSCREEN_USB_ZYTRONIC=y
CONFIG_TOUCHSCREEN_USB_ETT_TC45USB=y
CONFIG_TOUCHSCREEN_USB_NEXIO=y
CONFIG_TOUCHSCREEN_USB_EASYTOUCH=y
CONFIG_TOUCHSCREEN_TOUCHIT213=m
CONFIG_TOUCHSCREEN_TSC_SERIO=m
CONFIG_TOUCHSCREEN_TSC200X_CORE=m
CONFIG_TOUCHSCREEN_TSC2004=m
CONFIG_TOUCHSCREEN_TSC2005=m
CONFIG_TOUCHSCREEN_TSC2007=m
CONFIG_TOUCHSCREEN_TSC2007_IIO=y
CONFIG_TOUCHSCREEN_PCAP=m
CONFIG_TOUCHSCREEN_RM_TS=m
CONFIG_TOUCHSCREEN_SILEAD=m
CONFIG_TOUCHSCREEN_SIS_I2C=m
CONFIG_TOUCHSCREEN_ST1232=m
CONFIG_TOUCHSCREEN_STMFTS=m
CONFIG_TOUCHSCREEN_SUR40=m
CONFIG_TOUCHSCREEN_SURFACE3_SPI=m
CONFIG_TOUCHSCREEN_SX8654=m
CONFIG_TOUCHSCREEN_TPS6507X=m
CONFIG_TOUCHSCREEN_ZET6223=m
CONFIG_TOUCHSCREEN_ZFORCE=m
CONFIG_TOUCHSCREEN_COLIBRI_VF50=m
CONFIG_TOUCHSCREEN_ROHM_BU21023=m
CONFIG_TOUCHSCREEN_IQS5XX=m
# CONFIG_TOUCHSCREEN_IQS7211 is not set
CONFIG_TOUCHSCREEN_ZINITIX=m
CONFIG_TOUCHSCREEN_HIMAX_HX83112B=m
CONFIG_INPUT_MISC=y
CONFIG_INPUT_88PM860X_ONKEY=m
CONFIG_INPUT_88PM80X_ONKEY=m
CONFIG_INPUT_AD714X=m
CONFIG_INPUT_AD714X_I2C=m
CONFIG_INPUT_AD714X_SPI=m
CONFIG_INPUT_ARIZONA_HAPTICS=m
CONFIG_INPUT_ATC260X_ONKEY=m
CONFIG_INPUT_BMA150=m
CONFIG_INPUT_E3X0_BUTTON=m
CONFIG_INPUT_PCSPKR=m
CONFIG_INPUT_MAX77693_HAPTIC=m
CONFIG_INPUT_MAX8925_ONKEY=m
CONFIG_INPUT_MAX8997_HAPTIC=m
CONFIG_INPUT_MC13783_PWRBUTTON=m
CONFIG_INPUT_MMA8450=m
CONFIG_INPUT_APANEL=m
CONFIG_INPUT_GPIO_BEEPER=m
CONFIG_INPUT_GPIO_DECODER=m
CONFIG_INPUT_GPIO_VIBRA=m
CONFIG_INPUT_ATLAS_BTNS=m
CONFIG_INPUT_ATI_REMOTE2=m
CONFIG_INPUT_KEYSPAN_REMOTE=m
CONFIG_INPUT_KXTJ9=m
CONFIG_INPUT_POWERMATE=m
CONFIG_INPUT_YEALINK=m
CONFIG_INPUT_CM109=m
CONFIG_INPUT_REGULATOR_HAPTIC=m
CONFIG_INPUT_RETU_PWRBUTTON=m
CONFIG_INPUT_AXP20X_PEK=m
CONFIG_INPUT_TWL4030_PWRBUTTON=m
CONFIG_INPUT_TWL4030_VIBRA=m
CONFIG_INPUT_TWL6040_VIBRA=m
CONFIG_INPUT_UINPUT=y
CONFIG_INPUT_PALMAS_PWRBUTTON=m
CONFIG_INPUT_PCF50633_PMU=m
CONFIG_INPUT_PCF8574=m
CONFIG_INPUT_PWM_BEEPER=m
CONFIG_INPUT_PWM_VIBRA=m
CONFIG_INPUT_GPIO_ROTARY_ENCODER=m
CONFIG_INPUT_DA7280_HAPTICS=m
CONFIG_INPUT_DA9052_ONKEY=m
CONFIG_INPUT_DA9055_ONKEY=m
CONFIG_INPUT_DA9063_ONKEY=m
CONFIG_INPUT_WM831X_ON=m
CONFIG_INPUT_PCAP=m
CONFIG_INPUT_ADXL34X=m
CONFIG_INPUT_ADXL34X_I2C=m
CONFIG_INPUT_ADXL34X_SPI=m
CONFIG_INPUT_IMS_PCU=m
CONFIG_INPUT_IQS269A=m
CONFIG_INPUT_IQS626A=m
CONFIG_INPUT_IQS7222=m
CONFIG_INPUT_CMA3000=m
CONFIG_INPUT_CMA3000_I2C=m
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m
CONFIG_INPUT_IDEAPAD_SLIDEBAR=m
CONFIG_INPUT_SOC_BUTTON_ARRAY=m
CONFIG_INPUT_DRV260X_HAPTICS=m
CONFIG_INPUT_DRV2665_HAPTICS=m
CONFIG_INPUT_DRV2667_HAPTICS=m
CONFIG_INPUT_RAVE_SP_PWRBUTTON=m
CONFIG_INPUT_RT5120_PWRKEY=m
CONFIG_RMI4_CORE=m
CONFIG_RMI4_I2C=m
CONFIG_RMI4_SPI=m
CONFIG_RMI4_SMB=m
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=m
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
CONFIG_RMI4_F34=y
CONFIG_RMI4_F3A=y
CONFIG_RMI4_F54=y
CONFIG_RMI4_F55=y

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=m
CONFIG_SERIO_CT82C710=m
CONFIG_SERIO_PARKBD=m
CONFIG_SERIO_PCIPS2=m
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
CONFIG_SERIO_PS2MULT=m
CONFIG_SERIO_ARC_PS2=m
CONFIG_HYPERV_KEYBOARD=m
CONFIG_SERIO_GPIO_PS2=m
CONFIG_USERIO=m
CONFIG_GAMEPORT=m
CONFIG_GAMEPORT_EMU10K1=m
CONFIG_GAMEPORT_FM801=m
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=0
CONFIG_LEGACY_TIOCSTI=y
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_16550A_VARIANTS=y
CONFIG_SERIAL_8250_FINTEK=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCILIB=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=m
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_MEN_MCB=m
CONFIG_SERIAL_8250_NR_UARTS=48
CONFIG_SERIAL_8250_RUNTIME_UARTS=32
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
# CONFIG_SERIAL_8250_PCI1XXXX is not set
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
# CONFIG_SERIAL_8250_DFL is not set
CONFIG_SERIAL_8250_DW=m
CONFIG_SERIAL_8250_RT288X=y
CONFIG_SERIAL_8250_LPSS=m
CONFIG_SERIAL_8250_MID=y
CONFIG_SERIAL_8250_PERICOM=m

#
# Non-8250 serial port support
#
CONFIG_SERIAL_KGDB_NMI=y
CONFIG_SERIAL_MAX3100=m
CONFIG_SERIAL_MAX310X=y
CONFIG_SERIAL_UARTLITE=m
CONFIG_SERIAL_UARTLITE_NR_UARTS=1
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
CONFIG_SERIAL_JSM=m
CONFIG_SERIAL_LANTIQ=m
CONFIG_SERIAL_SCCNXP=y
CONFIG_SERIAL_SCCNXP_CONSOLE=y
CONFIG_SERIAL_SC16IS7XX_CORE=m
CONFIG_SERIAL_SC16IS7XX=m
CONFIG_SERIAL_SC16IS7XX_I2C=y
CONFIG_SERIAL_SC16IS7XX_SPI=y
CONFIG_SERIAL_ALTERA_JTAGUART=m
CONFIG_SERIAL_ALTERA_UART=m
CONFIG_SERIAL_ALTERA_UART_MAXPORTS=4
CONFIG_SERIAL_ALTERA_UART_BAUDRATE=115200
CONFIG_SERIAL_ARC=m
CONFIG_SERIAL_ARC_NR_PORTS=1
CONFIG_SERIAL_RP2=m
CONFIG_SERIAL_RP2_NR_UARTS=32
CONFIG_SERIAL_FSL_LPUART=m
CONFIG_SERIAL_FSL_LINFLEXUART=m
CONFIG_SERIAL_MEN_Z135=m
CONFIG_SERIAL_SPRD=m
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_MOXA_INTELLIO=m
CONFIG_MOXA_SMARTIO=m
CONFIG_N_HDLC=m
CONFIG_IPWIRELESS=m
CONFIG_N_GSM=m
CONFIG_NOZOMI=m
CONFIG_NULL_TTY=m
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_RPMSG_TTY=m
CONFIG_SERIAL_DEV_BUS=y
CONFIG_SERIAL_DEV_CTRL_TTYPORT=y
CONFIG_TTY_PRINTK=y
CONFIG_TTY_PRINTK_LEVEL=6
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_VIRTIO_CONSOLE=y
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
# CONFIG_IPMI_PANIC_EVENT is not set
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_SSIF=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_TIMERIOMEM=m
CONFIG_HW_RANDOM_INTEL=m
CONFIG_HW_RANDOM_AMD=m
CONFIG_HW_RANDOM_BA431=m
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_XIPHERA=m
CONFIG_APPLICOM=m
CONFIG_MWAVE=m
CONFIG_DEVMEM=y
CONFIG_NVRAM=m
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
CONFIG_HPET_MMAP_DEFAULT=y
CONFIG_HANGCHECK_TIMER=m
CONFIG_UV_MMTIMER=m
CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
CONFIG_TCG_TIS_SPI=m
CONFIG_TCG_TIS_SPI_CR50=y
CONFIG_TCG_TIS_I2C=m
CONFIG_TCG_TIS_I2C_CR50=m
CONFIG_TCG_TIS_I2C_ATMEL=m
CONFIG_TCG_TIS_I2C_INFINEON=m
CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
CONFIG_TCG_XEN=m
CONFIG_TCG_CRB=y
CONFIG_TCG_VTPM_PROXY=m
CONFIG_TCG_TIS_ST33ZP24=m
CONFIG_TCG_TIS_ST33ZP24_I2C=m
CONFIG_TCG_TIS_ST33ZP24_SPI=m
CONFIG_TELCLOCK=m
CONFIG_XILLYBUS_CLASS=m
CONFIG_XILLYBUS=m
CONFIG_XILLYBUS_PCIE=m
CONFIG_XILLYUSB=m
# end of Character devices

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
CONFIG_I2C_MUX_GPIO=m
CONFIG_I2C_MUX_LTC4306=m
CONFIG_I2C_MUX_PCA9541=m
CONFIG_I2C_MUX_PCA954x=m
CONFIG_I2C_MUX_REG=m
CONFIG_I2C_MUX_MLXCPLD=m
# end of Multiplexer I2C Chip support

CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=m
CONFIG_I2C_ALGOBIT=m
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
CONFIG_I2C_CCGX_UCSI=m
CONFIG_I2C_ALI1535=m
CONFIG_I2C_ALI1563=m
CONFIG_I2C_ALI15X3=m
CONFIG_I2C_AMD756=m
CONFIG_I2C_AMD756_S4882=m
CONFIG_I2C_AMD8111=m
CONFIG_I2C_AMD_MP2=m
CONFIG_I2C_I801=m
CONFIG_I2C_ISCH=m
CONFIG_I2C_ISMT=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_CHT_WC=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_NFORCE2_S4985=m
CONFIG_I2C_NVIDIA_GPU=m
CONFIG_I2C_SIS5595=m
CONFIG_I2C_SIS630=m
CONFIG_I2C_SIS96X=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m

#
# ACPI drivers
#
CONFIG_I2C_SCMI=m

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
CONFIG_I2C_CBUS_GPIO=m
CONFIG_I2C_DESIGNWARE_CORE=y
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=y
CONFIG_I2C_DESIGNWARE_BAYTRAIL=y
CONFIG_I2C_DESIGNWARE_PCI=m
# CONFIG_I2C_EMEV2 is not set
CONFIG_I2C_GPIO=m
# CONFIG_I2C_GPIO_FAULT_INJECTOR is not set
CONFIG_I2C_KEMPLD=m
CONFIG_I2C_OCORES=m
CONFIG_I2C_PCA_PLATFORM=m
CONFIG_I2C_SIMTEC=m
CONFIG_I2C_XILINX=m

#
# External I2C/SMBus adapter drivers
#
CONFIG_I2C_DIOLAN_U2C=m
CONFIG_I2C_DLN2=m
CONFIG_I2C_CP2615=m
CONFIG_I2C_PARPORT=m
CONFIG_I2C_PCI1XXXX=m
CONFIG_I2C_ROBOTFUZZ_OSIF=m
CONFIG_I2C_TAOS_EVM=m
CONFIG_I2C_TINY_USB=m
CONFIG_I2C_VIPERBOARD=m

#
# Other I2C/SMBus bus drivers
#
CONFIG_I2C_MLXCPLD=m
CONFIG_I2C_CROS_EC_TUNNEL=m
CONFIG_I2C_VIRTIO=m
# end of I2C Hardware Bus support

CONFIG_I2C_STUB=m
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

CONFIG_I3C=m
CONFIG_CDNS_I3C_MASTER=m
CONFIG_DW_I3C_MASTER=m
CONFIG_SVC_I3C_MASTER=m
CONFIG_MIPI_I3C_HCI=m
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
CONFIG_SPI_MEM=y

#
# SPI Master Controller Drivers
#
CONFIG_SPI_ALTERA=m
CONFIG_SPI_ALTERA_CORE=m
CONFIG_SPI_ALTERA_DFL=m
CONFIG_SPI_AXI_SPI_ENGINE=m
CONFIG_SPI_BITBANG=m
CONFIG_SPI_BUTTERFLY=m
CONFIG_SPI_CADENCE=m
CONFIG_SPI_DESIGNWARE=m
CONFIG_SPI_DW_DMA=y
CONFIG_SPI_DW_PCI=m
CONFIG_SPI_DW_MMIO=m
CONFIG_SPI_DLN2=m
CONFIG_SPI_GPIO=m
CONFIG_SPI_INTEL=m
CONFIG_SPI_INTEL_PCI=m
CONFIG_SPI_INTEL_PLATFORM=m
CONFIG_SPI_LM70_LLP=m
CONFIG_SPI_MICROCHIP_CORE=m
CONFIG_SPI_MICROCHIP_CORE_QSPI=m
CONFIG_SPI_LANTIQ_SSC=m
CONFIG_SPI_OC_TINY=m
CONFIG_SPI_PCI1XXXX=m
CONFIG_SPI_PXA2XX=m
CONFIG_SPI_PXA2XX_PCI=m
CONFIG_SPI_SC18IS602=m
CONFIG_SPI_SIFIVE=m
CONFIG_SPI_MXIC=m
CONFIG_SPI_XCOMM=m
# CONFIG_SPI_XILINX is not set
CONFIG_SPI_ZYNQMP_GQSPI=m
CONFIG_SPI_AMD=m

#
# SPI Multiplexer support
#
CONFIG_SPI_MUX=m

#
# SPI Protocol Masters
#
CONFIG_SPI_SPIDEV=m
CONFIG_SPI_LOOPBACK_TEST=m
CONFIG_SPI_TLE62X0=m
CONFIG_SPI_SLAVE=y
CONFIG_SPI_SLAVE_TIME=m
CONFIG_SPI_SLAVE_SYSTEM_CONTROL=m
CONFIG_SPI_DYNAMIC=y
CONFIG_SPMI=m
CONFIG_SPMI_HISI3670=m
CONFIG_HSI=m
CONFIG_HSI_BOARDINFO=y

#
# HSI controllers
#

#
# HSI clients
#
CONFIG_HSI_CHAR=m
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
CONFIG_PTP_1588_CLOCK_OPTIONAL=y
CONFIG_DP83640_PHY=m
CONFIG_PTP_1588_CLOCK_INES=m
CONFIG_PTP_1588_CLOCK_KVM=m
CONFIG_PTP_1588_CLOCK_IDT82P33=m
CONFIG_PTP_1588_CLOCK_IDTCM=m
# CONFIG_PTP_1588_CLOCK_MOCK is not set
CONFIG_PTP_1588_CLOCK_VMW=m
CONFIG_PTP_1588_CLOCK_OCP=m
# CONFIG_PTP_DFL_TOD is not set
# end of PTP clock support

CONFIG_PINCTRL=y
CONFIG_PINMUX=y
CONFIG_PINCONF=y
CONFIG_GENERIC_PINCONF=y
# CONFIG_DEBUG_PINCTRL is not set
CONFIG_PINCTRL_AMD=y
CONFIG_PINCTRL_CY8C95X0=m
CONFIG_PINCTRL_DA9062=m
CONFIG_PINCTRL_MCP23S08_I2C=m
CONFIG_PINCTRL_MCP23S08_SPI=m
CONFIG_PINCTRL_MCP23S08=m
CONFIG_PINCTRL_SX150X=y
CONFIG_PINCTRL_MADERA=m
CONFIG_PINCTRL_CS47L15=y
CONFIG_PINCTRL_CS47L35=y
CONFIG_PINCTRL_CS47L85=y
CONFIG_PINCTRL_CS47L90=y
CONFIG_PINCTRL_CS47L92=y

#
# Intel pinctrl drivers
#
CONFIG_PINCTRL_BAYTRAIL=y
CONFIG_PINCTRL_CHERRYVIEW=y
CONFIG_PINCTRL_LYNXPOINT=m
CONFIG_PINCTRL_INTEL=y
CONFIG_PINCTRL_ALDERLAKE=m
CONFIG_PINCTRL_BROXTON=m
CONFIG_PINCTRL_CANNONLAKE=m
CONFIG_PINCTRL_CEDARFORK=m
CONFIG_PINCTRL_DENVERTON=m
CONFIG_PINCTRL_ELKHARTLAKE=m
CONFIG_PINCTRL_EMMITSBURG=m
CONFIG_PINCTRL_GEMINILAKE=m
CONFIG_PINCTRL_ICELAKE=m
CONFIG_PINCTRL_JASPERLAKE=m
CONFIG_PINCTRL_LAKEFIELD=m
CONFIG_PINCTRL_LEWISBURG=m
CONFIG_PINCTRL_METEORLAKE=m
CONFIG_PINCTRL_SUNRISEPOINT=m
CONFIG_PINCTRL_TIGERLAKE=m
# end of Intel pinctrl drivers

#
# Renesas pinctrl drivers
#
# end of Renesas pinctrl drivers

CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_CDEV=y
CONFIG_GPIO_CDEV_V1=y
CONFIG_GPIO_GENERIC=y
CONFIG_GPIO_REGMAP=m
CONFIG_GPIO_MAX730X=m
CONFIG_GPIO_IDIO_16=m

#
# Memory mapped GPIO drivers
#
CONFIG_GPIO_AMDPT=m
CONFIG_GPIO_DWAPB=m
CONFIG_GPIO_EXAR=m
CONFIG_GPIO_GENERIC_PLATFORM=y
CONFIG_GPIO_ICH=m
CONFIG_GPIO_MB86S7X=m
CONFIG_GPIO_MENZ127=m
CONFIG_GPIO_SIOX=m
CONFIG_GPIO_AMD_FCH=m
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
CONFIG_GPIO_VX855=m
CONFIG_GPIO_I8255=m
CONFIG_GPIO_104_DIO_48E=m
CONFIG_GPIO_104_IDIO_16=m
CONFIG_GPIO_104_IDI_48=m
CONFIG_GPIO_F7188X=m
CONFIG_GPIO_GPIO_MM=m
CONFIG_GPIO_IT87=m
CONFIG_GPIO_SCH=m
CONFIG_GPIO_SCH311X=m
CONFIG_GPIO_WINBOND=m
CONFIG_GPIO_WS16C48=m
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_FXL6408 is not set
# CONFIG_GPIO_DS4520 is not set
CONFIG_GPIO_MAX7300=m
CONFIG_GPIO_MAX732X=m
CONFIG_GPIO_PCA953X=m
CONFIG_GPIO_PCA953X_IRQ=y
CONFIG_GPIO_PCA9570=m
CONFIG_GPIO_PCF857X=m
CONFIG_GPIO_TPIC2810=m
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
CONFIG_GPIO_ADP5520=m
CONFIG_GPIO_ARIZONA=m
CONFIG_GPIO_BD9571MWV=m
CONFIG_GPIO_CRYSTAL_COVE=y
CONFIG_GPIO_DA9052=m
CONFIG_GPIO_DA9055=m
CONFIG_GPIO_DLN2=m
# CONFIG_GPIO_ELKHARTLAKE is not set
CONFIG_GPIO_JANZ_TTL=m
CONFIG_GPIO_KEMPLD=m
CONFIG_GPIO_LP3943=m
CONFIG_GPIO_LP873X=m
CONFIG_GPIO_MADERA=m
CONFIG_GPIO_PALMAS=y
CONFIG_GPIO_RC5T583=y
CONFIG_GPIO_TPS65086=m
CONFIG_GPIO_TPS6586X=y
CONFIG_GPIO_TPS65910=y
CONFIG_GPIO_TPS65912=m
CONFIG_GPIO_TPS68470=m
CONFIG_GPIO_TQMX86=m
CONFIG_GPIO_TWL4030=m
CONFIG_GPIO_TWL6040=m
CONFIG_GPIO_WHISKEY_COVE=m
CONFIG_GPIO_WM831X=m
CONFIG_GPIO_WM8350=m
CONFIG_GPIO_WM8994=m
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
CONFIG_GPIO_AMD8111=m
CONFIG_GPIO_ML_IOH=m
CONFIG_GPIO_PCI_IDIO_16=m
CONFIG_GPIO_PCIE_IDIO_24=m
CONFIG_GPIO_RDC321X=m
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
CONFIG_GPIO_MAX3191X=m
CONFIG_GPIO_MAX7301=m
CONFIG_GPIO_MC33880=m
CONFIG_GPIO_PISOSR=m
CONFIG_GPIO_XRA1403=m
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
CONFIG_GPIO_VIPERBOARD=m
# end of USB GPIO expanders

#
# Virtual GPIO drivers
#
CONFIG_GPIO_AGGREGATOR=m
CONFIG_GPIO_LATCH=m
# CONFIG_GPIO_MOCKUP is not set
CONFIG_GPIO_VIRTIO=m
CONFIG_GPIO_SIM=m
# end of Virtual GPIO drivers

CONFIG_W1=m
CONFIG_W1_CON=y

#
# 1-wire Bus Masters
#
CONFIG_W1_MASTER_MATROX=m
CONFIG_W1_MASTER_DS2490=m
CONFIG_W1_MASTER_DS2482=m
CONFIG_W1_MASTER_GPIO=m
CONFIG_W1_MASTER_SGI=m
# end of 1-wire Bus Masters

#
# 1-wire Slaves
#
CONFIG_W1_SLAVE_THERM=m
CONFIG_W1_SLAVE_SMEM=m
CONFIG_W1_SLAVE_DS2405=m
CONFIG_W1_SLAVE_DS2408=m
CONFIG_W1_SLAVE_DS2408_READBACK=y
CONFIG_W1_SLAVE_DS2413=m
CONFIG_W1_SLAVE_DS2406=m
CONFIG_W1_SLAVE_DS2423=m
CONFIG_W1_SLAVE_DS2805=m
CONFIG_W1_SLAVE_DS2430=m
CONFIG_W1_SLAVE_DS2431=m
CONFIG_W1_SLAVE_DS2433=m
# CONFIG_W1_SLAVE_DS2433_CRC is not set
CONFIG_W1_SLAVE_DS2438=m
CONFIG_W1_SLAVE_DS250X=m
CONFIG_W1_SLAVE_DS2780=m
CONFIG_W1_SLAVE_DS2781=m
CONFIG_W1_SLAVE_DS28E04=m
CONFIG_W1_SLAVE_DS28E17=m
# end of 1-wire Slaves

CONFIG_POWER_RESET=y
CONFIG_POWER_RESET_ATC260X=m
CONFIG_POWER_RESET_MT6323=y
CONFIG_POWER_RESET_RESTART=y
CONFIG_POWER_RESET_TPS65086=y
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
CONFIG_GENERIC_ADC_BATTERY=m
CONFIG_IP5XXX_POWER=m
CONFIG_MAX8925_POWER=m
CONFIG_WM831X_BACKUP=m
CONFIG_WM831X_POWER=m
CONFIG_WM8350_POWER=m
CONFIG_TEST_POWER=m
CONFIG_BATTERY_88PM860X=m
CONFIG_CHARGER_ADP5061=m
CONFIG_BATTERY_CW2015=m
CONFIG_BATTERY_DS2760=m
CONFIG_BATTERY_DS2780=m
CONFIG_BATTERY_DS2781=m
CONFIG_BATTERY_DS2782=m
CONFIG_BATTERY_SAMSUNG_SDI=y
CONFIG_BATTERY_SBS=m
CONFIG_CHARGER_SBS=m
CONFIG_MANAGER_SBS=m
CONFIG_BATTERY_BQ27XXX=m
CONFIG_BATTERY_BQ27XXX_I2C=m
CONFIG_BATTERY_BQ27XXX_HDQ=m
# CONFIG_BATTERY_BQ27XXX_DT_UPDATES_NVM is not set
CONFIG_BATTERY_DA9030=m
CONFIG_BATTERY_DA9052=m
CONFIG_CHARGER_DA9150=m
CONFIG_BATTERY_DA9150=m
CONFIG_CHARGER_AXP20X=m
CONFIG_BATTERY_AXP20X=m
CONFIG_AXP20X_POWER=m
CONFIG_AXP288_CHARGER=m
CONFIG_AXP288_FUEL_GAUGE=m
CONFIG_BATTERY_MAX17040=m
CONFIG_BATTERY_MAX17042=m
CONFIG_BATTERY_MAX1721X=m
CONFIG_BATTERY_TWL4030_MADC=m
CONFIG_CHARGER_88PM860X=m
CONFIG_CHARGER_PCF50633=m
CONFIG_BATTERY_RX51=m
CONFIG_CHARGER_ISP1704=m
CONFIG_CHARGER_MAX8903=m
CONFIG_CHARGER_TWL4030=m
CONFIG_CHARGER_LP8727=m
CONFIG_CHARGER_LP8788=m
CONFIG_CHARGER_GPIO=m
CONFIG_CHARGER_MANAGER=y
CONFIG_CHARGER_LT3651=m
CONFIG_CHARGER_LTC4162L=m
CONFIG_CHARGER_MAX14577=m
CONFIG_CHARGER_MAX77693=m
CONFIG_CHARGER_MAX77976=m
CONFIG_CHARGER_MAX8997=m
CONFIG_CHARGER_MAX8998=m
CONFIG_CHARGER_MP2629=m
CONFIG_CHARGER_MT6360=m
CONFIG_CHARGER_MT6370=m
CONFIG_CHARGER_BQ2415X=m
CONFIG_CHARGER_BQ24190=m
CONFIG_CHARGER_BQ24257=m
CONFIG_CHARGER_BQ24735=m
CONFIG_CHARGER_BQ2515X=m
CONFIG_CHARGER_BQ25890=m
CONFIG_CHARGER_BQ25980=m
CONFIG_CHARGER_BQ256XX=m
CONFIG_CHARGER_SMB347=m
CONFIG_CHARGER_TPS65090=m
CONFIG_BATTERY_GAUGE_LTC2941=m
CONFIG_BATTERY_GOLDFISH=m
CONFIG_BATTERY_RT5033=m
# CONFIG_CHARGER_RT5033 is not set
CONFIG_CHARGER_RT9455=m
# CONFIG_CHARGER_RT9467 is not set
# CONFIG_CHARGER_RT9471 is not set
CONFIG_CHARGER_CROS_USBPD=m
CONFIG_CHARGER_CROS_PCHG=m
CONFIG_CHARGER_BD99954=m
CONFIG_CHARGER_WILCO=m
CONFIG_BATTERY_SURFACE=m
CONFIG_CHARGER_SURFACE=m
CONFIG_BATTERY_UG3105=m
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
CONFIG_SENSORS_SMPRO=m
CONFIG_SENSORS_AD7314=m
CONFIG_SENSORS_AD7414=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
CONFIG_SENSORS_ADM1177=m
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7X10=m
CONFIG_SENSORS_ADT7310=m
CONFIG_SENSORS_ADT7410=m
CONFIG_SENSORS_ADT7411=m
CONFIG_SENSORS_ADT7462=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_ADT7475=m
CONFIG_SENSORS_AHT10=m
CONFIG_SENSORS_AQUACOMPUTER_D5NEXT=m
CONFIG_SENSORS_AS370=m
CONFIG_SENSORS_ASC7621=m
CONFIG_SENSORS_AXI_FAN_CONTROL=m
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_K10TEMP=m
CONFIG_SENSORS_FAM15H_POWER=m
CONFIG_SENSORS_APPLESMC=m
CONFIG_SENSORS_ASB100=m
CONFIG_SENSORS_ATXP1=m
CONFIG_SENSORS_CORSAIR_CPRO=m
CONFIG_SENSORS_CORSAIR_PSU=m
CONFIG_SENSORS_DRIVETEMP=m
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
CONFIG_SENSORS_DELL_SMM=m
CONFIG_I8K=y
CONFIG_SENSORS_DA9052_ADC=m
CONFIG_SENSORS_DA9055=m
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_MC13783_ADC=m
CONFIG_SENSORS_FSCHMD=m
CONFIG_SENSORS_FTSTEUTATES=m
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_G760A=m
CONFIG_SENSORS_G762=m
CONFIG_SENSORS_HIH6130=m
# CONFIG_SENSORS_HS3001 is not set
CONFIG_SENSORS_IBMAEM=m
CONFIG_SENSORS_IBMPEX=m
CONFIG_SENSORS_IIO_HWMON=m
CONFIG_SENSORS_I5500=m
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_JC42=m
CONFIG_SENSORS_POWR1220=m
CONFIG_SENSORS_LINEAGE=m
CONFIG_SENSORS_LTC2945=m
CONFIG_SENSORS_LTC2947=m
CONFIG_SENSORS_LTC2947_I2C=m
CONFIG_SENSORS_LTC2947_SPI=m
CONFIG_SENSORS_LTC2990=m
CONFIG_SENSORS_LTC2992=m
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LTC4215=m
CONFIG_SENSORS_LTC4222=m
CONFIG_SENSORS_LTC4245=m
CONFIG_SENSORS_LTC4260=m
CONFIG_SENSORS_LTC4261=m
CONFIG_SENSORS_MAX1111=m
CONFIG_SENSORS_MAX127=m
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX1668=m
CONFIG_SENSORS_MAX197=m
CONFIG_SENSORS_MAX31722=m
CONFIG_SENSORS_MAX31730=m
CONFIG_SENSORS_MAX31760=m
# CONFIG_MAX31827 is not set
CONFIG_SENSORS_MAX6620=m
CONFIG_SENSORS_MAX6621=m
CONFIG_SENSORS_MAX6639=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_MAX6697=m
CONFIG_SENSORS_MAX31790=m
# CONFIG_SENSORS_MC34VR500 is not set
CONFIG_SENSORS_MCP3021=m
CONFIG_SENSORS_MLXREG_FAN=m
CONFIG_SENSORS_TC654=m
CONFIG_SENSORS_TPS23861=m
CONFIG_SENSORS_MENF21BMC_HWMON=m
CONFIG_SENSORS_MR75203=m
CONFIG_SENSORS_ADCXX=m
CONFIG_SENSORS_LM63=m
CONFIG_SENSORS_LM70=m
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_LM95234=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_LM95245=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
CONFIG_SENSORS_NTC_THERMISTOR=m
CONFIG_SENSORS_NCT6683=m
CONFIG_SENSORS_NCT6775_CORE=m
CONFIG_SENSORS_NCT6775=m
CONFIG_SENSORS_NCT6775_I2C=m
CONFIG_SENSORS_NCT7802=m
CONFIG_SENSORS_NCT7904=m
CONFIG_SENSORS_NPCM7XX=m
CONFIG_SENSORS_NZXT_KRAKEN2=m
CONFIG_SENSORS_NZXT_SMART2=m
CONFIG_SENSORS_OCC_P8_I2C=m
CONFIG_SENSORS_OCC=m
CONFIG_SENSORS_OXP=m
CONFIG_SENSORS_PCF8591=m
CONFIG_SENSORS_PECI_CPUTEMP=m
CONFIG_SENSORS_PECI_DIMMTEMP=m
CONFIG_SENSORS_PECI=m
CONFIG_PMBUS=m
CONFIG_SENSORS_PMBUS=m
# CONFIG_SENSORS_ACBEL_FSG032 is not set
CONFIG_SENSORS_ADM1266=m
CONFIG_SENSORS_ADM1275=m
CONFIG_SENSORS_BEL_PFE=m
CONFIG_SENSORS_BPA_RS600=m
CONFIG_SENSORS_DELTA_AHE50DC_FAN=m
CONFIG_SENSORS_FSP_3Y=m
CONFIG_SENSORS_IBM_CFFPS=m
CONFIG_SENSORS_DPS920AB=m
CONFIG_SENSORS_INSPUR_IPSPS=m
CONFIG_SENSORS_IR35221=m
CONFIG_SENSORS_IR36021=m
CONFIG_SENSORS_IR38064=m
CONFIG_SENSORS_IR38064_REGULATOR=y
CONFIG_SENSORS_IRPS5401=m
CONFIG_SENSORS_ISL68137=m
CONFIG_SENSORS_LM25066=m
CONFIG_SENSORS_LM25066_REGULATOR=y
CONFIG_SENSORS_LT7182S=m
CONFIG_SENSORS_LTC2978=m
CONFIG_SENSORS_LTC2978_REGULATOR=y
CONFIG_SENSORS_LTC3815=m
CONFIG_SENSORS_MAX15301=m
CONFIG_SENSORS_MAX16064=m
CONFIG_SENSORS_MAX16601=m
CONFIG_SENSORS_MAX20730=m
CONFIG_SENSORS_MAX20751=m
CONFIG_SENSORS_MAX31785=m
CONFIG_SENSORS_MAX34440=m
CONFIG_SENSORS_MAX8688=m
CONFIG_SENSORS_MP2888=m
CONFIG_SENSORS_MP2975=m
# CONFIG_SENSORS_MP2975_REGULATOR is not set
CONFIG_SENSORS_MP5023=m
# CONFIG_SENSORS_MPQ7932 is not set
CONFIG_SENSORS_PIM4328=m
CONFIG_SENSORS_PLI1209BC=m
CONFIG_SENSORS_PLI1209BC_REGULATOR=y
CONFIG_SENSORS_PM6764TR=m
CONFIG_SENSORS_PXE1610=m
CONFIG_SENSORS_Q54SJ108A2=m
CONFIG_SENSORS_STPDDC60=m
# CONFIG_SENSORS_TDA38640 is not set
CONFIG_SENSORS_TPS40422=m
CONFIG_SENSORS_TPS53679=m
CONFIG_SENSORS_TPS546D24=m
CONFIG_SENSORS_UCD9000=m
CONFIG_SENSORS_UCD9200=m
CONFIG_SENSORS_XDPE152=m
CONFIG_SENSORS_XDPE122=m
CONFIG_SENSORS_XDPE122_REGULATOR=y
CONFIG_SENSORS_ZL6100=m
CONFIG_SENSORS_SBTSI=m
CONFIG_SENSORS_SBRMI=m
CONFIG_SENSORS_SHT15=m
CONFIG_SENSORS_SHT21=m
CONFIG_SENSORS_SHT3x=m
CONFIG_SENSORS_SHT4x=m
CONFIG_SENSORS_SHTC1=m
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_SY7636A=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_EMC1403=m
CONFIG_SENSORS_EMC2103=m
CONFIG_SENSORS_EMC2305=m
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_SCH56XX_COMMON=m
CONFIG_SENSORS_SCH5627=m
CONFIG_SENSORS_SCH5636=m
CONFIG_SENSORS_STTS751=m
CONFIG_SENSORS_ADC128D818=m
CONFIG_SENSORS_ADS7828=m
CONFIG_SENSORS_ADS7871=m
CONFIG_SENSORS_AMC6821=m
CONFIG_SENSORS_INA209=m
CONFIG_SENSORS_INA2XX=m
CONFIG_SENSORS_INA238=m
CONFIG_SENSORS_INA3221=m
CONFIG_SENSORS_TC74=m
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_TMP102=m
CONFIG_SENSORS_TMP103=m
CONFIG_SENSORS_TMP108=m
CONFIG_SENSORS_TMP401=m
CONFIG_SENSORS_TMP421=m
CONFIG_SENSORS_TMP464=m
CONFIG_SENSORS_TMP513=m
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
CONFIG_SENSORS_W83773G=m
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83795=m
# CONFIG_SENSORS_W83795_FANCTRL is not set
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83L786NG=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
CONFIG_SENSORS_WM831X=m
CONFIG_SENSORS_WM8350=m
CONFIG_SENSORS_XGENE=m

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
CONFIG_SENSORS_ATK0110=m
CONFIG_SENSORS_ASUS_WMI=m
CONFIG_SENSORS_ASUS_EC=m
# CONFIG_SENSORS_HP_WMI is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_NETLINK=y
CONFIG_THERMAL_STATISTICS=y
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_ACPI=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_DEFAULT_GOV_POWER_ALLOCATOR is not set
# CONFIG_THERMAL_DEFAULT_GOV_BANG_BANG is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y
CONFIG_DEVFREQ_THERMAL=y
CONFIG_THERMAL_EMULATION=y

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_THERMAL_VECTOR=y
CONFIG_INTEL_TCC=y
CONFIG_X86_PKG_TEMP_THERMAL=m
CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
CONFIG_INTEL_SOC_DTS_THERMAL=m

#
# ACPI INT340X thermal drivers
#
CONFIG_INT340X_THERMAL=m
CONFIG_ACPI_THERMAL_REL=m
CONFIG_INT3406_THERMAL=m
CONFIG_PROC_THERMAL_MMIO_RAPL=m
# end of ACPI INT340X thermal drivers

CONFIG_INTEL_BXT_PMIC_THERMAL=m
CONFIG_INTEL_PCH_THERMAL=m
CONFIG_INTEL_TCC_COOLING=m
CONFIG_INTEL_HFI_THERMAL=y
# end of Intel thermal drivers

CONFIG_GENERIC_ADC_THERMAL=m
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
CONFIG_WATCHDOG_SYSFS=y
# CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT is not set

#
# Watchdog Pretimeout Governors
#
CONFIG_WATCHDOG_PRETIMEOUT_GOV=y
CONFIG_WATCHDOG_PRETIMEOUT_GOV_SEL=m
CONFIG_WATCHDOG_PRETIMEOUT_GOV_NOOP=y
CONFIG_WATCHDOG_PRETIMEOUT_GOV_PANIC=m
CONFIG_WATCHDOG_PRETIMEOUT_DEFAULT_GOV_NOOP=y
# CONFIG_WATCHDOG_PRETIMEOUT_DEFAULT_GOV_PANIC is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_SOFT_WATCHDOG_PRETIMEOUT=y
CONFIG_DA9052_WATCHDOG=m
CONFIG_DA9055_WATCHDOG=m
CONFIG_DA9063_WATCHDOG=m
CONFIG_DA9062_WATCHDOG=m
CONFIG_MENF21BMC_WATCHDOG=m
CONFIG_MENZ069_WATCHDOG=m
CONFIG_WDAT_WDT=m
CONFIG_WM831X_WATCHDOG=m
CONFIG_WM8350_WATCHDOG=m
CONFIG_XILINX_WATCHDOG=m
CONFIG_ZIIRAVE_WATCHDOG=m
CONFIG_RAVE_SP_WATCHDOG=m
CONFIG_MLX_WDT=m
CONFIG_CADENCE_WATCHDOG=m
CONFIG_DW_WATCHDOG=m
CONFIG_TWL4030_WATCHDOG=m
CONFIG_MAX63XX_WATCHDOG=m
CONFIG_RETU_WATCHDOG=m
CONFIG_ACQUIRE_WDT=m
CONFIG_ADVANTECH_WDT=m
CONFIG_ADVANTECH_EC_WDT=m
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
CONFIG_EBC_C384_WDT=m
CONFIG_EXAR_WDT=m
CONFIG_F71808E_WDT=m
CONFIG_SP5100_TCO=m
CONFIG_SBC_FITPC2_WATCHDOG=m
CONFIG_EUROTECH_WDT=m
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
CONFIG_WAFER_WDT=m
CONFIG_I6300ESB_WDT=m
CONFIG_IE6XX_WDT=m
CONFIG_ITCO_WDT=m
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=m
CONFIG_IT87_WDT=m
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
CONFIG_KEMPLD_WDT=m
CONFIG_SC1200_WDT=m
CONFIG_PC87413_WDT=m
CONFIG_NV_TCO=m
CONFIG_60XX_WDT=m
CONFIG_CPU5_WDT=m
CONFIG_SMSC_SCH311X_WDT=m
CONFIG_SMSC37B787_WDT=m
CONFIG_TQMX86_WDT=m
CONFIG_VIA_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
CONFIG_SBC_EPX_C3_WATCHDOG=m
CONFIG_INTEL_MEI_WDT=m
CONFIG_NI903X_WDT=m
CONFIG_NIC7018_WDT=m
CONFIG_SIEMENS_SIMATIC_IPC_WDT=m
CONFIG_MEN_A21_WDT=m
CONFIG_XEN_WDT=m

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m

#
# USB-based Watchdog Cards
#
CONFIG_USBPCWATCHDOG=m
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=m
CONFIG_SSB_SPROM=y
CONFIG_SSB_BLOCKIO=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
CONFIG_SSB_PCIHOST=y
CONFIG_SSB_B43_PCI_BRIDGE=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
# CONFIG_SSB_PCMCIAHOST is not set
CONFIG_SSB_SDIOHOST_POSSIBLE=y
CONFIG_SSB_SDIOHOST=y
CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
CONFIG_SSB_DRIVER_PCICORE=y
CONFIG_SSB_DRIVER_GPIO=y
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=m
CONFIG_BCMA_BLOCKIO=y
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
CONFIG_BCMA_HOST_SOC=y
CONFIG_BCMA_DRIVER_PCI=y
CONFIG_BCMA_SFLASH=y
CONFIG_BCMA_DRIVER_GMAC_CMN=y
CONFIG_BCMA_DRIVER_GPIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
CONFIG_MFD_AS3711=y
CONFIG_MFD_SMPRO=m
CONFIG_PMIC_ADP5520=y
CONFIG_MFD_AAT2870_CORE=y
CONFIG_MFD_BCM590XX=m
CONFIG_MFD_BD9571MWV=m
CONFIG_MFD_AXP20X=m
CONFIG_MFD_AXP20X_I2C=m
CONFIG_MFD_CROS_EC_DEV=m
# CONFIG_MFD_CS42L43_I2C is not set
# CONFIG_MFD_CS42L43_SDW is not set
CONFIG_MFD_MADERA=m
CONFIG_MFD_MADERA_I2C=m
CONFIG_MFD_MADERA_SPI=m
CONFIG_MFD_CS47L15=y
CONFIG_MFD_CS47L35=y
CONFIG_MFD_CS47L85=y
CONFIG_MFD_CS47L90=y
CONFIG_MFD_CS47L92=y
CONFIG_PMIC_DA903X=y
CONFIG_PMIC_DA9052=y
CONFIG_MFD_DA9052_SPI=y
CONFIG_MFD_DA9052_I2C=y
CONFIG_MFD_DA9055=y
CONFIG_MFD_DA9062=m
CONFIG_MFD_DA9063=y
CONFIG_MFD_DA9150=m
CONFIG_MFD_DLN2=m
CONFIG_MFD_MC13XXX=m
CONFIG_MFD_MC13XXX_SPI=m
CONFIG_MFD_MC13XXX_I2C=m
CONFIG_MFD_MP2629=m
CONFIG_MFD_INTEL_QUARK_I2C_GPIO=m
CONFIG_LPC_ICH=m
CONFIG_LPC_SCH=m
CONFIG_INTEL_SOC_PMIC=y
CONFIG_INTEL_SOC_PMIC_BXTWC=m
CONFIG_INTEL_SOC_PMIC_CHTWC=y
CONFIG_INTEL_SOC_PMIC_CHTDC_TI=m
CONFIG_INTEL_SOC_PMIC_MRFLD=m
CONFIG_MFD_INTEL_LPSS=m
CONFIG_MFD_INTEL_LPSS_ACPI=m
CONFIG_MFD_INTEL_LPSS_PCI=m
CONFIG_MFD_INTEL_PMC_BXT=m
CONFIG_MFD_IQS62X=m
CONFIG_MFD_JANZ_CMODIO=m
CONFIG_MFD_KEMPLD=m
CONFIG_MFD_88PM800=m
CONFIG_MFD_88PM805=m
CONFIG_MFD_88PM860X=y
CONFIG_MFD_MAX14577=y
# CONFIG_MFD_MAX77541 is not set
CONFIG_MFD_MAX77693=y
CONFIG_MFD_MAX77843=y
CONFIG_MFD_MAX8907=m
CONFIG_MFD_MAX8925=y
CONFIG_MFD_MAX8997=y
CONFIG_MFD_MAX8998=y
CONFIG_MFD_MT6360=m
CONFIG_MFD_MT6370=m
CONFIG_MFD_MT6397=m
CONFIG_MFD_MENF21BMC=m
CONFIG_MFD_OCELOT=m
CONFIG_EZX_PCAP=y
CONFIG_MFD_VIPERBOARD=m
CONFIG_MFD_RETU=m
CONFIG_MFD_PCF50633=m
CONFIG_PCF50633_ADC=m
CONFIG_PCF50633_GPIO=m
CONFIG_MFD_SY7636A=m
CONFIG_MFD_RDC321X=m
CONFIG_MFD_RT4831=m
CONFIG_MFD_RT5033=m
CONFIG_MFD_RT5120=m
CONFIG_MFD_RC5T583=y
CONFIG_MFD_SI476X_CORE=m
CONFIG_MFD_SIMPLE_MFD_I2C=m
CONFIG_MFD_SM501=m
CONFIG_MFD_SM501_GPIO=y
CONFIG_MFD_SKY81452=m
CONFIG_MFD_SYSCON=y
CONFIG_MFD_TI_AM335X_TSCADC=m
CONFIG_MFD_LP3943=m
CONFIG_MFD_LP8788=y
CONFIG_MFD_TI_LMU=m
CONFIG_MFD_PALMAS=y
CONFIG_TPS6105X=m
CONFIG_TPS65010=m
CONFIG_TPS6507X=m
CONFIG_MFD_TPS65086=m
CONFIG_MFD_TPS65090=y
CONFIG_MFD_TI_LP873X=m
CONFIG_MFD_TPS6586X=y
CONFIG_MFD_TPS65910=y
CONFIG_MFD_TPS65912=y
CONFIG_MFD_TPS65912_I2C=y
CONFIG_MFD_TPS65912_SPI=y
# CONFIG_MFD_TPS6594_I2C is not set
# CONFIG_MFD_TPS6594_SPI is not set
CONFIG_TWL4030_CORE=y
CONFIG_MFD_TWL4030_AUDIO=y
CONFIG_TWL6040_CORE=y
CONFIG_MFD_WL1273_CORE=m
CONFIG_MFD_LM3533=m
CONFIG_MFD_TQMX86=m
CONFIG_MFD_VX855=m
CONFIG_MFD_ARIZONA=m
CONFIG_MFD_ARIZONA_I2C=m
CONFIG_MFD_ARIZONA_SPI=m
CONFIG_MFD_CS47L24=y
CONFIG_MFD_WM5102=y
CONFIG_MFD_WM5110=y
CONFIG_MFD_WM8997=y
CONFIG_MFD_WM8998=y
CONFIG_MFD_WM8400=y
CONFIG_MFD_WM831X=y
CONFIG_MFD_WM831X_I2C=y
CONFIG_MFD_WM831X_SPI=y
CONFIG_MFD_WM8350=y
CONFIG_MFD_WM8350_I2C=y
CONFIG_MFD_WM8994=m
CONFIG_MFD_WCD934X=m
CONFIG_MFD_ATC260X=m
CONFIG_MFD_ATC260X_I2C=m
CONFIG_RAVE_SP_CORE=m
# CONFIG_MFD_INTEL_M10_BMC_SPI is not set
# CONFIG_MFD_INTEL_M10_BMC_PMCI is not set
# end of Multifunction device drivers

CONFIG_REGULATOR=y
# CONFIG_REGULATOR_DEBUG is not set
CONFIG_REGULATOR_FIXED_VOLTAGE=m
CONFIG_REGULATOR_VIRTUAL_CONSUMER=m
CONFIG_REGULATOR_USERSPACE_CONSUMER=m
CONFIG_REGULATOR_88PG86X=m
CONFIG_REGULATOR_88PM800=m
CONFIG_REGULATOR_88PM8607=m
CONFIG_REGULATOR_ACT8865=m
CONFIG_REGULATOR_AD5398=m
CONFIG_REGULATOR_AAT2870=m
CONFIG_REGULATOR_ARIZONA_LDO1=m
CONFIG_REGULATOR_ARIZONA_MICSUPP=m
CONFIG_REGULATOR_AS3711=m
CONFIG_REGULATOR_ATC260X=m
# CONFIG_REGULATOR_AW37503 is not set
CONFIG_REGULATOR_AXP20X=m
CONFIG_REGULATOR_BCM590XX=m
CONFIG_REGULATOR_BD9571MWV=m
CONFIG_REGULATOR_DA903X=m
CONFIG_REGULATOR_DA9052=m
CONFIG_REGULATOR_DA9055=m
CONFIG_REGULATOR_DA9062=m
CONFIG_REGULATOR_DA9210=m
CONFIG_REGULATOR_DA9211=m
CONFIG_REGULATOR_FAN53555=m
CONFIG_REGULATOR_GPIO=m
CONFIG_REGULATOR_ISL9305=m
CONFIG_REGULATOR_ISL6271A=m
CONFIG_REGULATOR_LM363X=m
CONFIG_REGULATOR_LP3971=m
CONFIG_REGULATOR_LP3972=m
CONFIG_REGULATOR_LP872X=m
CONFIG_REGULATOR_LP8755=m
CONFIG_REGULATOR_LP8788=m
CONFIG_REGULATOR_LTC3589=m
CONFIG_REGULATOR_LTC3676=m
CONFIG_REGULATOR_MAX14577=m
CONFIG_REGULATOR_MAX1586=m
# CONFIG_REGULATOR_MAX77857 is not set
CONFIG_REGULATOR_MAX8649=m
CONFIG_REGULATOR_MAX8660=m
CONFIG_REGULATOR_MAX8893=m
CONFIG_REGULATOR_MAX8907=m
CONFIG_REGULATOR_MAX8925=m
CONFIG_REGULATOR_MAX8952=m
CONFIG_REGULATOR_MAX8997=m
CONFIG_REGULATOR_MAX8998=m
CONFIG_REGULATOR_MAX20086=m
# CONFIG_REGULATOR_MAX20411 is not set
CONFIG_REGULATOR_MAX77693=m
CONFIG_REGULATOR_MAX77826=m
CONFIG_REGULATOR_MC13XXX_CORE=m
CONFIG_REGULATOR_MC13783=m
CONFIG_REGULATOR_MC13892=m
CONFIG_REGULATOR_MP8859=m
CONFIG_REGULATOR_MT6311=m
CONFIG_REGULATOR_MT6315=m
CONFIG_REGULATOR_MT6323=m
CONFIG_REGULATOR_MT6331=m
CONFIG_REGULATOR_MT6332=m
CONFIG_REGULATOR_MT6357=m
CONFIG_REGULATOR_MT6358=m
CONFIG_REGULATOR_MT6359=m
CONFIG_REGULATOR_MT6360=m
CONFIG_REGULATOR_MT6370=m
CONFIG_REGULATOR_MT6397=m
CONFIG_REGULATOR_PALMAS=m
CONFIG_REGULATOR_PCA9450=m
CONFIG_REGULATOR_PCAP=m
CONFIG_REGULATOR_PCF50633=m
CONFIG_REGULATOR_PV88060=m
CONFIG_REGULATOR_PV88080=m
CONFIG_REGULATOR_PV88090=m
CONFIG_REGULATOR_PWM=m
CONFIG_REGULATOR_QCOM_SPMI=m
CONFIG_REGULATOR_QCOM_USB_VBUS=m
# CONFIG_REGULATOR_RAA215300 is not set
CONFIG_REGULATOR_RC5T583=m
CONFIG_REGULATOR_RT4801=m
# CONFIG_REGULATOR_RT4803 is not set
CONFIG_REGULATOR_RT4831=m
CONFIG_REGULATOR_RT5033=m
CONFIG_REGULATOR_RT5120=m
CONFIG_REGULATOR_RT5190A=m
# CONFIG_REGULATOR_RT5739 is not set
CONFIG_REGULATOR_RT5759=m
CONFIG_REGULATOR_RT6160=m
CONFIG_REGULATOR_RT6190=m
CONFIG_REGULATOR_RT6245=m
CONFIG_REGULATOR_RTQ2134=m
CONFIG_REGULATOR_RTMV20=m
CONFIG_REGULATOR_RTQ6752=m
# CONFIG_REGULATOR_RTQ2208 is not set
CONFIG_REGULATOR_SKY81452=m
CONFIG_REGULATOR_SLG51000=m
CONFIG_REGULATOR_SY7636A=m
CONFIG_REGULATOR_TPS51632=m
CONFIG_REGULATOR_TPS6105X=m
CONFIG_REGULATOR_TPS62360=m
CONFIG_REGULATOR_TPS65023=m
CONFIG_REGULATOR_TPS6507X=m
CONFIG_REGULATOR_TPS65086=m
CONFIG_REGULATOR_TPS65090=m
CONFIG_REGULATOR_TPS65132=m
CONFIG_REGULATOR_TPS6524X=m
CONFIG_REGULATOR_TPS6586X=m
CONFIG_REGULATOR_TPS65910=m
CONFIG_REGULATOR_TPS65912=m
CONFIG_REGULATOR_TPS68470=m
CONFIG_REGULATOR_TWL4030=m
CONFIG_REGULATOR_WM831X=m
CONFIG_REGULATOR_WM8350=m
CONFIG_REGULATOR_WM8400=m
CONFIG_REGULATOR_WM8994=m
CONFIG_REGULATOR_QCOM_LABIBB=m
CONFIG_RC_CORE=m
CONFIG_LIRC=y
CONFIG_RC_MAP=m
CONFIG_RC_DECODERS=y
CONFIG_IR_IMON_DECODER=m
CONFIG_IR_JVC_DECODER=m
CONFIG_IR_MCE_KBD_DECODER=m
CONFIG_IR_NEC_DECODER=m
CONFIG_IR_RC5_DECODER=m
CONFIG_IR_RC6_DECODER=m
CONFIG_IR_RCMM_DECODER=m
CONFIG_IR_SANYO_DECODER=m
CONFIG_IR_SHARP_DECODER=m
CONFIG_IR_SONY_DECODER=m
CONFIG_IR_XMP_DECODER=m
CONFIG_RC_DEVICES=y
CONFIG_IR_ENE=m
CONFIG_IR_FINTEK=m
CONFIG_IR_IGORPLUGUSB=m
CONFIG_IR_IGUANA=m
CONFIG_IR_IMON=m
CONFIG_IR_IMON_RAW=m
CONFIG_IR_ITE_CIR=m
CONFIG_IR_MCEUSB=m
CONFIG_IR_NUVOTON=m
CONFIG_IR_REDRAT3=m
CONFIG_IR_SERIAL=m
CONFIG_IR_SERIAL_TRANSMITTER=y
CONFIG_IR_STREAMZAP=m
CONFIG_IR_TOY=m
CONFIG_IR_TTUSBIR=m
CONFIG_IR_WINBOND_CIR=m
CONFIG_RC_ATI_REMOTE=m
CONFIG_RC_LOOPBACK=m
CONFIG_RC_XBOX_DVD=m
CONFIG_CEC_CORE=m
CONFIG_CEC_NOTIFIER=y
CONFIG_CEC_PIN=y

#
# CEC support
#
CONFIG_MEDIA_CEC_RC=y
# CONFIG_CEC_PIN_ERROR_INJ is not set
CONFIG_MEDIA_CEC_SUPPORT=y
CONFIG_CEC_CH7322=m
CONFIG_CEC_CROS_EC=m
CONFIG_CEC_GPIO=m
CONFIG_CEC_SECO=m
CONFIG_CEC_SECO_RC=y
CONFIG_USB_PULSE8_CEC=m
CONFIG_USB_RAINSHADOW_CEC=m
# end of CEC support

CONFIG_MEDIA_SUPPORT=m
CONFIG_MEDIA_SUPPORT_FILTER=y
CONFIG_MEDIA_SUBDRV_AUTOSELECT=y

#
# Media device types
#
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_RADIO_SUPPORT=y
CONFIG_MEDIA_SDR_SUPPORT=y
CONFIG_MEDIA_PLATFORM_SUPPORT=y
CONFIG_MEDIA_TEST_SUPPORT=y
# end of Media device types

CONFIG_VIDEO_DEV=m
CONFIG_MEDIA_CONTROLLER=y
CONFIG_DVB_CORE=m

#
# Video4Linux options
#
CONFIG_VIDEO_V4L2_I2C=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
CONFIG_VIDEO_TUNER=m
CONFIG_V4L2_MEM2MEM_DEV=m
CONFIG_V4L2_FLASH_LED_CLASS=m
CONFIG_V4L2_FWNODE=m
CONFIG_V4L2_ASYNC=m
CONFIG_V4L2_CCI=m
CONFIG_V4L2_CCI_I2C=m
# end of Video4Linux options

#
# Media controller options
#
CONFIG_MEDIA_CONTROLLER_DVB=y
CONFIG_MEDIA_CONTROLLER_REQUEST_API=y
# end of Media controller options

#
# Digital TV options
#
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
CONFIG_DVB_MAX_ADAPTERS=8
CONFIG_DVB_DYNAMIC_MINORS=y
# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
# CONFIG_DVB_ULE_DEBUG is not set
# end of Digital TV options

#
# Media drivers
#

#
# Drivers filtered as selected at 'Filter media drivers'
#

#
# Media drivers
#
CONFIG_MEDIA_USB_SUPPORT=y

#
# Webcam devices
#
CONFIG_USB_GSPCA=m
CONFIG_USB_GSPCA_BENQ=m
CONFIG_USB_GSPCA_CONEX=m
CONFIG_USB_GSPCA_CPIA1=m
CONFIG_USB_GSPCA_DTCS033=m
CONFIG_USB_GSPCA_ETOMS=m
CONFIG_USB_GSPCA_FINEPIX=m
CONFIG_USB_GSPCA_JEILINJ=m
CONFIG_USB_GSPCA_JL2005BCD=m
CONFIG_USB_GSPCA_KINECT=m
CONFIG_USB_GSPCA_KONICA=m
CONFIG_USB_GSPCA_MARS=m
CONFIG_USB_GSPCA_MR97310A=m
CONFIG_USB_GSPCA_NW80X=m
CONFIG_USB_GSPCA_OV519=m
CONFIG_USB_GSPCA_OV534=m
CONFIG_USB_GSPCA_OV534_9=m
CONFIG_USB_GSPCA_PAC207=m
CONFIG_USB_GSPCA_PAC7302=m
CONFIG_USB_GSPCA_PAC7311=m
CONFIG_USB_GSPCA_SE401=m
CONFIG_USB_GSPCA_SN9C2028=m
CONFIG_USB_GSPCA_SN9C20X=m
CONFIG_USB_GSPCA_SONIXB=m
CONFIG_USB_GSPCA_SONIXJ=m
CONFIG_USB_GSPCA_SPCA1528=m
CONFIG_USB_GSPCA_SPCA500=m
CONFIG_USB_GSPCA_SPCA501=m
CONFIG_USB_GSPCA_SPCA505=m
CONFIG_USB_GSPCA_SPCA506=m
CONFIG_USB_GSPCA_SPCA508=m
CONFIG_USB_GSPCA_SPCA561=m
CONFIG_USB_GSPCA_SQ905=m
CONFIG_USB_GSPCA_SQ905C=m
CONFIG_USB_GSPCA_SQ930X=m
CONFIG_USB_GSPCA_STK014=m
CONFIG_USB_GSPCA_STK1135=m
CONFIG_USB_GSPCA_STV0680=m
CONFIG_USB_GSPCA_SUNPLUS=m
CONFIG_USB_GSPCA_T613=m
CONFIG_USB_GSPCA_TOPRO=m
CONFIG_USB_GSPCA_TOUPTEK=m
CONFIG_USB_GSPCA_TV8532=m
CONFIG_USB_GSPCA_VC032X=m
CONFIG_USB_GSPCA_VICAM=m
CONFIG_USB_GSPCA_XIRLINK_CIT=m
CONFIG_USB_GSPCA_ZC3XX=m
CONFIG_USB_GL860=m
CONFIG_USB_M5602=m
CONFIG_USB_STV06XX=m
CONFIG_USB_PWC=m
# CONFIG_USB_PWC_DEBUG is not set
CONFIG_USB_PWC_INPUT_EVDEV=y
CONFIG_USB_S2255=m
CONFIG_VIDEO_USBTV=m
CONFIG_USB_VIDEO_CLASS=m
CONFIG_USB_VIDEO_CLASS_INPUT_EVDEV=y

#
# Analog TV USB devices
#
CONFIG_VIDEO_GO7007=m
CONFIG_VIDEO_GO7007_USB=m
CONFIG_VIDEO_GO7007_LOADER=m
CONFIG_VIDEO_GO7007_USB_S2250_BOARD=m
CONFIG_VIDEO_HDPVR=m
CONFIG_VIDEO_PVRUSB2=m
CONFIG_VIDEO_PVRUSB2_SYSFS=y
CONFIG_VIDEO_PVRUSB2_DVB=y
# CONFIG_VIDEO_PVRUSB2_DEBUGIFC is not set
CONFIG_VIDEO_STK1160=m

#
# Analog/digital TV USB devices
#
CONFIG_VIDEO_AU0828=m
CONFIG_VIDEO_AU0828_V4L2=y
CONFIG_VIDEO_AU0828_RC=y
CONFIG_VIDEO_CX231XX=m
CONFIG_VIDEO_CX231XX_RC=y
CONFIG_VIDEO_CX231XX_ALSA=m
CONFIG_VIDEO_CX231XX_DVB=m

#
# Digital TV USB devices
#
CONFIG_DVB_AS102=m
CONFIG_DVB_B2C2_FLEXCOP_USB=m
# CONFIG_DVB_B2C2_FLEXCOP_USB_DEBUG is not set
CONFIG_DVB_USB_V2=m
CONFIG_DVB_USB_AF9015=m
CONFIG_DVB_USB_AF9035=m
CONFIG_DVB_USB_ANYSEE=m
CONFIG_DVB_USB_AU6610=m
CONFIG_DVB_USB_AZ6007=m
CONFIG_DVB_USB_CE6230=m
CONFIG_DVB_USB_DVBSKY=m
CONFIG_DVB_USB_EC168=m
CONFIG_DVB_USB_GL861=m
CONFIG_DVB_USB_LME2510=m
CONFIG_DVB_USB_MXL111SF=m
CONFIG_DVB_USB_RTL28XXU=m
CONFIG_DVB_USB_ZD1301=m
CONFIG_DVB_USB=m
# CONFIG_DVB_USB_DEBUG is not set
CONFIG_DVB_USB_A800=m
CONFIG_DVB_USB_AF9005=m
CONFIG_DVB_USB_AF9005_REMOTE=m
CONFIG_DVB_USB_AZ6027=m
CONFIG_DVB_USB_CINERGY_T2=m
CONFIG_DVB_USB_CXUSB=m
CONFIG_DVB_USB_CXUSB_ANALOG=y
CONFIG_DVB_USB_DIB0700=m
CONFIG_DVB_USB_DIB3000MC=m
CONFIG_DVB_USB_DIBUSB_MB=m
# CONFIG_DVB_USB_DIBUSB_MB_FAULTY is not set
CONFIG_DVB_USB_DIBUSB_MC=m
CONFIG_DVB_USB_DIGITV=m
CONFIG_DVB_USB_DTT200U=m
CONFIG_DVB_USB_DTV5100=m
CONFIG_DVB_USB_DW2102=m
CONFIG_DVB_USB_GP8PSK=m
CONFIG_DVB_USB_M920X=m
CONFIG_DVB_USB_NOVA_T_USB2=m
CONFIG_DVB_USB_OPERA1=m
CONFIG_DVB_USB_PCTV452E=m
CONFIG_DVB_USB_TECHNISAT_USB2=m
CONFIG_DVB_USB_TTUSB2=m
CONFIG_DVB_USB_UMT_010=m
CONFIG_DVB_USB_VP702X=m
CONFIG_DVB_USB_VP7045=m
CONFIG_SMS_USB_DRV=m
CONFIG_DVB_TTUSB_BUDGET=m
CONFIG_DVB_TTUSB_DEC=m

#
# Webcam, TV (analog/digital) USB devices
#
CONFIG_VIDEO_EM28XX=m
CONFIG_VIDEO_EM28XX_V4L2=m
CONFIG_VIDEO_EM28XX_ALSA=m
CONFIG_VIDEO_EM28XX_DVB=m
CONFIG_VIDEO_EM28XX_RC=m

#
# Software defined radio USB devices
#
CONFIG_USB_AIRSPY=m
CONFIG_USB_HACKRF=m
CONFIG_USB_MSI2500=m
CONFIG_MEDIA_PCI_SUPPORT=y

#
# Media capture support
#
CONFIG_VIDEO_SOLO6X10=m
CONFIG_VIDEO_TW5864=m
CONFIG_VIDEO_TW68=m
CONFIG_VIDEO_TW686X=m
# CONFIG_VIDEO_ZORAN is not set

#
# Media capture/analog TV support
#
CONFIG_VIDEO_DT3155=m
CONFIG_VIDEO_IVTV=m
CONFIG_VIDEO_IVTV_ALSA=m
CONFIG_VIDEO_FB_IVTV=m
CONFIG_VIDEO_FB_IVTV_FORCE_PAT=y
# CONFIG_VIDEO_HEXIUM_GEMINI is not set
# CONFIG_VIDEO_HEXIUM_ORION is not set
# CONFIG_VIDEO_MXB is not set

#
# Media capture/analog/hybrid TV support
#
CONFIG_VIDEO_BT848=m
CONFIG_DVB_BT8XX=m
CONFIG_VIDEO_COBALT=m
CONFIG_VIDEO_CX18=m
CONFIG_VIDEO_CX18_ALSA=m
CONFIG_VIDEO_CX23885=m
CONFIG_MEDIA_ALTERA_CI=m
CONFIG_VIDEO_CX25821=m
CONFIG_VIDEO_CX25821_ALSA=m
CONFIG_VIDEO_CX88=m
CONFIG_VIDEO_CX88_ALSA=m
CONFIG_VIDEO_CX88_BLACKBIRD=m
CONFIG_VIDEO_CX88_DVB=m
CONFIG_VIDEO_CX88_ENABLE_VP3054=y
CONFIG_VIDEO_CX88_VP3054=m
CONFIG_VIDEO_CX88_MPEG=m
CONFIG_VIDEO_SAA7134=m
CONFIG_VIDEO_SAA7134_ALSA=m
CONFIG_VIDEO_SAA7134_RC=y
CONFIG_VIDEO_SAA7134_DVB=m
CONFIG_VIDEO_SAA7134_GO7007=m
CONFIG_VIDEO_SAA7164=m

#
# Media digital TV PCI Adapters
#
CONFIG_DVB_B2C2_FLEXCOP_PCI=m
# CONFIG_DVB_B2C2_FLEXCOP_PCI_DEBUG is not set
CONFIG_DVB_DDBRIDGE=m
# CONFIG_DVB_DDBRIDGE_MSIENABLE is not set
CONFIG_DVB_DM1105=m
CONFIG_MANTIS_CORE=m
CONFIG_DVB_MANTIS=m
CONFIG_DVB_HOPPER=m
CONFIG_DVB_NETUP_UNIDVB=m
CONFIG_DVB_NGENE=m
CONFIG_DVB_PLUTO2=m
CONFIG_DVB_PT1=m
CONFIG_DVB_PT3=m
CONFIG_DVB_SMIPCIE=m
# CONFIG_DVB_BUDGET_CORE is not set
# CONFIG_VIDEO_PCI_SKELETON is not set
CONFIG_IPU_BRIDGE=m
CONFIG_VIDEO_IPU3_CIO2=m
CONFIG_CIO2_BRIDGE=y
# CONFIG_INTEL_VSC is not set
CONFIG_RADIO_ADAPTERS=m
CONFIG_RADIO_MAXIRADIO=m
CONFIG_RADIO_SAA7706H=m
CONFIG_RADIO_SHARK=m
CONFIG_RADIO_SHARK2=m
CONFIG_RADIO_SI4713=m
CONFIG_RADIO_SI476X=m
CONFIG_RADIO_TEA575X=m
CONFIG_RADIO_TEA5764=m
CONFIG_RADIO_TEF6862=m
CONFIG_RADIO_WL1273=m
CONFIG_USB_DSBR=m
CONFIG_USB_KEENE=m
CONFIG_USB_MA901=m
CONFIG_USB_MR800=m
CONFIG_USB_RAREMONO=m
CONFIG_RADIO_SI470X=m
CONFIG_USB_SI470X=m
CONFIG_I2C_SI470X=m
CONFIG_USB_SI4713=m
CONFIG_PLATFORM_SI4713=m
CONFIG_I2C_SI4713=m
CONFIG_RADIO_WL128X=m
CONFIG_MEDIA_PLATFORM_DRIVERS=y
CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_SDR_PLATFORM_DRIVERS=y
CONFIG_DVB_PLATFORM_DRIVERS=y
CONFIG_V4L_MEM2MEM_DRIVERS=y
CONFIG_VIDEO_MEM2MEM_DEINTERLACE=m

#
# Allegro DVT media platform drivers
#

#
# Amlogic media platform drivers
#

#
# Amphion drivers
#

#
# Aspeed media platform drivers
#

#
# Atmel media platform drivers
#

#
# Cadence media platform drivers
#
CONFIG_VIDEO_CADENCE_CSI2RX=m
CONFIG_VIDEO_CADENCE_CSI2TX=m

#
# Chips&Media media platform drivers
#

#
# Intel media platform drivers
#

#
# Marvell media platform drivers
#
CONFIG_VIDEO_CAFE_CCIC=m

#
# Mediatek media platform drivers
#

#
# Microchip Technology, Inc. media platform drivers
#

#
# NVidia media platform drivers
#

#
# NXP media platform drivers
#

#
# Qualcomm media platform drivers
#

#
# Renesas media platform drivers
#

#
# Rockchip media platform drivers
#

#
# Samsung media platform drivers
#

#
# STMicroelectronics media platform drivers
#

#
# Sunxi media platform drivers
#

#
# Texas Instruments drivers
#

#
# Verisilicon media platform drivers
#

#
# VIA media platform drivers
#
CONFIG_VIDEO_VIA_CAMERA=m

#
# Xilinx media platform drivers
#

#
# MMC/SDIO DVB adapters
#
CONFIG_SMS_SDIO_DRV=m
CONFIG_V4L_TEST_DRIVERS=y
CONFIG_VIDEO_VIM2M=m
CONFIG_VIDEO_VICODEC=m
CONFIG_VIDEO_VIMC=m
CONFIG_VIDEO_VIVID=m
CONFIG_VIDEO_VIVID_CEC=y
CONFIG_VIDEO_VIVID_MAX_DEVS=64
CONFIG_VIDEO_VISL=m
# CONFIG_VISL_DEBUGFS is not set
# CONFIG_DVB_TEST_DRIVERS is not set

#
# FireWire (IEEE 1394) Adapters
#
CONFIG_DVB_FIREDTV=m
CONFIG_DVB_FIREDTV_INPUT=y
CONFIG_MEDIA_COMMON_OPTIONS=y

#
# common driver options
#
CONFIG_CYPRESS_FIRMWARE=m
CONFIG_TTPCI_EEPROM=m
CONFIG_UVC_COMMON=m
CONFIG_VIDEO_CX2341X=m
CONFIG_VIDEO_TVEEPROM=m
CONFIG_DVB_B2C2_FLEXCOP=m
CONFIG_SMS_SIANO_MDTV=m
CONFIG_SMS_SIANO_RC=y
CONFIG_SMS_SIANO_DEBUGFS=y
CONFIG_VIDEO_V4L2_TPG=m
CONFIG_VIDEOBUF2_CORE=m
CONFIG_VIDEOBUF2_V4L2=m
CONFIG_VIDEOBUF2_MEMOPS=m
CONFIG_VIDEOBUF2_DMA_CONTIG=m
CONFIG_VIDEOBUF2_VMALLOC=m
CONFIG_VIDEOBUF2_DMA_SG=m
CONFIG_VIDEOBUF2_DVB=m
# end of Media drivers

#
# Media ancillary drivers
#
CONFIG_MEDIA_ATTACH=y

#
# IR I2C driver auto-selected by 'Autoselect ancillary drivers'
#
CONFIG_VIDEO_IR_I2C=m
CONFIG_VIDEO_CAMERA_SENSOR=y
CONFIG_VIDEO_APTINA_PLL=m
CONFIG_VIDEO_CCS_PLL=m
CONFIG_VIDEO_AR0521=m
CONFIG_VIDEO_HI556=m
CONFIG_VIDEO_HI846=m
CONFIG_VIDEO_HI847=m
CONFIG_VIDEO_IMX208=m
CONFIG_VIDEO_IMX214=m
CONFIG_VIDEO_IMX219=m
CONFIG_VIDEO_IMX258=m
CONFIG_VIDEO_IMX274=m
CONFIG_VIDEO_IMX290=m
# CONFIG_VIDEO_IMX296 is not set
CONFIG_VIDEO_IMX319=m
CONFIG_VIDEO_IMX355=m
CONFIG_VIDEO_MAX9271_LIB=m
CONFIG_VIDEO_MT9M001=m
CONFIG_VIDEO_MT9M111=m
CONFIG_VIDEO_MT9P031=m
CONFIG_VIDEO_MT9T112=m
CONFIG_VIDEO_MT9V011=m
CONFIG_VIDEO_MT9V032=m
CONFIG_VIDEO_MT9V111=m
CONFIG_VIDEO_OG01A1B=m
# CONFIG_VIDEO_OV01A10 is not set
CONFIG_VIDEO_OV02A10=m
CONFIG_VIDEO_OV08D10=m
CONFIG_VIDEO_OV08X40=m
CONFIG_VIDEO_OV13858=m
CONFIG_VIDEO_OV13B10=m
CONFIG_VIDEO_OV2640=m
CONFIG_VIDEO_OV2659=m
CONFIG_VIDEO_OV2680=m
CONFIG_VIDEO_OV2685=m
CONFIG_VIDEO_OV2740=m
CONFIG_VIDEO_OV4689=m
CONFIG_VIDEO_OV5647=m
CONFIG_VIDEO_OV5648=m
CONFIG_VIDEO_OV5670=m
CONFIG_VIDEO_OV5675=m
CONFIG_VIDEO_OV5693=m
CONFIG_VIDEO_OV5695=m
CONFIG_VIDEO_OV6650=m
CONFIG_VIDEO_OV7251=m
CONFIG_VIDEO_OV7640=m
CONFIG_VIDEO_OV7670=m
CONFIG_VIDEO_OV772X=m
CONFIG_VIDEO_OV7740=m
CONFIG_VIDEO_OV8856=m
# CONFIG_VIDEO_OV8858 is not set
CONFIG_VIDEO_OV8865=m
CONFIG_VIDEO_OV9640=m
CONFIG_VIDEO_OV9650=m
CONFIG_VIDEO_OV9734=m
CONFIG_VIDEO_RDACM20=m
CONFIG_VIDEO_RDACM21=m
CONFIG_VIDEO_RJ54N1=m
CONFIG_VIDEO_S5C73M3=m
CONFIG_VIDEO_S5K5BAF=m
CONFIG_VIDEO_S5K6A3=m
CONFIG_VIDEO_CCS=m
CONFIG_VIDEO_ET8EK8=m

#
# Lens drivers
#
CONFIG_VIDEO_AD5820=m
CONFIG_VIDEO_AK7375=m
CONFIG_VIDEO_DW9714=m
# CONFIG_VIDEO_DW9719 is not set
CONFIG_VIDEO_DW9768=m
CONFIG_VIDEO_DW9807_VCM=m
# end of Lens drivers

#
# Flash devices
#
CONFIG_VIDEO_ADP1653=m
CONFIG_VIDEO_LM3560=m
CONFIG_VIDEO_LM3646=m
# end of Flash devices

#
# Audio decoders, processors and mixers
#
CONFIG_VIDEO_CS3308=m
CONFIG_VIDEO_CS5345=m
CONFIG_VIDEO_CS53L32A=m
CONFIG_VIDEO_MSP3400=m
CONFIG_VIDEO_SONY_BTF_MPX=m
CONFIG_VIDEO_TDA1997X=m
CONFIG_VIDEO_TDA7432=m
CONFIG_VIDEO_TDA9840=m
CONFIG_VIDEO_TEA6415C=m
CONFIG_VIDEO_TEA6420=m
CONFIG_VIDEO_TLV320AIC23B=m
CONFIG_VIDEO_TVAUDIO=m
CONFIG_VIDEO_UDA1342=m
CONFIG_VIDEO_VP27SMPX=m
CONFIG_VIDEO_WM8739=m
CONFIG_VIDEO_WM8775=m
# end of Audio decoders, processors and mixers

#
# RDS decoders
#
CONFIG_VIDEO_SAA6588=m
# end of RDS decoders

#
# Video decoders
#
CONFIG_VIDEO_ADV7180=m
CONFIG_VIDEO_ADV7183=m
CONFIG_VIDEO_ADV7604=m
CONFIG_VIDEO_ADV7604_CEC=y
CONFIG_VIDEO_ADV7842=m
CONFIG_VIDEO_ADV7842_CEC=y
CONFIG_VIDEO_BT819=m
CONFIG_VIDEO_BT856=m
CONFIG_VIDEO_BT866=m
CONFIG_VIDEO_KS0127=m
CONFIG_VIDEO_ML86V7667=m
CONFIG_VIDEO_SAA7110=m
CONFIG_VIDEO_SAA711X=m
CONFIG_VIDEO_TC358743=m
CONFIG_VIDEO_TC358743_CEC=y
CONFIG_VIDEO_TC358746=m
CONFIG_VIDEO_TVP514X=m
CONFIG_VIDEO_TVP5150=m
CONFIG_VIDEO_TVP7002=m
CONFIG_VIDEO_TW2804=m
CONFIG_VIDEO_TW9903=m
CONFIG_VIDEO_TW9906=m
CONFIG_VIDEO_TW9910=m
CONFIG_VIDEO_VPX3220=m

#
# Video and audio decoders
#
CONFIG_VIDEO_SAA717X=m
CONFIG_VIDEO_CX25840=m
# end of Video decoders

#
# Video encoders
#
CONFIG_VIDEO_ADV7170=m
CONFIG_VIDEO_ADV7175=m
CONFIG_VIDEO_ADV7343=m
CONFIG_VIDEO_ADV7393=m
CONFIG_VIDEO_ADV7511=m
# CONFIG_VIDEO_ADV7511_CEC is not set
CONFIG_VIDEO_AK881X=m
CONFIG_VIDEO_SAA7127=m
CONFIG_VIDEO_SAA7185=m
CONFIG_VIDEO_THS8200=m
# end of Video encoders

#
# Video improvement chips
#
CONFIG_VIDEO_UPD64031A=m
CONFIG_VIDEO_UPD64083=m
# end of Video improvement chips

#
# Audio/Video compression chips
#
CONFIG_VIDEO_SAA6752HS=m
# end of Audio/Video compression chips

#
# SDR tuner chips
#
CONFIG_SDR_MAX2175=m
# end of SDR tuner chips

#
# Miscellaneous helper chips
#
CONFIG_VIDEO_I2C=m
CONFIG_VIDEO_M52790=m
CONFIG_VIDEO_ST_MIPID02=m
CONFIG_VIDEO_THS7303=m
# end of Miscellaneous helper chips

#
# Video serializers and deserializers
#
# end of Video serializers and deserializers

#
# Media SPI Adapters
#
CONFIG_CXD2880_SPI_DRV=m
CONFIG_VIDEO_GS1662=m
# end of Media SPI Adapters

CONFIG_MEDIA_TUNER=m

#
# Customize TV tuners
#
CONFIG_MEDIA_TUNER_E4000=m
CONFIG_MEDIA_TUNER_FC0011=m
CONFIG_MEDIA_TUNER_FC0012=m
CONFIG_MEDIA_TUNER_FC0013=m
CONFIG_MEDIA_TUNER_FC2580=m
CONFIG_MEDIA_TUNER_IT913X=m
CONFIG_MEDIA_TUNER_M88RS6000T=m
CONFIG_MEDIA_TUNER_MAX2165=m
CONFIG_MEDIA_TUNER_MC44S803=m
CONFIG_MEDIA_TUNER_MSI001=m
CONFIG_MEDIA_TUNER_MT2060=m
CONFIG_MEDIA_TUNER_MT2063=m
CONFIG_MEDIA_TUNER_MT20XX=m
CONFIG_MEDIA_TUNER_MT2131=m
CONFIG_MEDIA_TUNER_MT2266=m
CONFIG_MEDIA_TUNER_MXL301RF=m
CONFIG_MEDIA_TUNER_MXL5005S=m
CONFIG_MEDIA_TUNER_MXL5007T=m
CONFIG_MEDIA_TUNER_QM1D1B0004=m
CONFIG_MEDIA_TUNER_QM1D1C0042=m
CONFIG_MEDIA_TUNER_QT1010=m
CONFIG_MEDIA_TUNER_R820T=m
CONFIG_MEDIA_TUNER_SI2157=m
CONFIG_MEDIA_TUNER_SIMPLE=m
CONFIG_MEDIA_TUNER_TDA18212=m
CONFIG_MEDIA_TUNER_TDA18218=m
CONFIG_MEDIA_TUNER_TDA18250=m
CONFIG_MEDIA_TUNER_TDA18271=m
CONFIG_MEDIA_TUNER_TDA827X=m
CONFIG_MEDIA_TUNER_TDA8290=m
CONFIG_MEDIA_TUNER_TDA9887=m
CONFIG_MEDIA_TUNER_TEA5761=m
CONFIG_MEDIA_TUNER_TEA5767=m
CONFIG_MEDIA_TUNER_TUA9001=m
CONFIG_MEDIA_TUNER_XC2028=m
CONFIG_MEDIA_TUNER_XC4000=m
CONFIG_MEDIA_TUNER_XC5000=m
# end of Customize TV tuners

#
# Customise DVB Frontends
#

#
# Multistandard (satellite) frontends
#
CONFIG_DVB_M88DS3103=m
CONFIG_DVB_MXL5XX=m
CONFIG_DVB_STB0899=m
CONFIG_DVB_STB6100=m
CONFIG_DVB_STV090x=m
CONFIG_DVB_STV0910=m
CONFIG_DVB_STV6110x=m
CONFIG_DVB_STV6111=m

#
# Multistandard (cable + terrestrial) frontends
#
CONFIG_DVB_DRXK=m
CONFIG_DVB_MN88472=m
CONFIG_DVB_MN88473=m
CONFIG_DVB_SI2165=m
CONFIG_DVB_TDA18271C2DD=m

#
# DVB-S (satellite) frontends
#
CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24116=m
CONFIG_DVB_CX24117=m
CONFIG_DVB_CX24120=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_DS3000=m
CONFIG_DVB_MB86A16=m
CONFIG_DVB_MT312=m
CONFIG_DVB_S5H1420=m
CONFIG_DVB_SI21XX=m
CONFIG_DVB_STB6000=m
CONFIG_DVB_STV0288=m
CONFIG_DVB_STV0299=m
CONFIG_DVB_STV0900=m
CONFIG_DVB_STV6110=m
CONFIG_DVB_TDA10071=m
CONFIG_DVB_TDA10086=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_TDA8261=m
CONFIG_DVB_TDA826X=m
CONFIG_DVB_TS2020=m
CONFIG_DVB_TUA6100=m
CONFIG_DVB_TUNER_CX24113=m
CONFIG_DVB_TUNER_ITD1000=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_ZL10036=m
CONFIG_DVB_ZL10039=m

#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_AF9013=m
CONFIG_DVB_AS102_FE=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
CONFIG_DVB_CXD2820R=m
CONFIG_DVB_CXD2841ER=m
CONFIG_DVB_DIB3000MB=m
CONFIG_DVB_DIB3000MC=m
CONFIG_DVB_DIB7000M=m
CONFIG_DVB_DIB7000P=m
CONFIG_DVB_DIB9000=m
CONFIG_DVB_DRXD=m
CONFIG_DVB_EC100=m
CONFIG_DVB_GP8PSK_FE=m
CONFIG_DVB_L64781=m
CONFIG_DVB_MT352=m
CONFIG_DVB_NXT6000=m
CONFIG_DVB_RTL2830=m
CONFIG_DVB_RTL2832=m
CONFIG_DVB_RTL2832_SDR=m
CONFIG_DVB_S5H1432=m
CONFIG_DVB_SI2168=m
CONFIG_DVB_SP887X=m
CONFIG_DVB_STV0367=m
CONFIG_DVB_TDA10048=m
CONFIG_DVB_TDA1004X=m
CONFIG_DVB_ZD1301_DEMOD=m
CONFIG_DVB_ZL10353=m
CONFIG_DVB_CXD2880=m

#
# DVB-C (cable) frontends
#
CONFIG_DVB_STV0297=m
CONFIG_DVB_TDA10021=m
CONFIG_DVB_TDA10023=m
CONFIG_DVB_VES1820=m

#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_AU8522=m
CONFIG_DVB_AU8522_DTV=m
CONFIG_DVB_AU8522_V4L=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LG2160=m
CONFIG_DVB_LGDT3305=m
CONFIG_DVB_LGDT3306A=m
CONFIG_DVB_LGDT330X=m
CONFIG_DVB_MXL692=m
CONFIG_DVB_NXT200X=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_OR51211=m
CONFIG_DVB_S5H1409=m
CONFIG_DVB_S5H1411=m

#
# ISDB-T (terrestrial) frontends
#
CONFIG_DVB_DIB8000=m
CONFIG_DVB_MB86A20S=m
CONFIG_DVB_S921=m

#
# ISDB-S (satellite) & ISDB-T (terrestrial) frontends
#
CONFIG_DVB_MN88443X=m
CONFIG_DVB_TC90522=m

#
# Digital terrestrial only tuners/PLL
#
CONFIG_DVB_PLL=m
CONFIG_DVB_TUNER_DIB0070=m
CONFIG_DVB_TUNER_DIB0090=m

#
# SEC control devices for DVB-S
#
CONFIG_DVB_A8293=m
CONFIG_DVB_AF9033=m
CONFIG_DVB_ASCOT2E=m
CONFIG_DVB_ATBM8830=m
CONFIG_DVB_HELENE=m
CONFIG_DVB_HORUS3A=m
CONFIG_DVB_ISL6405=m
CONFIG_DVB_ISL6421=m
CONFIG_DVB_ISL6423=m
CONFIG_DVB_IX2505V=m
CONFIG_DVB_LGS8GL5=m
CONFIG_DVB_LGS8GXX=m
CONFIG_DVB_LNBH25=m
CONFIG_DVB_LNBH29=m
CONFIG_DVB_LNBP21=m
CONFIG_DVB_LNBP22=m
CONFIG_DVB_M88RS2000=m
CONFIG_DVB_TDA665x=m
CONFIG_DVB_DRX39XYJ=m

#
# Common Interface (EN50221) controller drivers
#
CONFIG_DVB_CXD2099=m
CONFIG_DVB_SP2=m
# end of Customise DVB Frontends

#
# Tools to develop new frontends
#
CONFIG_DVB_DUMMY_FE=m
# end of Media ancillary drivers

#
# Graphics support
#
CONFIG_APERTURE_HELPERS=y
CONFIG_VIDEO_CMDLINE=y
CONFIG_VIDEO_NOMODESET=y
CONFIG_AUXDISPLAY=y
CONFIG_CHARLCD=m
CONFIG_LINEDISP=m
CONFIG_HD44780_COMMON=m
CONFIG_HD44780=m
CONFIG_KS0108=m
CONFIG_KS0108_PORT=0x378
CONFIG_KS0108_DELAY=2
CONFIG_CFAG12864B=m
CONFIG_CFAG12864B_RATE=20
CONFIG_IMG_ASCII_LCD=m
CONFIG_HT16K33=m
CONFIG_LCD2S=m
CONFIG_PARPORT_PANEL=m
CONFIG_PANEL_PARPORT=0
CONFIG_PANEL_PROFILE=5
# CONFIG_PANEL_CHANGE_MESSAGE is not set
# CONFIG_CHARLCD_BL_OFF is not set
# CONFIG_CHARLCD_BL_ON is not set
CONFIG_CHARLCD_BL_FLASH=y
CONFIG_PANEL=m
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=m
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=m
CONFIG_DRM_MIPI_DBI=m
CONFIG_DRM_MIPI_DSI=y
CONFIG_DRM_KMS_HELPER=m
# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
# CONFIG_DRM_DEBUG_MODESET_LOCK is not set
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
CONFIG_DRM_DISPLAY_HELPER=m
CONFIG_DRM_DISPLAY_DP_HELPER=y
CONFIG_DRM_DISPLAY_HDCP_HELPER=y
CONFIG_DRM_DISPLAY_HDMI_HELPER=y
CONFIG_DRM_DP_AUX_CHARDEV=y
CONFIG_DRM_DP_CEC=y
CONFIG_DRM_TTM=m
CONFIG_DRM_EXEC=m
CONFIG_DRM_BUDDY=m
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_DMA_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=m
CONFIG_DRM_SUBALLOC_HELPER=m
CONFIG_DRM_SCHED=m

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
CONFIG_DRM_I2C_NXP_TDA998X=m
CONFIG_DRM_I2C_NXP_TDA9950=m
# end of I2C encoder or helper chips

#
# ARM devices
#
# end of ARM devices

CONFIG_DRM_RADEON=m
# CONFIG_DRM_RADEON_USERPTR is not set
CONFIG_DRM_AMDGPU=m
CONFIG_DRM_AMDGPU_SI=y
CONFIG_DRM_AMDGPU_CIK=y
CONFIG_DRM_AMDGPU_USERPTR=y
# CONFIG_DRM_AMDGPU_WERROR is not set

#
# ACP (Audio CoProcessor) Configuration
#
CONFIG_DRM_AMD_ACP=y
# end of ACP (Audio CoProcessor) Configuration

#
# Display Engine Configuration
#
CONFIG_DRM_AMD_DC=y
CONFIG_DRM_AMD_DC_FP=y
CONFIG_DRM_AMD_DC_SI=y
# CONFIG_DEBUG_KERNEL_DC is not set
CONFIG_DRM_AMD_SECURE_DISPLAY=y
# end of Display Engine Configuration

CONFIG_HSA_AMD=y
CONFIG_HSA_AMD_SVM=y
CONFIG_HSA_AMD_P2P=y
CONFIG_DRM_NOUVEAU=m
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_NOUVEAU_DEBUG_MMU is not set
# CONFIG_NOUVEAU_DEBUG_PUSH is not set
CONFIG_DRM_NOUVEAU_BACKLIGHT=y
# CONFIG_DRM_NOUVEAU_SVM is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_FORCE_PROBE=""
CONFIG_DRM_I915_CAPTURE_ERROR=y
CONFIG_DRM_I915_COMPRESS_ERROR=y
CONFIG_DRM_I915_USERPTR=y
CONFIG_DRM_I915_GVT_KVMGT=m
CONFIG_DRM_I915_PXP=y

#
# drm/i915 Debugging
#
# CONFIG_DRM_I915_WERROR is not set
# CONFIG_DRM_I915_DEBUG is not set
# CONFIG_DRM_I915_DEBUG_MMIO is not set
# CONFIG_DRM_I915_SW_FENCE_DEBUG_OBJECTS is not set
# CONFIG_DRM_I915_SW_FENCE_CHECK_DAG is not set
# CONFIG_DRM_I915_DEBUG_GUC is not set
# CONFIG_DRM_I915_SELFTEST is not set
# CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS is not set
# CONFIG_DRM_I915_DEBUG_VBLANK_EVADE is not set
# CONFIG_DRM_I915_DEBUG_RUNTIME_PM is not set
# end of drm/i915 Debugging

#
# drm/i915 Profile Guided Optimisation
#
CONFIG_DRM_I915_REQUEST_TIMEOUT=20000
CONFIG_DRM_I915_FENCE_TIMEOUT=10000
CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250
CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500
CONFIG_DRM_I915_PREEMPT_TIMEOUT=640
CONFIG_DRM_I915_PREEMPT_TIMEOUT_COMPUTE=7500
CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000
CONFIG_DRM_I915_STOP_TIMEOUT=100
CONFIG_DRM_I915_TIMESLICE_DURATION=1
# end of drm/i915 Profile Guided Optimisation

CONFIG_DRM_I915_GVT=y
CONFIG_DRM_VGEM=m
CONFIG_DRM_VKMS=m
CONFIG_DRM_VMWGFX=m
# CONFIG_DRM_VMWGFX_MKSSTATS is not set
CONFIG_DRM_GMA500=m
CONFIG_DRM_UDL=m
CONFIG_DRM_AST=m
CONFIG_DRM_MGAG200=m
CONFIG_DRM_QXL=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_VIRTIO_GPU_KMS=y
CONFIG_DRM_PANEL=y

#
# Display Panels
#
# CONFIG_DRM_PANEL_AUO_A030JTN01 is not set
# CONFIG_DRM_PANEL_ORISETECH_OTA5601A is not set
CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN=m
CONFIG_DRM_PANEL_WIDECHIPS_WS2401=m
# end of Display Panels

CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y

#
# Display Interface Bridges
#
CONFIG_DRM_ANALOGIX_ANX78XX=m
CONFIG_DRM_ANALOGIX_DP=m
# end of Display Interface Bridges

# CONFIG_DRM_LOONGSON is not set
# CONFIG_DRM_ETNAVIV is not set
CONFIG_DRM_BOCHS=m
CONFIG_DRM_CIRRUS_QEMU=m
CONFIG_DRM_GM12U320=m
CONFIG_DRM_PANEL_MIPI_DBI=m
CONFIG_DRM_SIMPLEDRM=m
CONFIG_TINYDRM_HX8357D=m
CONFIG_TINYDRM_ILI9163=m
CONFIG_TINYDRM_ILI9225=m
CONFIG_TINYDRM_ILI9341=m
CONFIG_TINYDRM_ILI9486=m
CONFIG_TINYDRM_MI0283QT=m
CONFIG_TINYDRM_REPAPER=m
CONFIG_TINYDRM_ST7586=m
CONFIG_TINYDRM_ST7735R=m
CONFIG_DRM_XEN=y
CONFIG_DRM_XEN_FRONTEND=m
CONFIG_DRM_VBOXVIDEO=m
CONFIG_DRM_GUD=m
CONFIG_DRM_SSD130X=m
CONFIG_DRM_SSD130X_I2C=m
CONFIG_DRM_SSD130X_SPI=m
CONFIG_DRM_HYPERV=m
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
CONFIG_DRM_PRIVACY_SCREEN=y

#
# Frame buffer Devices
#
CONFIG_FB=y
CONFIG_FB_HECUBA=m
CONFIG_FB_SVGALIB=m
CONFIG_FB_CIRRUS=m
CONFIG_FB_PM2=m
CONFIG_FB_PM2_FIFO_DISCONNECT=y
CONFIG_FB_CYBER2000=m
CONFIG_FB_CYBER2000_DDC=y
CONFIG_FB_ARC=m
CONFIG_FB_ASILIANT=y
CONFIG_FB_IMSTT=y
CONFIG_FB_VGA16=m
CONFIG_FB_UVESA=m
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
CONFIG_FB_N411=m
CONFIG_FB_HGA=m
CONFIG_FB_OPENCORES=m
CONFIG_FB_S1D13XXX=m
CONFIG_FB_NVIDIA=m
CONFIG_FB_NVIDIA_I2C=y
# CONFIG_FB_NVIDIA_DEBUG is not set
CONFIG_FB_NVIDIA_BACKLIGHT=y
CONFIG_FB_RIVA=m
CONFIG_FB_RIVA_I2C=y
# CONFIG_FB_RIVA_DEBUG is not set
CONFIG_FB_RIVA_BACKLIGHT=y
CONFIG_FB_I740=m
CONFIG_FB_LE80578=m
CONFIG_FB_CARILLO_RANCH=m
CONFIG_FB_INTEL=m
# CONFIG_FB_INTEL_DEBUG is not set
CONFIG_FB_INTEL_I2C=y
CONFIG_FB_MATROX=m
CONFIG_FB_MATROX_MILLENIUM=y
CONFIG_FB_MATROX_MYSTIQUE=y
CONFIG_FB_MATROX_G=y
CONFIG_FB_MATROX_I2C=m
CONFIG_FB_MATROX_MAVEN=m
CONFIG_FB_RADEON=m
CONFIG_FB_RADEON_I2C=y
CONFIG_FB_RADEON_BACKLIGHT=y
# CONFIG_FB_RADEON_DEBUG is not set
CONFIG_FB_ATY128=m
CONFIG_FB_ATY128_BACKLIGHT=y
CONFIG_FB_ATY=m
CONFIG_FB_ATY_CT=y
# CONFIG_FB_ATY_GENERIC_LCD is not set
CONFIG_FB_ATY_GX=y
CONFIG_FB_ATY_BACKLIGHT=y
CONFIG_FB_S3=m
CONFIG_FB_S3_DDC=y
CONFIG_FB_SAVAGE=m
CONFIG_FB_SAVAGE_I2C=y
# CONFIG_FB_SAVAGE_ACCEL is not set
CONFIG_FB_SIS=m
CONFIG_FB_SIS_300=y
CONFIG_FB_SIS_315=y
CONFIG_FB_VIA=m
# CONFIG_FB_VIA_DIRECT_PROCFS is not set
CONFIG_FB_VIA_X_COMPATIBILITY=y
CONFIG_FB_NEOMAGIC=m
CONFIG_FB_KYRO=m
CONFIG_FB_3DFX=m
# CONFIG_FB_3DFX_ACCEL is not set
# CONFIG_FB_3DFX_I2C is not set
CONFIG_FB_VOODOO1=m
CONFIG_FB_VT8623=m
CONFIG_FB_TRIDENT=m
CONFIG_FB_ARK=m
CONFIG_FB_PM3=m
CONFIG_FB_CARMINE=m
CONFIG_FB_CARMINE_DRAM_EVAL=y
# CONFIG_CARMINE_DRAM_CUSTOM is not set
CONFIG_FB_SM501=m
CONFIG_FB_SMSCUFX=m
CONFIG_FB_UDL=m
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
CONFIG_FB_METRONOME=m
CONFIG_FB_MB862XX=m
CONFIG_FB_MB862XX_PCI_GDC=y
CONFIG_FB_MB862XX_I2C=y
CONFIG_FB_HYPERV=m
CONFIG_FB_SIMPLE=m
CONFIG_FB_SSD1307=m
CONFIG_FB_SM712=m
CONFIG_FB_CORE=y
CONFIG_FB_NOTIFY=y
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_DEVICE=y
CONFIG_FB_DDC=m
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=y
CONFIG_FB_SYS_COPYAREA=y
CONFIG_FB_SYS_IMAGEBLIT=y
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=y
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_DMAMEM_HELPERS=y
CONFIG_FB_IOMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS_DEFERRED=y
CONFIG_FB_BACKLIGHT=m
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
# end of Frame buffer Devices

#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
CONFIG_LCD_L4F00242T03=m
CONFIG_LCD_LMS283GF05=m
CONFIG_LCD_LTV350QV=m
CONFIG_LCD_ILI922X=m
CONFIG_LCD_ILI9320=m
CONFIG_LCD_TDO24M=m
CONFIG_LCD_VGG2432A4=m
CONFIG_LCD_PLATFORM=m
CONFIG_LCD_AMS369FG06=m
CONFIG_LCD_LMS501KF03=m
CONFIG_LCD_HX8357=m
CONFIG_LCD_OTM3225A=m
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_KTD253=m
# CONFIG_BACKLIGHT_KTZ8866 is not set
CONFIG_BACKLIGHT_LM3533=m
CONFIG_BACKLIGHT_CARILLO_RANCH=m
CONFIG_BACKLIGHT_PWM=m
CONFIG_BACKLIGHT_DA903X=m
CONFIG_BACKLIGHT_DA9052=m
CONFIG_BACKLIGHT_MAX8925=m
CONFIG_BACKLIGHT_MT6370=m
CONFIG_BACKLIGHT_APPLE=m
CONFIG_BACKLIGHT_QCOM_WLED=m
CONFIG_BACKLIGHT_RT4831=m
CONFIG_BACKLIGHT_SAHARA=m
CONFIG_BACKLIGHT_WM831X=m
CONFIG_BACKLIGHT_ADP5520=m
CONFIG_BACKLIGHT_ADP8860=m
CONFIG_BACKLIGHT_ADP8870=m
CONFIG_BACKLIGHT_88PM860X=m
CONFIG_BACKLIGHT_PCF50633=m
CONFIG_BACKLIGHT_AAT2870=m
CONFIG_BACKLIGHT_LM3630A=m
CONFIG_BACKLIGHT_LM3639=m
CONFIG_BACKLIGHT_LP855X=m
CONFIG_BACKLIGHT_LP8788=m
CONFIG_BACKLIGHT_PANDORA=m
CONFIG_BACKLIGHT_SKY81452=m
CONFIG_BACKLIGHT_AS3711=m
CONFIG_BACKLIGHT_GPIO=m
CONFIG_BACKLIGHT_LV5207LP=m
CONFIG_BACKLIGHT_BD6107=m
CONFIG_BACKLIGHT_ARCXCNN=m
CONFIG_BACKLIGHT_RAVE_SP=m
# end of Backlight & LCD device support

CONFIG_VGASTATE=m
CONFIG_VIDEOMODE_HELPERS=y
CONFIG_HDMI=y

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION is not set
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER=y
# end of Console display driver support

# CONFIG_LOGO is not set
# end of Graphics support

CONFIG_DRM_ACCEL=y
# CONFIG_DRM_ACCEL_HABANALABS is not set
# CONFIG_DRM_ACCEL_IVPU is not set
# CONFIG_DRM_ACCEL_QAIC is not set
CONFIG_SOUND=m
CONFIG_SOUND_OSS_CORE=y
# CONFIG_SOUND_OSS_CORE_PRECLAIM is not set
CONFIG_SND=m
CONFIG_SND_TIMER=m
CONFIG_SND_PCM=m
CONFIG_SND_PCM_ELD=y
CONFIG_SND_PCM_IEC958=y
CONFIG_SND_DMAENGINE_PCM=m
CONFIG_SND_HWDEP=m
CONFIG_SND_SEQ_DEVICE=m
CONFIG_SND_RAWMIDI=m
CONFIG_SND_COMPRESS_OFFLOAD=m
CONFIG_SND_JACK=y
CONFIG_SND_JACK_INPUT_DEV=y
CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=m
# CONFIG_SND_PCM_OSS is not set
CONFIG_SND_PCM_TIMER=y
CONFIG_SND_HRTIMER=m
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_MAX_CARDS=32
CONFIG_SND_SUPPORT_OLD_API=y
CONFIG_SND_PROC_FS=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
# CONFIG_SND_CTL_FAST_LOOKUP is not set
# CONFIG_SND_DEBUG is not set
# CONFIG_SND_CTL_INPUT_VALIDATION is not set
CONFIG_SND_VMASTER=y
CONFIG_SND_DMA_SGBUF=y
CONFIG_SND_CTL_LED=m
CONFIG_SND_SEQUENCER=m
CONFIG_SND_SEQ_DUMMY=m
# CONFIG_SND_SEQUENCER_OSS is not set
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y
CONFIG_SND_SEQ_MIDI_EVENT=m
CONFIG_SND_SEQ_MIDI=m
CONFIG_SND_SEQ_MIDI_EMUL=m
CONFIG_SND_SEQ_VIRMIDI=m
# CONFIG_SND_SEQ_UMP is not set
CONFIG_SND_MPU401_UART=m
CONFIG_SND_OPL3_LIB=m
CONFIG_SND_OPL3_LIB_SEQ=m
CONFIG_SND_VX_LIB=m
CONFIG_SND_AC97_CODEC=m
CONFIG_SND_DRIVERS=y
CONFIG_SND_PCSP=m
CONFIG_SND_DUMMY=m
CONFIG_SND_ALOOP=m
# CONFIG_SND_PCMTEST is not set
CONFIG_SND_VIRMIDI=m
CONFIG_SND_MTPAV=m
CONFIG_SND_MTS64=m
CONFIG_SND_SERIAL_U16550=m
CONFIG_SND_MPU401=m
CONFIG_SND_PORTMAN2X4=m
CONFIG_SND_AC97_POWER_SAVE=y
CONFIG_SND_AC97_POWER_SAVE_DEFAULT=0
CONFIG_SND_SB_COMMON=m
CONFIG_SND_PCI=y
CONFIG_SND_AD1889=m
CONFIG_SND_ALS300=m
CONFIG_SND_ALS4000=m
CONFIG_SND_ALI5451=m
CONFIG_SND_ASIHPI=m
CONFIG_SND_ATIIXP=m
CONFIG_SND_ATIIXP_MODEM=m
CONFIG_SND_AU8810=m
CONFIG_SND_AU8820=m
CONFIG_SND_AU8830=m
CONFIG_SND_AW2=m
CONFIG_SND_AZT3328=m
CONFIG_SND_BT87X=m
# CONFIG_SND_BT87X_OVERCLOCK is not set
CONFIG_SND_CA0106=m
CONFIG_SND_CMIPCI=m
CONFIG_SND_OXYGEN_LIB=m
CONFIG_SND_OXYGEN=m
CONFIG_SND_CS4281=m
CONFIG_SND_CS46XX=m
CONFIG_SND_CS46XX_NEW_DSP=y
CONFIG_SND_CTXFI=m
CONFIG_SND_DARLA20=m
CONFIG_SND_GINA20=m
CONFIG_SND_LAYLA20=m
CONFIG_SND_DARLA24=m
CONFIG_SND_GINA24=m
CONFIG_SND_LAYLA24=m
CONFIG_SND_MONA=m
CONFIG_SND_MIA=m
CONFIG_SND_ECHO3G=m
CONFIG_SND_INDIGO=m
CONFIG_SND_INDIGOIO=m
CONFIG_SND_INDIGODJ=m
CONFIG_SND_INDIGOIOX=m
CONFIG_SND_INDIGODJX=m
CONFIG_SND_EMU10K1=m
CONFIG_SND_EMU10K1_SEQ=m
CONFIG_SND_EMU10K1X=m
CONFIG_SND_ENS1370=m
CONFIG_SND_ENS1371=m
CONFIG_SND_ES1938=m
CONFIG_SND_ES1968=m
CONFIG_SND_ES1968_INPUT=y
CONFIG_SND_ES1968_RADIO=y
CONFIG_SND_FM801=m
CONFIG_SND_FM801_TEA575X_BOOL=y
CONFIG_SND_HDSP=m
CONFIG_SND_HDSPM=m
CONFIG_SND_ICE1712=m
CONFIG_SND_ICE1724=m
CONFIG_SND_INTEL8X0=m
CONFIG_SND_INTEL8X0M=m
CONFIG_SND_KORG1212=m
CONFIG_SND_LOLA=m
CONFIG_SND_LX6464ES=m
CONFIG_SND_MAESTRO3=m
CONFIG_SND_MAESTRO3_INPUT=y
CONFIG_SND_MIXART=m
CONFIG_SND_NM256=m
CONFIG_SND_PCXHR=m
CONFIG_SND_RIPTIDE=m
CONFIG_SND_RME32=m
CONFIG_SND_RME96=m
CONFIG_SND_RME9652=m
CONFIG_SND_SONICVIBES=m
CONFIG_SND_TRIDENT=m
CONFIG_SND_VIA82XX=m
CONFIG_SND_VIA82XX_MODEM=m
CONFIG_SND_VIRTUOSO=m
CONFIG_SND_VX222=m
CONFIG_SND_YMFPCI=m

#
# HD-Audio
#
CONFIG_SND_HDA=m
CONFIG_SND_HDA_GENERIC_LEDS=y
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_RECONFIG=y
CONFIG_SND_HDA_INPUT_BEEP=y
CONFIG_SND_HDA_INPUT_BEEP_MODE=0
CONFIG_SND_HDA_PATCH_LOADER=y
CONFIG_SND_HDA_SCODEC_CS35L41=m
CONFIG_SND_HDA_CS_DSP_CONTROLS=m
CONFIG_SND_HDA_SCODEC_CS35L41_I2C=m
CONFIG_SND_HDA_SCODEC_CS35L41_SPI=m
# CONFIG_SND_HDA_SCODEC_CS35L56_I2C is not set
# CONFIG_SND_HDA_SCODEC_CS35L56_SPI is not set
# CONFIG_SND_HDA_SCODEC_TAS2781_I2C is not set
CONFIG_SND_HDA_CODEC_REALTEK=m
CONFIG_SND_HDA_CODEC_ANALOG=m
CONFIG_SND_HDA_CODEC_SIGMATEL=m
CONFIG_SND_HDA_CODEC_VIA=m
CONFIG_SND_HDA_CODEC_HDMI=m
CONFIG_SND_HDA_CODEC_CIRRUS=m
CONFIG_SND_HDA_CODEC_CS8409=m
CONFIG_SND_HDA_CODEC_CONEXANT=m
CONFIG_SND_HDA_CODEC_CA0110=m
CONFIG_SND_HDA_CODEC_CA0132=m
CONFIG_SND_HDA_CODEC_CA0132_DSP=y
CONFIG_SND_HDA_CODEC_CMEDIA=m
CONFIG_SND_HDA_CODEC_SI3054=m
CONFIG_SND_HDA_GENERIC=m
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=1
CONFIG_SND_HDA_INTEL_HDMI_SILENT_STREAM=y
# CONFIG_SND_HDA_CTL_DEV_ID is not set
# end of HD-Audio

CONFIG_SND_HDA_CORE=m
CONFIG_SND_HDA_DSP_LOADER=y
CONFIG_SND_HDA_COMPONENT=y
CONFIG_SND_HDA_I915=y
CONFIG_SND_HDA_EXT_CORE=m
CONFIG_SND_HDA_PREALLOC_SIZE=0
CONFIG_SND_INTEL_NHLT=y
CONFIG_SND_INTEL_DSP_CONFIG=m
CONFIG_SND_INTEL_SOUNDWIRE_ACPI=m
CONFIG_SND_INTEL_BYT_PREFER_SOF=y
CONFIG_SND_SPI=y
CONFIG_SND_USB=y
CONFIG_SND_USB_AUDIO=m
# CONFIG_SND_USB_AUDIO_MIDI_V2 is not set
CONFIG_SND_USB_AUDIO_USE_MEDIA_CONTROLLER=y
CONFIG_SND_USB_UA101=m
CONFIG_SND_USB_USX2Y=m
CONFIG_SND_USB_CAIAQ=m
CONFIG_SND_USB_CAIAQ_INPUT=y
CONFIG_SND_USB_US122L=m
CONFIG_SND_USB_6FIRE=m
CONFIG_SND_USB_HIFACE=m
CONFIG_SND_BCD2000=m
CONFIG_SND_USB_LINE6=m
CONFIG_SND_USB_POD=m
CONFIG_SND_USB_PODHD=m
CONFIG_SND_USB_TONEPORT=m
CONFIG_SND_USB_VARIAX=m
CONFIG_SND_FIREWIRE=y
CONFIG_SND_FIREWIRE_LIB=m
CONFIG_SND_DICE=m
CONFIG_SND_OXFW=m
CONFIG_SND_ISIGHT=m
CONFIG_SND_FIREWORKS=m
CONFIG_SND_BEBOB=m
CONFIG_SND_FIREWIRE_DIGI00X=m
CONFIG_SND_FIREWIRE_TASCAM=m
CONFIG_SND_FIREWIRE_MOTU=m
CONFIG_SND_FIREFACE=m
CONFIG_SND_PCMCIA=y
CONFIG_SND_VXPOCKET=m
CONFIG_SND_PDAUDIOCF=m
CONFIG_SND_SOC=m
CONFIG_SND_SOC_AC97_BUS=y
CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM=y
CONFIG_SND_SOC_COMPRESS=y
CONFIG_SND_SOC_TOPOLOGY=y
CONFIG_SND_SOC_ACPI=m
CONFIG_SND_SOC_ADI=m
CONFIG_SND_SOC_ADI_AXI_I2S=m
CONFIG_SND_SOC_ADI_AXI_SPDIF=m
CONFIG_SND_SOC_AMD_ACP=m
CONFIG_SND_SOC_AMD_CZ_DA7219MX98357_MACH=m
CONFIG_SND_SOC_AMD_CZ_RT5645_MACH=m
CONFIG_SND_SOC_AMD_ST_ES8336_MACH=m
CONFIG_SND_SOC_AMD_ACP3x=m
CONFIG_SND_SOC_AMD_RV_RT5682_MACH=m
CONFIG_SND_SOC_AMD_RENOIR=m
CONFIG_SND_SOC_AMD_RENOIR_MACH=m
CONFIG_SND_SOC_AMD_ACP5x=m
CONFIG_SND_SOC_AMD_VANGOGH_MACH=m
CONFIG_SND_SOC_AMD_ACP6x=m
CONFIG_SND_SOC_AMD_YC_MACH=m
CONFIG_SND_AMD_ACP_CONFIG=m
CONFIG_SND_SOC_AMD_ACP_COMMON=m
CONFIG_SND_SOC_AMD_ACP_PDM=m
CONFIG_SND_SOC_AMD_ACP_LEGACY_COMMON=m
CONFIG_SND_SOC_AMD_ACP_I2S=m
CONFIG_SND_SOC_AMD_ACP_PCM=m
CONFIG_SND_SOC_AMD_ACP_PCI=m
CONFIG_SND_AMD_ASOC_RENOIR=m
CONFIG_SND_AMD_ASOC_REMBRANDT=m
CONFIG_SND_SOC_AMD_MACH_COMMON=m
CONFIG_SND_SOC_AMD_LEGACY_MACH=m
CONFIG_SND_SOC_AMD_SOF_MACH=m
CONFIG_SND_SOC_AMD_RPL_ACP6x=m
CONFIG_SND_SOC_AMD_PS=m
CONFIG_SND_SOC_AMD_PS_MACH=m
CONFIG_SND_ATMEL_SOC=m
CONFIG_SND_BCM63XX_I2S_WHISTLER=m
CONFIG_SND_DESIGNWARE_I2S=m
CONFIG_SND_DESIGNWARE_PCM=y

#
# SoC Audio for Freescale CPUs
#

#
# Common SoC Audio options for Freescale CPUs:
#
CONFIG_SND_SOC_FSL_ASRC=m
CONFIG_SND_SOC_FSL_SAI=m
CONFIG_SND_SOC_FSL_MQS=m
CONFIG_SND_SOC_FSL_AUDMIX=m
CONFIG_SND_SOC_FSL_SSI=m
CONFIG_SND_SOC_FSL_SPDIF=m
CONFIG_SND_SOC_FSL_ESAI=m
CONFIG_SND_SOC_FSL_MICFIL=m
CONFIG_SND_SOC_FSL_EASRC=m
CONFIG_SND_SOC_FSL_XCVR=m
CONFIG_SND_SOC_FSL_UTILS=m
CONFIG_SND_SOC_FSL_RPMSG=m
CONFIG_SND_SOC_IMX_AUDMUX=m
# end of SoC Audio for Freescale CPUs

# CONFIG_SND_SOC_CHV3_I2S is not set
CONFIG_SND_I2S_HI6210_I2S=m
CONFIG_SND_SOC_IMG=y
CONFIG_SND_SOC_IMG_I2S_IN=m
CONFIG_SND_SOC_IMG_I2S_OUT=m
CONFIG_SND_SOC_IMG_PARALLEL_OUT=m
CONFIG_SND_SOC_IMG_SPDIF_IN=m
CONFIG_SND_SOC_IMG_SPDIF_OUT=m
CONFIG_SND_SOC_IMG_PISTACHIO_INTERNAL_DAC=m
CONFIG_SND_SOC_INTEL_SST_TOPLEVEL=y
CONFIG_SND_SOC_INTEL_SST=m
CONFIG_SND_SOC_INTEL_CATPT=m
CONFIG_SND_SST_ATOM_HIFI2_PLATFORM=m
CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_PCI=m
CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_ACPI=m
# CONFIG_SND_SOC_INTEL_SKYLAKE is not set
CONFIG_SND_SOC_INTEL_SKL=m
CONFIG_SND_SOC_INTEL_APL=m
CONFIG_SND_SOC_INTEL_KBL=m
CONFIG_SND_SOC_INTEL_GLK=m
# CONFIG_SND_SOC_INTEL_CNL is not set
# CONFIG_SND_SOC_INTEL_CFL is not set
# CONFIG_SND_SOC_INTEL_CML_H is not set
# CONFIG_SND_SOC_INTEL_CML_LP is not set
CONFIG_SND_SOC_INTEL_SKYLAKE_FAMILY=m
CONFIG_SND_SOC_INTEL_SKYLAKE_SSP_CLK=m
CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC=y
CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON=m
CONFIG_SND_SOC_ACPI_INTEL_MATCH=m
CONFIG_SND_SOC_INTEL_AVS=m

#
# Intel AVS Machine drivers
#

#
# Available DSP configurations
#
CONFIG_SND_SOC_INTEL_AVS_MACH_DA7219=m
CONFIG_SND_SOC_INTEL_AVS_MACH_DMIC=m
# CONFIG_SND_SOC_INTEL_AVS_MACH_ES8336 is not set
CONFIG_SND_SOC_INTEL_AVS_MACH_HDAUDIO=m
CONFIG_SND_SOC_INTEL_AVS_MACH_I2S_TEST=m
CONFIG_SND_SOC_INTEL_AVS_MACH_MAX98927=m
CONFIG_SND_SOC_INTEL_AVS_MACH_MAX98357A=m
CONFIG_SND_SOC_INTEL_AVS_MACH_MAX98373=m
CONFIG_SND_SOC_INTEL_AVS_MACH_NAU8825=m
CONFIG_SND_SOC_INTEL_AVS_MACH_PROBE=m
CONFIG_SND_SOC_INTEL_AVS_MACH_RT274=m
CONFIG_SND_SOC_INTEL_AVS_MACH_RT286=m
CONFIG_SND_SOC_INTEL_AVS_MACH_RT298=m
# CONFIG_SND_SOC_INTEL_AVS_MACH_RT5663 is not set
CONFIG_SND_SOC_INTEL_AVS_MACH_RT5682=m
CONFIG_SND_SOC_INTEL_AVS_MACH_SSM4567=m
# end of Intel AVS Machine drivers

CONFIG_SND_SOC_INTEL_MACH=y
CONFIG_SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES=y
CONFIG_SND_SOC_INTEL_HDA_DSP_COMMON=m
CONFIG_SND_SOC_INTEL_SOF_MAXIM_COMMON=m
CONFIG_SND_SOC_INTEL_SOF_REALTEK_COMMON=m
CONFIG_SND_SOC_INTEL_SOF_CIRRUS_COMMON=m
CONFIG_SND_SOC_INTEL_HASWELL_MACH=m
CONFIG_SND_SOC_INTEL_BDW_RT5650_MACH=m
CONFIG_SND_SOC_INTEL_BDW_RT5677_MACH=m
CONFIG_SND_SOC_INTEL_BROADWELL_MACH=m
CONFIG_SND_SOC_INTEL_BYTCR_RT5640_MACH=m
CONFIG_SND_SOC_INTEL_BYTCR_RT5651_MACH=m
CONFIG_SND_SOC_INTEL_BYTCR_WM5102_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_RT5672_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_RT5645_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_MAX98090_TI_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_NAU8824_MACH=m
CONFIG_SND_SOC_INTEL_BYT_CHT_CX2072X_MACH=m
CONFIG_SND_SOC_INTEL_BYT_CHT_DA7213_MACH=m
CONFIG_SND_SOC_INTEL_BYT_CHT_ES8316_MACH=m
# CONFIG_SND_SOC_INTEL_BYT_CHT_NOCODEC_MACH is not set
CONFIG_SND_SOC_INTEL_SKL_RT286_MACH=m
CONFIG_SND_SOC_INTEL_SKL_NAU88L25_SSM4567_MACH=m
CONFIG_SND_SOC_INTEL_SKL_NAU88L25_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_DA7219_MAX98357A_GENERIC=m
CONFIG_SND_SOC_INTEL_BXT_DA7219_MAX98357A_COMMON=m
CONFIG_SND_SOC_INTEL_BXT_DA7219_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_BXT_RT298_MACH=m
CONFIG_SND_SOC_INTEL_SOF_WM8804_MACH=m
CONFIG_SND_SOC_INTEL_KBL_RT5663_MAX98927_MACH=m
CONFIG_SND_SOC_INTEL_KBL_RT5663_RT5514_MAX98927_MACH=m
CONFIG_SND_SOC_INTEL_KBL_DA7219_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_KBL_DA7219_MAX98927_MACH=m
CONFIG_SND_SOC_INTEL_KBL_RT5660_MACH=m
CONFIG_SND_SOC_INTEL_GLK_DA7219_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_GLK_RT5682_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH=m
CONFIG_SND_SOC_INTEL_SOF_RT5682_MACH=m
CONFIG_SND_SOC_INTEL_SOF_CS42L42_MACH=m
CONFIG_SND_SOC_INTEL_SOF_PCM512x_MACH=m
CONFIG_SND_SOC_INTEL_SOF_ES8336_MACH=m
CONFIG_SND_SOC_INTEL_SOF_NAU8825_MACH=m
CONFIG_SND_SOC_INTEL_CML_LP_DA7219_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_SOF_CML_RT1011_RT5682_MACH=m
CONFIG_SND_SOC_INTEL_SOF_DA7219_MAX98373_MACH=m
CONFIG_SND_SOC_INTEL_SOF_SSP_AMP_MACH=m
CONFIG_SND_SOC_INTEL_EHL_RT5660_MACH=m
CONFIG_SND_SOC_INTEL_SOUNDWIRE_SOF_MACH=m
CONFIG_SND_SOC_MTK_BTCVSD=m
CONFIG_SND_SOC_SOF_TOPLEVEL=y
CONFIG_SND_SOC_SOF_PCI_DEV=m
CONFIG_SND_SOC_SOF_PCI=m
CONFIG_SND_SOC_SOF_ACPI=m
CONFIG_SND_SOC_SOF_ACPI_DEV=m
CONFIG_SND_SOC_SOF_DEBUG_PROBES=m
CONFIG_SND_SOC_SOF_CLIENT=m
# CONFIG_SND_SOC_SOF_DEVELOPER_SUPPORT is not set
CONFIG_SND_SOC_SOF=m
CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE=y
CONFIG_SND_SOC_SOF_IPC3=y
CONFIG_SND_SOC_SOF_INTEL_IPC4=y
CONFIG_SND_SOC_SOF_AMD_TOPLEVEL=m
CONFIG_SND_SOC_SOF_AMD_COMMON=m
CONFIG_SND_SOC_SOF_AMD_RENOIR=m
# CONFIG_SND_SOC_SOF_AMD_VANGOGH is not set
CONFIG_SND_SOC_SOF_AMD_REMBRANDT=m
CONFIG_SND_SOC_SOF_ACP_PROBES=m
CONFIG_SND_SOC_SOF_INTEL_TOPLEVEL=y
CONFIG_SND_SOC_SOF_INTEL_HIFI_EP_IPC=m
CONFIG_SND_SOC_SOF_INTEL_ATOM_HIFI_EP=m
CONFIG_SND_SOC_SOF_INTEL_COMMON=m
CONFIG_SND_SOC_SOF_BAYTRAIL=m
CONFIG_SND_SOC_SOF_BROADWELL=m
CONFIG_SND_SOC_SOF_MERRIFIELD=m
CONFIG_SND_SOC_SOF_INTEL_SKL=m
CONFIG_SND_SOC_SOF_SKYLAKE=m
CONFIG_SND_SOC_SOF_KABYLAKE=m
CONFIG_SND_SOC_SOF_INTEL_APL=m
CONFIG_SND_SOC_SOF_APOLLOLAKE=m
CONFIG_SND_SOC_SOF_GEMINILAKE=m
CONFIG_SND_SOC_SOF_INTEL_CNL=m
CONFIG_SND_SOC_SOF_CANNONLAKE=m
CONFIG_SND_SOC_SOF_COFFEELAKE=m
CONFIG_SND_SOC_SOF_COMETLAKE=m
CONFIG_SND_SOC_SOF_INTEL_ICL=m
CONFIG_SND_SOC_SOF_ICELAKE=m
CONFIG_SND_SOC_SOF_JASPERLAKE=m
CONFIG_SND_SOC_SOF_INTEL_TGL=m
CONFIG_SND_SOC_SOF_TIGERLAKE=m
CONFIG_SND_SOC_SOF_ELKHARTLAKE=m
CONFIG_SND_SOC_SOF_ALDERLAKE=m
CONFIG_SND_SOC_SOF_INTEL_MTL=m
CONFIG_SND_SOC_SOF_METEORLAKE=m
CONFIG_SND_SOC_SOF_INTEL_LNL=m
CONFIG_SND_SOC_SOF_LUNARLAKE=m
CONFIG_SND_SOC_SOF_HDA_COMMON=m
CONFIG_SND_SOC_SOF_HDA_MLINK=m
CONFIG_SND_SOC_SOF_HDA_LINK=y
CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC=y
CONFIG_SND_SOC_SOF_HDA_LINK_BASELINE=m
CONFIG_SND_SOC_SOF_HDA=m
CONFIG_SND_SOC_SOF_HDA_PROBES=m
CONFIG_SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE=m
CONFIG_SND_SOC_SOF_INTEL_SOUNDWIRE=m
CONFIG_SND_SOC_SOF_XTENSA=m

#
# STMicroelectronics STM32 SOC audio support
#
# end of STMicroelectronics STM32 SOC audio support

CONFIG_SND_SOC_XILINX_I2S=m
CONFIG_SND_SOC_XILINX_AUDIO_FORMATTER=m
CONFIG_SND_SOC_XILINX_SPDIF=m
CONFIG_SND_SOC_XTFPGA_I2S=m
CONFIG_SND_SOC_I2C_AND_SPI=m

#
# CODEC drivers
#
CONFIG_SND_SOC_ARIZONA=m
CONFIG_SND_SOC_WM_ADSP=m
CONFIG_SND_SOC_AC97_CODEC=m
CONFIG_SND_SOC_ADAU_UTILS=m
CONFIG_SND_SOC_ADAU1372=m
CONFIG_SND_SOC_ADAU1372_I2C=m
CONFIG_SND_SOC_ADAU1372_SPI=m
CONFIG_SND_SOC_ADAU1701=m
CONFIG_SND_SOC_ADAU17X1=m
CONFIG_SND_SOC_ADAU1761=m
CONFIG_SND_SOC_ADAU1761_I2C=m
CONFIG_SND_SOC_ADAU1761_SPI=m
CONFIG_SND_SOC_ADAU7002=m
CONFIG_SND_SOC_ADAU7118=m
CONFIG_SND_SOC_ADAU7118_HW=m
CONFIG_SND_SOC_ADAU7118_I2C=m
CONFIG_SND_SOC_AK4104=m
CONFIG_SND_SOC_AK4118=m
CONFIG_SND_SOC_AK4375=m
CONFIG_SND_SOC_AK4458=m
CONFIG_SND_SOC_AK4554=m
CONFIG_SND_SOC_AK4613=m
CONFIG_SND_SOC_AK4642=m
CONFIG_SND_SOC_AK5386=m
CONFIG_SND_SOC_AK5558=m
CONFIG_SND_SOC_ALC5623=m
# CONFIG_SND_SOC_AUDIO_IIO_AUX is not set
CONFIG_SND_SOC_AW8738=m
# CONFIG_SND_SOC_AW88395 is not set
# CONFIG_SND_SOC_AW88261 is not set
CONFIG_SND_SOC_BD28623=m
CONFIG_SND_SOC_BT_SCO=m
# CONFIG_SND_SOC_CHV3_CODEC is not set
CONFIG_SND_SOC_CROS_EC_CODEC=m
CONFIG_SND_SOC_CS35L32=m
CONFIG_SND_SOC_CS35L33=m
CONFIG_SND_SOC_CS35L34=m
CONFIG_SND_SOC_CS35L35=m
CONFIG_SND_SOC_CS35L36=m
CONFIG_SND_SOC_CS35L41_LIB=m
CONFIG_SND_SOC_CS35L41=m
CONFIG_SND_SOC_CS35L41_SPI=m
CONFIG_SND_SOC_CS35L41_I2C=m
CONFIG_SND_SOC_CS35L45=m
CONFIG_SND_SOC_CS35L45_SPI=m
CONFIG_SND_SOC_CS35L45_I2C=m
CONFIG_SND_SOC_CS35L56=m
CONFIG_SND_SOC_CS35L56_SHARED=m
# CONFIG_SND_SOC_CS35L56_I2C is not set
# CONFIG_SND_SOC_CS35L56_SPI is not set
CONFIG_SND_SOC_CS35L56_SDW=m
CONFIG_SND_SOC_CS42L42_CORE=m
CONFIG_SND_SOC_CS42L42=m
CONFIG_SND_SOC_CS42L42_SDW=m
CONFIG_SND_SOC_CS42L51=m
CONFIG_SND_SOC_CS42L51_I2C=m
CONFIG_SND_SOC_CS42L52=m
CONFIG_SND_SOC_CS42L56=m
CONFIG_SND_SOC_CS42L73=m
CONFIG_SND_SOC_CS42L83=m
CONFIG_SND_SOC_CS4234=m
CONFIG_SND_SOC_CS4265=m
CONFIG_SND_SOC_CS4270=m
CONFIG_SND_SOC_CS4271=m
CONFIG_SND_SOC_CS4271_I2C=m
CONFIG_SND_SOC_CS4271_SPI=m
CONFIG_SND_SOC_CS42XX8=m
CONFIG_SND_SOC_CS42XX8_I2C=m
CONFIG_SND_SOC_CS43130=m
CONFIG_SND_SOC_CS4341=m
CONFIG_SND_SOC_CS4349=m
CONFIG_SND_SOC_CS53L30=m
CONFIG_SND_SOC_CX2072X=m
CONFIG_SND_SOC_DA7213=m
CONFIG_SND_SOC_DA7219=m
CONFIG_SND_SOC_DMIC=m
CONFIG_SND_SOC_HDMI_CODEC=m
CONFIG_SND_SOC_ES7134=m
CONFIG_SND_SOC_ES7241=m
CONFIG_SND_SOC_ES8316=m
CONFIG_SND_SOC_ES8326=m
CONFIG_SND_SOC_ES8328=m
CONFIG_SND_SOC_ES8328_I2C=m
CONFIG_SND_SOC_ES8328_SPI=m
CONFIG_SND_SOC_GTM601=m
CONFIG_SND_SOC_HDAC_HDMI=m
CONFIG_SND_SOC_HDAC_HDA=m
CONFIG_SND_SOC_HDA=m
CONFIG_SND_SOC_ICS43432=m
# CONFIG_SND_SOC_IDT821034 is not set
CONFIG_SND_SOC_INNO_RK3036=m
CONFIG_SND_SOC_MAX98088=m
CONFIG_SND_SOC_MAX98090=m
CONFIG_SND_SOC_MAX98357A=m
CONFIG_SND_SOC_MAX98504=m
CONFIG_SND_SOC_MAX9867=m
CONFIG_SND_SOC_MAX98927=m
CONFIG_SND_SOC_MAX98520=m
CONFIG_SND_SOC_MAX98363=m
CONFIG_SND_SOC_MAX98373=m
CONFIG_SND_SOC_MAX98373_I2C=m
CONFIG_SND_SOC_MAX98373_SDW=m
CONFIG_SND_SOC_MAX98388=m
CONFIG_SND_SOC_MAX98390=m
CONFIG_SND_SOC_MAX98396=m
CONFIG_SND_SOC_MAX9860=m
CONFIG_SND_SOC_MSM8916_WCD_ANALOG=m
CONFIG_SND_SOC_MSM8916_WCD_DIGITAL=m
CONFIG_SND_SOC_PCM1681=m
CONFIG_SND_SOC_PCM1789=m
CONFIG_SND_SOC_PCM1789_I2C=m
CONFIG_SND_SOC_PCM179X=m
CONFIG_SND_SOC_PCM179X_I2C=m
CONFIG_SND_SOC_PCM179X_SPI=m
CONFIG_SND_SOC_PCM186X=m
CONFIG_SND_SOC_PCM186X_I2C=m
CONFIG_SND_SOC_PCM186X_SPI=m
CONFIG_SND_SOC_PCM3060=m
CONFIG_SND_SOC_PCM3060_I2C=m
CONFIG_SND_SOC_PCM3060_SPI=m
CONFIG_SND_SOC_PCM3168A=m
CONFIG_SND_SOC_PCM3168A_I2C=m
CONFIG_SND_SOC_PCM3168A_SPI=m
CONFIG_SND_SOC_PCM5102A=m
CONFIG_SND_SOC_PCM512x=m
CONFIG_SND_SOC_PCM512x_I2C=m
CONFIG_SND_SOC_PCM512x_SPI=m
# CONFIG_SND_SOC_PEB2466 is not set
CONFIG_SND_SOC_RK3328=m
CONFIG_SND_SOC_RL6231=m
CONFIG_SND_SOC_RL6347A=m
CONFIG_SND_SOC_RT274=m
CONFIG_SND_SOC_RT286=m
CONFIG_SND_SOC_RT298=m
CONFIG_SND_SOC_RT1011=m
CONFIG_SND_SOC_RT1015=m
CONFIG_SND_SOC_RT1015P=m
# CONFIG_SND_SOC_RT1017_SDCA_SDW is not set
CONFIG_SND_SOC_RT1019=m
CONFIG_SND_SOC_RT1308=m
CONFIG_SND_SOC_RT1308_SDW=m
CONFIG_SND_SOC_RT1316_SDW=m
CONFIG_SND_SOC_RT1318_SDW=m
CONFIG_SND_SOC_RT5514=m
CONFIG_SND_SOC_RT5514_SPI=m
CONFIG_SND_SOC_RT5616=m
CONFIG_SND_SOC_RT5631=m
CONFIG_SND_SOC_RT5640=m
CONFIG_SND_SOC_RT5645=m
CONFIG_SND_SOC_RT5651=m
CONFIG_SND_SOC_RT5659=m
CONFIG_SND_SOC_RT5660=m
CONFIG_SND_SOC_RT5663=m
CONFIG_SND_SOC_RT5670=m
CONFIG_SND_SOC_RT5677=m
CONFIG_SND_SOC_RT5677_SPI=m
CONFIG_SND_SOC_RT5682=m
CONFIG_SND_SOC_RT5682_I2C=m
CONFIG_SND_SOC_RT5682_SDW=m
CONFIG_SND_SOC_RT5682S=m
CONFIG_SND_SOC_RT700=m
CONFIG_SND_SOC_RT700_SDW=m
CONFIG_SND_SOC_RT711=m
CONFIG_SND_SOC_RT711_SDW=m
CONFIG_SND_SOC_RT711_SDCA_SDW=m
CONFIG_SND_SOC_RT712_SDCA_SDW=m
CONFIG_SND_SOC_RT712_SDCA_DMIC_SDW=m
# CONFIG_SND_SOC_RT722_SDCA_SDW is not set
CONFIG_SND_SOC_RT715=m
CONFIG_SND_SOC_RT715_SDW=m
CONFIG_SND_SOC_RT715_SDCA_SDW=m
CONFIG_SND_SOC_RT9120=m
CONFIG_SND_SOC_SDW_MOCKUP=m
CONFIG_SND_SOC_SGTL5000=m
CONFIG_SND_SOC_SI476X=m
CONFIG_SND_SOC_SIGMADSP=m
CONFIG_SND_SOC_SIGMADSP_I2C=m
CONFIG_SND_SOC_SIGMADSP_REGMAP=m
CONFIG_SND_SOC_SIMPLE_AMPLIFIER=m
CONFIG_SND_SOC_SIMPLE_MUX=m
# CONFIG_SND_SOC_SMA1303 is not set
CONFIG_SND_SOC_SPDIF=m
CONFIG_SND_SOC_SRC4XXX_I2C=m
CONFIG_SND_SOC_SRC4XXX=m
CONFIG_SND_SOC_SSM2305=m
CONFIG_SND_SOC_SSM2518=m
CONFIG_SND_SOC_SSM2602=m
CONFIG_SND_SOC_SSM2602_SPI=m
CONFIG_SND_SOC_SSM2602_I2C=m
CONFIG_SND_SOC_SSM4567=m
CONFIG_SND_SOC_STA32X=m
CONFIG_SND_SOC_STA350=m
CONFIG_SND_SOC_STI_SAS=m
CONFIG_SND_SOC_TAS2552=m
CONFIG_SND_SOC_TAS2562=m
CONFIG_SND_SOC_TAS2764=m
CONFIG_SND_SOC_TAS2770=m
CONFIG_SND_SOC_TAS2780=m
# CONFIG_SND_SOC_TAS2781_I2C is not set
CONFIG_SND_SOC_TAS5086=m
CONFIG_SND_SOC_TAS571X=m
CONFIG_SND_SOC_TAS5720=m
CONFIG_SND_SOC_TAS5805M=m
CONFIG_SND_SOC_TAS6424=m
CONFIG_SND_SOC_TDA7419=m
CONFIG_SND_SOC_TFA9879=m
CONFIG_SND_SOC_TFA989X=m
CONFIG_SND_SOC_TLV320ADC3XXX=m
CONFIG_SND_SOC_TLV320AIC23=m
CONFIG_SND_SOC_TLV320AIC23_I2C=m
CONFIG_SND_SOC_TLV320AIC23_SPI=m
CONFIG_SND_SOC_TLV320AIC31XX=m
CONFIG_SND_SOC_TLV320AIC32X4=m
CONFIG_SND_SOC_TLV320AIC32X4_I2C=m
CONFIG_SND_SOC_TLV320AIC32X4_SPI=m
CONFIG_SND_SOC_TLV320AIC3X=m
CONFIG_SND_SOC_TLV320AIC3X_I2C=m
CONFIG_SND_SOC_TLV320AIC3X_SPI=m
CONFIG_SND_SOC_TLV320ADCX140=m
CONFIG_SND_SOC_TS3A227E=m
CONFIG_SND_SOC_TSCS42XX=m
CONFIG_SND_SOC_TSCS454=m
CONFIG_SND_SOC_UDA1334=m
CONFIG_SND_SOC_WCD_CLASSH=m
CONFIG_SND_SOC_WCD9335=m
CONFIG_SND_SOC_WCD_MBHC=m
CONFIG_SND_SOC_WCD934X=m
CONFIG_SND_SOC_WCD938X=m
CONFIG_SND_SOC_WCD938X_SDW=m
CONFIG_SND_SOC_WM5102=m
CONFIG_SND_SOC_WM8510=m
CONFIG_SND_SOC_WM8523=m
CONFIG_SND_SOC_WM8524=m
CONFIG_SND_SOC_WM8580=m
CONFIG_SND_SOC_WM8711=m
CONFIG_SND_SOC_WM8728=m
CONFIG_SND_SOC_WM8731=m
CONFIG_SND_SOC_WM8731_I2C=m
CONFIG_SND_SOC_WM8731_SPI=m
CONFIG_SND_SOC_WM8737=m
CONFIG_SND_SOC_WM8741=m
CONFIG_SND_SOC_WM8750=m
CONFIG_SND_SOC_WM8753=m
CONFIG_SND_SOC_WM8770=m
CONFIG_SND_SOC_WM8776=m
CONFIG_SND_SOC_WM8782=m
CONFIG_SND_SOC_WM8804=m
CONFIG_SND_SOC_WM8804_I2C=m
CONFIG_SND_SOC_WM8804_SPI=m
CONFIG_SND_SOC_WM8903=m
CONFIG_SND_SOC_WM8904=m
CONFIG_SND_SOC_WM8940=m
CONFIG_SND_SOC_WM8960=m
CONFIG_SND_SOC_WM8961=m
CONFIG_SND_SOC_WM8962=m
CONFIG_SND_SOC_WM8974=m
CONFIG_SND_SOC_WM8978=m
CONFIG_SND_SOC_WM8985=m
CONFIG_SND_SOC_WSA881X=m
CONFIG_SND_SOC_WSA883X=m
# CONFIG_SND_SOC_WSA884X is not set
CONFIG_SND_SOC_ZL38060=m
CONFIG_SND_SOC_MAX9759=m
CONFIG_SND_SOC_MT6351=m
CONFIG_SND_SOC_MT6358=m
CONFIG_SND_SOC_MT6660=m
CONFIG_SND_SOC_NAU8315=m
CONFIG_SND_SOC_NAU8540=m
CONFIG_SND_SOC_NAU8810=m
CONFIG_SND_SOC_NAU8821=m
CONFIG_SND_SOC_NAU8822=m
CONFIG_SND_SOC_NAU8824=m
CONFIG_SND_SOC_NAU8825=m
CONFIG_SND_SOC_TPA6130A2=m
CONFIG_SND_SOC_LPASS_MACRO_COMMON=m
CONFIG_SND_SOC_LPASS_WSA_MACRO=m
CONFIG_SND_SOC_LPASS_VA_MACRO=m
CONFIG_SND_SOC_LPASS_RX_MACRO=m
CONFIG_SND_SOC_LPASS_TX_MACRO=m
# end of CODEC drivers

CONFIG_SND_SIMPLE_CARD_UTILS=m
CONFIG_SND_SIMPLE_CARD=m
CONFIG_SND_X86=y
CONFIG_HDMI_LPE_AUDIO=m
CONFIG_SND_SYNTH_EMUX=m
CONFIG_SND_XEN_FRONTEND=m
CONFIG_SND_VIRTIO=m
CONFIG_AC97_BUS=m
CONFIG_HID_SUPPORT=y
CONFIG_HID=m
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
CONFIG_HID_ACCUTOUCH=m
CONFIG_HID_ACRUX=m
CONFIG_HID_ACRUX_FF=y
CONFIG_HID_APPLE=m
CONFIG_HID_APPLEIR=m
CONFIG_HID_ASUS=m
CONFIG_HID_AUREAL=m
CONFIG_HID_BELKIN=m
CONFIG_HID_BETOP_FF=m
CONFIG_HID_BIGBEN_FF=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CORSAIR=m
CONFIG_HID_COUGAR=m
CONFIG_HID_MACALLY=m
CONFIG_HID_PRODIKEYS=m
CONFIG_HID_CMEDIA=m
CONFIG_HID_CP2112=m
CONFIG_HID_CREATIVE_SB0540=m
CONFIG_HID_CYPRESS=m
CONFIG_HID_DRAGONRISE=m
CONFIG_DRAGONRISE_FF=y
CONFIG_HID_EMS_FF=m
CONFIG_HID_ELAN=m
CONFIG_HID_ELECOM=m
CONFIG_HID_ELO=m
# CONFIG_HID_EVISION is not set
CONFIG_HID_EZKEY=m
CONFIG_HID_FT260=m
CONFIG_HID_GEMBIRD=m
CONFIG_HID_GFRM=m
CONFIG_HID_GLORIOUS=m
CONFIG_HID_HOLTEK=m
CONFIG_HOLTEK_FF=y
CONFIG_HID_VIVALDI_COMMON=m
CONFIG_HID_GOOGLE_HAMMER=m
# CONFIG_HID_GOOGLE_STADIA_FF is not set
CONFIG_HID_VIVALDI=m
CONFIG_HID_GT683R=m
CONFIG_HID_KEYTOUCH=m
CONFIG_HID_KYE=m
CONFIG_HID_UCLOGIC=m
CONFIG_HID_WALTOP=m
CONFIG_HID_VIEWSONIC=m
CONFIG_HID_VRC2=m
CONFIG_HID_XIAOMI=m
CONFIG_HID_GYRATION=m
CONFIG_HID_ICADE=m
CONFIG_HID_ITE=m
CONFIG_HID_JABRA=m
CONFIG_HID_TWINHAN=m
CONFIG_HID_KENSINGTON=m
CONFIG_HID_LCPOWER=m
CONFIG_HID_LED=m
CONFIG_HID_LENOVO=m
CONFIG_HID_LETSKETCH=m
CONFIG_HID_LOGITECH=m
CONFIG_HID_LOGITECH_DJ=m
CONFIG_HID_LOGITECH_HIDPP=m
CONFIG_LOGITECH_FF=y
CONFIG_LOGIRUMBLEPAD2_FF=y
CONFIG_LOGIG940_FF=y
CONFIG_LOGIWHEELS_FF=y
CONFIG_HID_MAGICMOUSE=m
CONFIG_HID_MALTRON=m
CONFIG_HID_MAYFLASH=m
CONFIG_HID_MEGAWORLD_FF=m
CONFIG_HID_REDRAGON=m
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
CONFIG_HID_MULTITOUCH=m
CONFIG_HID_NINTENDO=m
CONFIG_NINTENDO_FF=y
CONFIG_HID_NTI=m
CONFIG_HID_NTRIG=m
# CONFIG_HID_NVIDIA_SHIELD is not set
CONFIG_HID_ORTEK=m
CONFIG_HID_PANTHERLORD=m
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PENMOUNT=m
CONFIG_HID_PETALYNX=m
CONFIG_HID_PICOLCD=m
CONFIG_HID_PICOLCD_FB=y
CONFIG_HID_PICOLCD_BACKLIGHT=y
CONFIG_HID_PICOLCD_LCD=y
CONFIG_HID_PICOLCD_LEDS=y
CONFIG_HID_PICOLCD_CIR=y
CONFIG_HID_PLANTRONICS=m
CONFIG_HID_PLAYSTATION=m
CONFIG_PLAYSTATION_FF=y
CONFIG_HID_PXRC=m
CONFIG_HID_RAZER=m
CONFIG_HID_PRIMAX=m
CONFIG_HID_RETRODE=m
CONFIG_HID_ROCCAT=m
CONFIG_HID_SAITEK=m
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SEMITEK=m
CONFIG_HID_SIGMAMICRO=m
CONFIG_HID_SONY=m
CONFIG_SONY_FF=y
CONFIG_HID_SPEEDLINK=m
CONFIG_HID_STEAM=m
# CONFIG_STEAM_FF is not set
CONFIG_HID_STEELSERIES=m
CONFIG_HID_SUNPLUS=m
CONFIG_HID_RMI=m
CONFIG_HID_GREENASIA=m
CONFIG_GREENASIA_FF=y
CONFIG_HID_HYPERV_MOUSE=m
CONFIG_HID_SMARTJOYPLUS=m
CONFIG_SMARTJOYPLUS_FF=y
CONFIG_HID_TIVO=m
CONFIG_HID_TOPSEED=m
CONFIG_HID_TOPRE=m
CONFIG_HID_THINGM=m
CONFIG_HID_THRUSTMASTER=m
CONFIG_THRUSTMASTER_FF=y
CONFIG_HID_UDRAW_PS3=m
CONFIG_HID_U2FZERO=m
CONFIG_HID_WACOM=m
CONFIG_HID_WIIMOTE=m
CONFIG_HID_XINMO=m
CONFIG_HID_ZEROPLUS=m
CONFIG_ZEROPLUS_FF=y
CONFIG_HID_ZYDACRON=m
CONFIG_HID_SENSOR_HUB=m
CONFIG_HID_SENSOR_CUSTOM_SENSOR=m
CONFIG_HID_ALPS=m
CONFIG_HID_MCP2221=m
# end of Special HID drivers

#
# HID-BPF support
#
# CONFIG_HID_BPF is not set
# end of HID-BPF support

#
# USB HID support
#
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y

#
# USB HID Boot Protocol drivers
#
CONFIG_USB_KBD=m
CONFIG_USB_MOUSE=m
# end of USB HID Boot Protocol drivers
# end of USB HID support

CONFIG_I2C_HID=m
CONFIG_I2C_HID_ACPI=m
# CONFIG_I2C_HID_OF is not set
CONFIG_I2C_HID_CORE=m

#
# Intel ISH HID support
#
CONFIG_INTEL_ISH_HID=m
CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER=m
# end of Intel ISH HID support

#
# AMD SFH HID Support
#
CONFIG_AMD_SFH_HID=m
# end of AMD SFH HID Support

#
# Surface System Aggregator Module HID support
#
CONFIG_SURFACE_HID=m
CONFIG_SURFACE_KBD=m
# end of Surface System Aggregator Module HID support

CONFIG_SURFACE_HID_CORE=m
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_LED_TRIG=y
CONFIG_USB_ULPI_BUS=m
CONFIG_USB_CONN_GPIO=m
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_FEW_INIT_RETRIES is not set
CONFIG_USB_DYNAMIC_MINORS=y
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
CONFIG_USB_LEDS_TRIGGER_USBPORT=m
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_MON=m

#
# USB Host Controller Drivers
#
CONFIG_USB_C67X00_HCD=m
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_XHCI_DBGCAP=y
CONFIG_USB_XHCI_PCI=m
CONFIG_USB_XHCI_PCI_RENESAS=m
CONFIG_USB_XHCI_PLATFORM=m
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
CONFIG_USB_EHCI_FSL=m
CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_USB_OXU210HP_HCD=m
CONFIG_USB_ISP116X_HCD=m
CONFIG_USB_MAX3421_HCD=m
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
CONFIG_USB_OHCI_HCD_PLATFORM=y
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_SL811_HCD=m
CONFIG_USB_SL811_HCD_ISO=y
CONFIG_USB_SL811_CS=m
CONFIG_USB_R8A66597_HCD=m
CONFIG_USB_HCD_BCMA=m
CONFIG_USB_HCD_SSB=m
# CONFIG_USB_HCD_TEST_MODE is not set
CONFIG_USB_XEN_HCD=m

#
# USB Device Class drivers
#
CONFIG_USB_ACM=m
CONFIG_USB_PRINTER=m
CONFIG_USB_WDM=m
CONFIG_USB_TMC=m

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
CONFIG_USB_STORAGE_REALTEK=m
CONFIG_REALTEK_AUTOPM=y
CONFIG_USB_STORAGE_DATAFAB=m
CONFIG_USB_STORAGE_FREECOM=m
CONFIG_USB_STORAGE_ISD200=m
CONFIG_USB_STORAGE_USBAT=m
CONFIG_USB_STORAGE_SDDR09=m
CONFIG_USB_STORAGE_SDDR55=m
CONFIG_USB_STORAGE_JUMPSHOT=m
CONFIG_USB_STORAGE_ALAUDA=m
CONFIG_USB_STORAGE_ONETOUCH=m
CONFIG_USB_STORAGE_KARMA=m
CONFIG_USB_STORAGE_CYPRESS_ATACB=m
CONFIG_USB_STORAGE_ENE_UB6250=m
CONFIG_USB_UAS=m

#
# USB Imaging devices
#
CONFIG_USB_MDC800=m
CONFIG_USB_MICROTEK=m
CONFIG_USBIP_CORE=m
CONFIG_USBIP_VHCI_HCD=m
CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=1
CONFIG_USBIP_HOST=m
CONFIG_USBIP_VUDC=m
# CONFIG_USBIP_DEBUG is not set

#
# USB dual-mode controller drivers
#
CONFIG_USB_CDNS_SUPPORT=m
CONFIG_USB_CDNS_HOST=y
CONFIG_USB_CDNS3=m
CONFIG_USB_CDNS3_GADGET=y
CONFIG_USB_CDNS3_HOST=y
CONFIG_USB_CDNS3_PCI_WRAP=m
CONFIG_USB_CDNSP_PCI=m
CONFIG_USB_CDNSP_GADGET=y
CONFIG_USB_CDNSP_HOST=y
CONFIG_USB_MUSB_HDRC=m
# CONFIG_USB_MUSB_HOST is not set
# CONFIG_USB_MUSB_GADGET is not set
CONFIG_USB_MUSB_DUAL_ROLE=y

#
# Platform Glue Layer
#

#
# MUSB DMA mode
#
CONFIG_MUSB_PIO_ONLY=y
CONFIG_USB_DWC3=m
CONFIG_USB_DWC3_ULPI=y
# CONFIG_USB_DWC3_HOST is not set
# CONFIG_USB_DWC3_GADGET is not set
CONFIG_USB_DWC3_DUAL_ROLE=y

#
# Platform Glue Driver Support
#
CONFIG_USB_DWC3_PCI=m
CONFIG_USB_DWC3_HAPS=m
CONFIG_USB_DWC2=y
CONFIG_USB_DWC2_HOST=y

#
# Gadget/Dual-role mode requires USB Gadget support to be enabled
#
CONFIG_USB_DWC2_PCI=m
# CONFIG_USB_DWC2_DEBUG is not set
# CONFIG_USB_DWC2_TRACK_MISSED_SOFS is not set
CONFIG_USB_CHIPIDEA=m
CONFIG_USB_CHIPIDEA_UDC=y
CONFIG_USB_CHIPIDEA_HOST=y
CONFIG_USB_CHIPIDEA_PCI=m
CONFIG_USB_CHIPIDEA_MSM=m
CONFIG_USB_CHIPIDEA_GENERIC=m
CONFIG_USB_ISP1760=m
CONFIG_USB_ISP1760_HCD=y
CONFIG_USB_ISP1761_UDC=y
# CONFIG_USB_ISP1760_HOST_ROLE is not set
# CONFIG_USB_ISP1760_GADGET_ROLE is not set
CONFIG_USB_ISP1760_DUAL_ROLE=y

#
# USB port drivers
#
CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_SIMPLE=m
CONFIG_USB_SERIAL_AIRCABLE=m
CONFIG_USB_SERIAL_ARK3116=m
CONFIG_USB_SERIAL_BELKIN=m
CONFIG_USB_SERIAL_CH341=m
CONFIG_USB_SERIAL_WHITEHEAT=m
CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
CONFIG_USB_SERIAL_CP210X=m
CONFIG_USB_SERIAL_CYPRESS_M8=m
CONFIG_USB_SERIAL_EMPEG=m
CONFIG_USB_SERIAL_FTDI_SIO=m
CONFIG_USB_SERIAL_VISOR=m
CONFIG_USB_SERIAL_IPAQ=m
CONFIG_USB_SERIAL_IR=m
CONFIG_USB_SERIAL_EDGEPORT=m
CONFIG_USB_SERIAL_EDGEPORT_TI=m
CONFIG_USB_SERIAL_F81232=m
CONFIG_USB_SERIAL_F8153X=m
CONFIG_USB_SERIAL_GARMIN=m
CONFIG_USB_SERIAL_IPW=m
CONFIG_USB_SERIAL_IUU=m
CONFIG_USB_SERIAL_KEYSPAN_PDA=m
CONFIG_USB_SERIAL_KEYSPAN=m
CONFIG_USB_SERIAL_KLSI=m
CONFIG_USB_SERIAL_KOBIL_SCT=m
CONFIG_USB_SERIAL_MCT_U232=m
CONFIG_USB_SERIAL_METRO=m
CONFIG_USB_SERIAL_MOS7720=m
CONFIG_USB_SERIAL_MOS7715_PARPORT=y
CONFIG_USB_SERIAL_MOS7840=m
CONFIG_USB_SERIAL_MXUPORT=m
CONFIG_USB_SERIAL_NAVMAN=m
CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_OTI6858=m
CONFIG_USB_SERIAL_QCAUX=m
CONFIG_USB_SERIAL_QUALCOMM=m
CONFIG_USB_SERIAL_SPCP8X5=m
CONFIG_USB_SERIAL_SAFE=m
# CONFIG_USB_SERIAL_SAFE_PADDED is not set
CONFIG_USB_SERIAL_SIERRAWIRELESS=m
CONFIG_USB_SERIAL_SYMBOL=m
CONFIG_USB_SERIAL_TI=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_WWAN=m
CONFIG_USB_SERIAL_OPTION=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_SERIAL_OPTICON=m
CONFIG_USB_SERIAL_XSENS_MT=m
CONFIG_USB_SERIAL_WISHBONE=m
CONFIG_USB_SERIAL_SSU100=m
CONFIG_USB_SERIAL_QT2=m
CONFIG_USB_SERIAL_UPD78F0730=m
CONFIG_USB_SERIAL_XR=m
CONFIG_USB_SERIAL_DEBUG=m

#
# USB Miscellaneous drivers
#
CONFIG_USB_USS720=m
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_ADUTUX=m
CONFIG_USB_SEVSEG=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
CONFIG_USB_CYPRESS_CY7C63=m
CONFIG_USB_CYTHERM=m
CONFIG_USB_IDMOUSE=m
CONFIG_USB_APPLEDISPLAY=m
CONFIG_APPLE_MFI_FASTCHARGE=m
CONFIG_USB_SISUSBVGA=m
CONFIG_USB_LD=m
CONFIG_USB_TRANCEVIBRATOR=m
CONFIG_USB_IOWARRIOR=m
CONFIG_USB_TEST=m
CONFIG_USB_EHSET_TEST_FIXTURE=m
CONFIG_USB_ISIGHTFW=m
CONFIG_USB_YUREX=m
CONFIG_USB_EZUSB_FX2=m
CONFIG_USB_HUB_USB251XB=m
CONFIG_USB_HSIC_USB3503=m
CONFIG_USB_HSIC_USB4604=m
CONFIG_USB_LINK_LAYER_TEST=m
CONFIG_USB_CHAOSKEY=m
CONFIG_USB_ATM=m
CONFIG_USB_SPEEDTOUCH=m
CONFIG_USB_CXACRU=m
CONFIG_USB_UEAGLEATM=m
CONFIG_USB_XUSBATM=m

#
# USB Physical Layer drivers
#
CONFIG_USB_PHY=y
CONFIG_NOP_USB_XCEIV=m
CONFIG_USB_GPIO_VBUS=m
CONFIG_TAHVO_USB=m
CONFIG_TAHVO_USB_HOST_BY_DEFAULT=y
CONFIG_USB_ISP1301=m
# end of USB Physical Layer drivers

CONFIG_USB_GADGET=m
# CONFIG_USB_GADGET_DEBUG is not set
# CONFIG_USB_GADGET_DEBUG_FILES is not set
# CONFIG_USB_GADGET_DEBUG_FS is not set
CONFIG_USB_GADGET_VBUS_DRAW=2
CONFIG_USB_GADGET_STORAGE_NUM_BUFFERS=2
CONFIG_U_SERIAL_CONSOLE=y

#
# USB Peripheral Controller
#
CONFIG_USB_GR_UDC=m
CONFIG_USB_R8A66597=m
CONFIG_USB_PXA27X=m
CONFIG_USB_MV_UDC=m
CONFIG_USB_MV_U3D=m
CONFIG_USB_SNP_CORE=m
# CONFIG_USB_M66592 is not set
CONFIG_USB_BDC_UDC=m
CONFIG_USB_AMD5536UDC=m
CONFIG_USB_NET2272=m
CONFIG_USB_NET2272_DMA=y
CONFIG_USB_NET2280=m
CONFIG_USB_GOKU=m
CONFIG_USB_EG20T=m
CONFIG_USB_MAX3420_UDC=m
# CONFIG_USB_CDNS2_UDC is not set
# CONFIG_USB_DUMMY_HCD is not set
# end of USB Peripheral Controller

CONFIG_USB_LIBCOMPOSITE=m
CONFIG_USB_F_ACM=m
CONFIG_USB_F_SS_LB=m
CONFIG_USB_U_SERIAL=m
CONFIG_USB_U_ETHER=m
CONFIG_USB_U_AUDIO=m
CONFIG_USB_F_SERIAL=m
CONFIG_USB_F_OBEX=m
CONFIG_USB_F_NCM=m
CONFIG_USB_F_ECM=m
CONFIG_USB_F_PHONET=m
CONFIG_USB_F_EEM=m
CONFIG_USB_F_SUBSET=m
CONFIG_USB_F_RNDIS=m
CONFIG_USB_F_MASS_STORAGE=m
CONFIG_USB_F_FS=m
CONFIG_USB_F_UAC1=m
CONFIG_USB_F_UAC1_LEGACY=m
CONFIG_USB_F_UAC2=m
CONFIG_USB_F_UVC=m
CONFIG_USB_F_MIDI=m
CONFIG_USB_F_HID=m
CONFIG_USB_F_PRINTER=m
CONFIG_USB_F_TCM=m
CONFIG_USB_CONFIGFS=m
CONFIG_USB_CONFIGFS_SERIAL=y
CONFIG_USB_CONFIGFS_ACM=y
CONFIG_USB_CONFIGFS_OBEX=y
CONFIG_USB_CONFIGFS_NCM=y
CONFIG_USB_CONFIGFS_ECM=y
CONFIG_USB_CONFIGFS_ECM_SUBSET=y
CONFIG_USB_CONFIGFS_RNDIS=y
CONFIG_USB_CONFIGFS_EEM=y
CONFIG_USB_CONFIGFS_PHONET=y
CONFIG_USB_CONFIGFS_MASS_STORAGE=y
CONFIG_USB_CONFIGFS_F_LB_SS=y
CONFIG_USB_CONFIGFS_F_FS=y
CONFIG_USB_CONFIGFS_F_UAC1=y
CONFIG_USB_CONFIGFS_F_UAC1_LEGACY=y
CONFIG_USB_CONFIGFS_F_UAC2=y
CONFIG_USB_CONFIGFS_F_MIDI=y
# CONFIG_USB_CONFIGFS_F_MIDI2 is not set
CONFIG_USB_CONFIGFS_F_HID=y
CONFIG_USB_CONFIGFS_F_UVC=y
CONFIG_USB_CONFIGFS_F_PRINTER=y
CONFIG_USB_CONFIGFS_F_TCM=y

#
# USB Gadget precomposed configurations
#
CONFIG_USB_ZERO=m
CONFIG_USB_AUDIO=m
CONFIG_GADGET_UAC1=y
# CONFIG_GADGET_UAC1_LEGACY is not set
CONFIG_USB_ETH=m
CONFIG_USB_ETH_RNDIS=y
CONFIG_USB_ETH_EEM=y
CONFIG_USB_G_NCM=m
CONFIG_USB_GADGETFS=m
CONFIG_USB_FUNCTIONFS=m
CONFIG_USB_FUNCTIONFS_ETH=y
CONFIG_USB_FUNCTIONFS_RNDIS=y
CONFIG_USB_FUNCTIONFS_GENERIC=y
CONFIG_USB_MASS_STORAGE=m
CONFIG_USB_GADGET_TARGET=m
CONFIG_USB_G_SERIAL=m
CONFIG_USB_MIDI_GADGET=m
CONFIG_USB_G_PRINTER=m
CONFIG_USB_CDC_COMPOSITE=m
CONFIG_USB_G_NOKIA=m
CONFIG_USB_G_ACM_MS=m
# CONFIG_USB_G_MULTI is not set
CONFIG_USB_G_HID=m
CONFIG_USB_G_DBGP=m
# CONFIG_USB_G_DBGP_PRINTK is not set
CONFIG_USB_G_DBGP_SERIAL=y
CONFIG_USB_G_WEBCAM=m
CONFIG_USB_RAW_GADGET=m
# end of USB Gadget precomposed configurations

CONFIG_TYPEC=m
CONFIG_TYPEC_TCPM=m
CONFIG_TYPEC_TCPCI=m
CONFIG_TYPEC_RT1711H=m
CONFIG_TYPEC_MT6360=m
CONFIG_TYPEC_TCPCI_MT6370=m
CONFIG_TYPEC_TCPCI_MAXIM=m
CONFIG_TYPEC_FUSB302=m
CONFIG_TYPEC_WCOVE=m
CONFIG_TYPEC_UCSI=m
CONFIG_UCSI_CCG=m
CONFIG_UCSI_ACPI=m
CONFIG_UCSI_STM32G0=m
CONFIG_TYPEC_TPS6598X=m
CONFIG_TYPEC_ANX7411=m
CONFIG_TYPEC_RT1719=m
CONFIG_TYPEC_HD3SS3220=m
CONFIG_TYPEC_STUSB160X=m
CONFIG_TYPEC_WUSB3801=m

#
# USB Type-C Multiplexer/DeMultiplexer Switch support
#
CONFIG_TYPEC_MUX_FSA4480=m
# CONFIG_TYPEC_MUX_GPIO_SBU is not set
CONFIG_TYPEC_MUX_PI3USB30532=m
CONFIG_TYPEC_MUX_INTEL_PMC=m
# CONFIG_TYPEC_MUX_NB7VPQ904M is not set
# end of USB Type-C Multiplexer/DeMultiplexer Switch support

#
# USB Type-C Alternate Mode drivers
#
CONFIG_TYPEC_DP_ALTMODE=m
CONFIG_TYPEC_NVIDIA_ALTMODE=m
# end of USB Type-C Alternate Mode drivers

CONFIG_USB_ROLE_SWITCH=y
CONFIG_USB_ROLES_INTEL_XHCI=m
CONFIG_MMC=y
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set
CONFIG_MMC_CRYPTO=y

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_IO_ACCESSORS=y
CONFIG_MMC_SDHCI_PCI=m
CONFIG_MMC_RICOH_MMC=y
CONFIG_MMC_SDHCI_ACPI=m
CONFIG_MMC_SDHCI_PLTFM=m
CONFIG_MMC_SDHCI_F_SDH30=m
CONFIG_MMC_WBSD=m
CONFIG_MMC_ALCOR=m
CONFIG_MMC_TIFM_SD=m
CONFIG_MMC_SPI=m
CONFIG_MMC_SDRICOH_CS=m
CONFIG_MMC_CB710=m
CONFIG_MMC_VIA_SDMMC=m
CONFIG_MMC_VUB300=m
CONFIG_MMC_USHC=m
CONFIG_MMC_USDHI6ROL0=m
CONFIG_MMC_REALTEK_PCI=m
CONFIG_MMC_REALTEK_USB=m
CONFIG_MMC_CQHCI=m
# CONFIG_MMC_HSQ is not set
CONFIG_MMC_TOSHIBA_PCI=m
CONFIG_MMC_MTK=m
CONFIG_MMC_SDHCI_XENON=m
CONFIG_SCSI_UFSHCD=m
CONFIG_SCSI_UFS_BSG=y
CONFIG_SCSI_UFS_CRYPTO=y
# CONFIG_SCSI_UFS_HWMON is not set
CONFIG_SCSI_UFSHCD_PCI=m
CONFIG_SCSI_UFS_DWC_TC_PCI=m
CONFIG_SCSI_UFSHCD_PLATFORM=m
CONFIG_SCSI_UFS_CDNS_PLATFORM=m
CONFIG_MEM