* v4.11-rc blk-mq lockup?
From: Bart Van Assche @ 2017-03-27 21:44 UTC
  To: axboe; +Cc: linux-block

Hello Jens,

If I leave the srp-test software running for a few minutes using the
following command:

# while ~bart/software/infiniband/srp-test/run_tests -d -r 30; do :; done

then after some time the following complaint appears for multiple
kworkers:

INFO: task kworker/9:0:65 blocked for more than 480 seconds.
      Tainted: G          I     4.11.0-rc4-dbg+ #5
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/9:0     D    0    65      2 0x00000000
Workqueue: dio/dm-0 dio_aio_complete_work
Call Trace:
 __schedule+0x3df/0xc10
 schedule+0x38/0x90
 rwsem_down_write_failed+0x2c4/0x4c0
 call_rwsem_down_write_failed+0x17/0x30
 down_write+0x5a/0x70
 __generic_file_fsync+0x43/0x90
 ext4_sync_file+0x2d0/0x550
 vfs_fsync_range+0x46/0xa0
 dio_complete+0x181/0x1b0
 dio_aio_complete_work+0x17/0x20
 process_one_work+0x208/0x6a0
 worker_thread+0x49/0x4a0
 kthread+0x107/0x140
 ret_from_fork+0x2e/0x40
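
The 480 seconds above is the hung task watchdog interval; for faster
reproduction it can be lowered via sysctl (a minimal sketch, assuming the
standard kernel.hung_task_timeout_secs tunable):

# sysctl kernel.hung_task_timeout_secs
# sysctl -w kernel.hung_task_timeout_secs=120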

I had not yet observed this behavior with kernel v4.10 or older. If this
happens and I check the queue state with the following script:

#!/bin/bash

cd /sys/class/block || exit $?
for dev in *; do
    if [ -e "$dev/mq" ]; then
        echo "$dev"
        for f in "$dev"/mq/*/{pending,*/rq_list}; do
            [ -e "$f" ] || continue
            if { read -r line1 && read -r line2; } <"$f"; then
                echo "$f"
                echo "$line1 $line2" >/dev/null
                head -n 9 "$f"
            fi
        done
        (
            cd /sys/kernel/debug/block >&/dev/null &&
            for d in "$dev"/mq/*; do
                [ ! -d "$d" ] && continue
                grep -q '^busy=0$' "$d/tags" && continue
                for f in "$d"/{dispatch,tags*,cpu*/rq_list}; do
                    [ -e "$f" ] && grep -aH '' "$f"
                done
            done
        )
    fi
done

then the following output appears:

sda
sdb
sdc
sdd
sdd/mq/3/dispatch:ffff880401655d00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=59, .internal_tag=-1}
sdd/mq/3/tags:nr_tags=62
sdd/mq/3/tags:nr_reserved_tags=0
sdd/mq/3/tags:active_queues=0
sdd/mq/3/tags:
sdd/mq/3/tags:bitmap_tags:
sdd/mq/3/tags:depth=62
sdd/mq/3/tags:busy=31
sdd/mq/3/tags:bits_per_word=8
sdd/mq/3/tags:map_nr=8
sdd/mq/3/tags:alloc_hint={23, 23, 52, 1, 55, 29, 17, 22, 34, 48, 25, 49, 37, 43, 58, 25, 6, 20, 50, 14, 55, 7, 21, 17, 26, 36, 43, 43, 4, 6, 3, 47}
sdd/mq/3/tags:wake_batch=7
sdd/mq/3/tags:wake_index=0
sdd/mq/3/tags:ws={
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/3/tags:}
sdd/mq/3/tags:round_robin=0
sdd/mq/3/tags_bitmap:00000000: ffff ff1f 0000 0018
sdd/mq/3/cpu5/rq_list:ffff880401657440 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=60, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037aba0000 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=0, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037aba1740 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=1, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037aba2e80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=2, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037aba45c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=3, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037aba5d00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=4, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037aba7440 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=5, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037aba8b80 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=6, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037abaa2c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=7, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037ababa00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=8, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037abad140 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=9, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88037abae880 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=10, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff880369900000 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=11, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff880369901740 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=12, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff880369902e80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=13, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8803699045c0 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=14, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff880369905d00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=15, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff880369907440 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=16, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff880369908b80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=17, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88036990a2c0 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=18, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88036990ba00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=19, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88036990d140 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=20, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff88036990e880 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=21, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8804001b0000 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=22, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8804001b1740 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=23, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8804001b2e80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=24, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8804001b45c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=25, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8804001b5d00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=26, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8804001b7440 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=27, .internal_tag=-1}
sdd/mq/3/cpu5/rq_list:ffff8804001b8b80 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=28, .internal_tag=-1}
sde
sde/mq/3/tags:nr_tags=62
sde/mq/3/tags:nr_reserved_tags=0
sde/mq/3/tags:active_queues=0
sde/mq/3/tags:
sde/mq/3/tags:bitmap_tags:
sde/mq/3/tags:depth=62
sde/mq/3/tags:busy=31
sde/mq/3/tags:bits_per_word=8
sde/mq/3/tags:map_nr=8
sde/mq/3/tags:alloc_hint={23, 23, 52, 1, 55, 29, 17, 22, 34, 48, 25, 49, 37, 43, 58, 25, 6, 20, 50, 14, 55, 7, 21, 17, 26, 36, 43, 43, 4, 6, 3, 47}
sde/mq/3/tags:wake_batch=7
sde/mq/3/tags:wake_index=0
sde/mq/3/tags:ws={
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/3/tags:}
sde/mq/3/tags:round_robin=0
sde/mq/3/tags_bitmap:00000000: ffff ff1f 0000 0018
sdf
sdg
sdh
sdi
sdj
sr0

I am using the "none" scheduler:

# cat /sys/class/block/sdd/queue/scheduler
[none]
# cat /sys/class/block/sde/queue/scheduler
[none]

What is remarkable is that I see pending requests for the sd* devices
but not for any dm* device, and also that the number of busy requests (31)
is exactly half of the queue depth (62). Could this indicate that the
block layer stopped processing these blk-mq queues?
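
For a quick cross-check of that ratio, a loop over the same debugfs 'tags'
files prints busy against depth for every hardware context (a sketch; it
assumes debugfs is mounted at /sys/kernel/debug, as in the paths above):

for t in /sys/kernel/debug/block/*/mq/*/tags; do
    printf '%s: ' "$t"                     # which device/hctx this is
    grep -E '^(depth|busy)=' "$t" | tr '\n' ' '
    echo
done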

If this happens and I run the following command to trigger SRP logout:

# for p in /sys/class/srp_remote_ports/*; do echo 1 >$p/delete; done

then the test that was running finishes and reports that removing the
multipath device failed, and echo w >/proc/sysrq-trigger produces the
following output:

sysrq: SysRq : Show Blocked State
  task                        PC stack   pid father
systemd-udevd   D    0 14490    508 0x00000106
Call Trace:
 __schedule+0x3df/0xc10
 schedule+0x38/0x90
 io_schedule+0x11/0x40
 __lock_page+0x10c/0x140
 truncate_inode_pages_range+0x45d/0x780
 truncate_inode_pages+0x10/0x20
 kill_bdev+0x30/0x40
 __blkdev_put+0x71/0x220
 blkdev_put+0x49/0x170
 blkdev_close+0x20/0x30
 __fput+0xe8/0x1f0
 ____fput+0x9/0x10
 task_work_run+0x80/0xb0
 do_exit+0x30c/0xc70
 do_group_exit+0x4b/0xc0
 get_signal+0x2c2/0x930
 do_signal+0x23/0x670
 exit_to_usermode_loop+0x5d/0xa0
 do_syscall_64+0xd5/0x140
 entry_SYSCALL64_slow_path+0x25/0x25

Does this indicate that truncate_inode_pages_range() is waiting
because a block layer queue got stuck?

The kernel tree I used in my tests is the result of merging the
following commits:
* commit 3dca2c2f3d3b from git://git.kernel.dk/linux-block.git
  ("Merge branch 'for-4.12/block' into for-next")
* commit f88ab0c4b481 from git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
  ("scsi: libsas: fix ata xfer length")
* commit ad0376eb1483 from git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  ("Merge tag 'edac_for_4.11_2' of git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp")

Please let me know if you need more information.

Thanks,

Bart.

* Re: v4.11-rc blk-mq lockup?
From: Jens Axboe @ 2017-03-28 14:06 UTC
  To: Bart Van Assche; +Cc: linux-block

On Mon, Mar 27 2017, Bart Van Assche wrote:
> Hello Jens,
> 
> If I leave the srp-test software running for a few minutes using the
> following command:
> 
> # while ~bart/software/infiniband/srp-test/run_tests -d -r 30; do :; done
> 
> then after some time the following complaint appears for multiple
> kworkers:
> 
> INFO: task kworker/9:0:65 blocked for more than 480 seconds.
>       Tainted: G          I     4.11.0-rc4-dbg+ #5
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> kworker/9:0     D    0    65      2 0x00000000
> Workqueue: dio/dm-0 dio_aio_complete_work
> Call Trace:
>  __schedule+0x3df/0xc10
>  schedule+0x38/0x90
>  rwsem_down_write_failed+0x2c4/0x4c0
>  call_rwsem_down_write_failed+0x17/0x30
>  down_write+0x5a/0x70
>  __generic_file_fsync+0x43/0x90
>  ext4_sync_file+0x2d0/0x550
>  vfs_fsync_range+0x46/0xa0
>  dio_complete+0x181/0x1b0
>  dio_aio_complete_work+0x17/0x20
>  process_one_work+0x208/0x6a0
>  worker_thread+0x49/0x4a0
>  kthread+0x107/0x140
>  ret_from_fork+0x2e/0x40
> 
> I had not yet observed this behavior with kernel v4.10 or older. If this
> happens and I check the queue state with the following script:

Can you include the 'state' file in your script?

Do you know when this started happening? You say it doesn't happen in
4.10, but did it pass earlier in the 4.11-rc cycle?

Does it reproduce with dm?

I can't tell from your report whether this is new in the 4.11 series.

> The kernel tree I used in my tests is the result of merging the
> following commits:
> * commit 3dca2c2f3d3b from git://git.kernel.dk/linux-block.git
>   ("Merge branch 'for-4.12/block' into for-next")
> * commit f88ab0c4b481 from git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
>   ("scsi: libsas: fix ata xfer length")
> * commit ad0376eb1483 from git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
>   ("Merge tag 'edac_for_4.11_2' of git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp")

Can we try and isolate it a bit - -rc4 alone, for instance?

-- 
Jens Axboe

* Re: v4.11-rc blk-mq lockup?
From: Bart Van Assche @ 2017-03-28 16:25 UTC
  To: axboe; +Cc: linux-block

On Tue, 2017-03-28 at 08:06 -0600, Jens Axboe wrote:
> On Mon, Mar 27 2017, Bart Van Assche wrote:
> > Hello Jens,
> > 
> > If I leave the srp-test software running for a few minutes using the
> > following command:
> > 
> > # while ~bart/software/infiniband/srp-test/run_tests -d -r 30; do :; done
> > 
> > then after some time the following complaint appears for multiple
> > kworkers:
> > 
> > INFO: task kworker/9:0:65 blocked for more than 480 seconds.
> >       Tainted: G          I     4.11.0-rc4-dbg+ #5
> > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > kworker/9:0     D    0    65      2 0x00000000
> > Workqueue: dio/dm-0 dio_aio_complete_work
> > Call Trace:
> >  __schedule+0x3df/0xc10
> >  schedule+0x38/0x90
> >  rwsem_down_write_failed+0x2c4/0x4c0
> >  call_rwsem_down_write_failed+0x17/0x30
> >  down_write+0x5a/0x70
> >  __generic_file_fsync+0x43/0x90
> >  ext4_sync_file+0x2d0/0x550
> >  vfs_fsync_range+0x46/0xa0
> >  dio_complete+0x181/0x1b0
> >  dio_aio_complete_work+0x17/0x20
> >  process_one_work+0x208/0x6a0
> >  worker_thread+0x49/0x4a0
> >  kthread+0x107/0x140
> >  ret_from_fork+0x2e/0x40
> > 
> > I had not yet observed this behavior with kernel v4.10 or older. If this
> > happens and I check the queue state with the following script:
> 
> Can you include the 'state' file in your script?
> 
> Do you know when this started happening? You say it doesn't happen in
> 4.10, but did it pass earlier in the 4.11-rc cycle?
> 
> Does it reproduce with dm?
> 
> I can't tell from your report whether this is new in the 4.11 series.
> 
> > The kernel tree I used in my tests is the result of merging the
> > following commits:
> > * commit 3dca2c2f3d3b from git://git.kernel.dk/linux-block.git
> >   ("Merge branch 'for-4.12/block' into for-next")
> > * commit f88ab0c4b481 from git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
> >   ("scsi: libsas: fix ata xfer length")
> > * commit ad0376eb1483 from git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
> >   ("Merge tag 'edac_for_4.11_2' of git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp")
> 
> Can we try and isolate it a bit - -rc4 alone, for instance?

Hello Jens,

Sorry, but performing a bisect would be hard: without recent SCSI and block
layer fixes this test triggers other failures before it hits the lockup
reported in this e-mail. See e.g.
https://marc.info/?l=linux-scsi&m=148979716822799.

I do not know whether it would be possible to modify the test such that only
the dm driver is involved but no SCSI code.

When I reran the test this morning the hang was triggered by the 02-sq-on-mq
test. This means that dm was used in blk-sq mode and that blk-mq was used for
the ib_srp SCSI device instances.
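
A quick way to confirm which mode a given device ended up in is to test for
the mq directory in sysfs, the same check the script below relies on (a
sketch; the device names are just examples):

for dev in dm-0 sdd; do
    if [ -d "/sys/class/block/$dev/mq" ]; then
        echo "$dev: blk-mq"          # multiqueue path
    else
        echo "$dev: blk-sq"          # legacy single-queue path
    fi
done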

Please find below the updated script and its output.

---

#!/bin/bash

show_state() {
    local a dev=$1

    for a in device/state queue/scheduler; do
        [ -e "$dev/$a" ] && grep -aH '' "$dev/$a"
    done
}

cd /sys/class/block || exit $?
for dev in *; do
    if [ -e "$dev/mq" ]; then
        echo "$dev"
        pending=0
        for f in "$dev"/mq/*/{pending,*/rq_list}; do
            [ -e "$f" ] || continue
            if { read -r line1 && read -r line2; } <"$f"; then
                echo "$f"
                echo "$line1 $line2" >/dev/null
                head -n 9 "$f"
                ((pending++))
            fi
        done
        (
            busy=0
            cd /sys/kernel/debug/block >&/dev/null &&
            for d in "$dev"/mq/*; do
                [ ! -d "$d" ] && continue
                grep -q '^busy=0$' "$d/tags" && continue
                ((busy++))
                for f in "$d"/{dispatch,tags*,cpu*/rq_list}; do
                    [ -e "$f" ] && grep -aH '' "$f"
                done
            done
            exit $busy
        )
        pending=$((pending+$?))
        [ "$pending" -gt 0 ] && show_state "$dev"
    fi
done

---

sda
sdb
sdc
sdd
sde
sde/mq/2/dispatch:ffff8803f5b65d00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=37, .internal_tag=-1}
sde/mq/2/tags:nr_tags=62
sde/mq/2/tags:nr_reserved_tags=0
sde/mq/2/tags:active_queues=0
sde/mq/2/tags:
sde/mq/2/tags:bitmap_tags:
sde/mq/2/tags:depth=62
sde/mq/2/tags:busy=31
sde/mq/2/tags:bits_per_word=8
sde/mq/2/tags:map_nr=8
sde/mq/2/tags:alloc_hint={54, 43, 44, 43, 22, 42, 52, 4, 10, 7, 16, 32, 11, 17, 44, 26, 51, 59, 9, 45, 9, 55, 10, 44, 22, 46, 25, 25, 21, 18, 52, 32}
sde/mq/2/tags:wake_batch=7
sde/mq/2/tags:wake_index=0
sde/mq/2/tags:ws={
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/2/tags:}
sde/mq/2/tags:round_robin=0
sde/mq/2/tags_bitmap:00000000: 7f00 0000 e0ff ff1f
sde/mq/2/cpu9/rq_list:ffff8803f5b67440 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=38, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f5b68b80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=39, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f5b6a2c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=40, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f5b6ba00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=41, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f5b6d140 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=42, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f5b6e880 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=43, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ac0000 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=44, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ac1740 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=45, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ac2e80 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=46, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ac45c0 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=47, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ac5d00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=48, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ac7440 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=49, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ac8b80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=50, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373aca2c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=51, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373acba00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=52, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373acd140 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=53, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff880373ace880 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=54, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f4950000 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=55, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f4951740 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=56, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f4952e80 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=57, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f49545c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=58, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f4955d00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=59, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff8803f4957440 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=60, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff88036bfe0000 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=0, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff88036bfe1740 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=1, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff88036bfe2e80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=2, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff88036bfe45c0 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=3, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff88036bfe5d00 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=4, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff88036bfe7440 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=5, .internal_tag=-1}
sde/mq/2/cpu9/rq_list:ffff88036bfe8b80 {.cmd_flags=0x7a01, .rq_flags=0x2040, .tag=6, .internal_tag=-1}
sde/device/state:running
sde/queue/scheduler:[none]
sdf
sdf/mq/2/tags:nr_tags=62
sdf/mq/2/tags:nr_reserved_tags=0
sdf/mq/2/tags:active_queues=0
sdf/mq/2/tags:
sdf/mq/2/tags:bitmap_tags:
sdf/mq/2/tags:depth=62
sdf/mq/2/tags:busy=31
sdf/mq/2/tags:bits_per_word=8
sdf/mq/2/tags:map_nr=8
sdf/mq/2/tags:alloc_hint={54, 43, 44, 43, 22, 42, 52, 4, 10, 7, 16, 32, 11, 17, 44, 26, 51, 59, 9, 45, 9, 55, 10, 44, 22, 46, 25, 25, 21, 18, 52, 32}
sdf/mq/2/tags:wake_batch=7
sdf/mq/2/tags:wake_index=0
sdf/mq/2/tags:ws={
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:	{.wait_cnt=7, .wait=inactive},
sdf/mq/2/tags:}
sdf/mq/2/tags:round_robin=0
sdf/mq/2/tags_bitmap:00000000: 7f00 0000 e0ff ff1f
sdf/device/state:running
sdf/queue/scheduler:[none]
sdg
sdh
sdi
sdj
sr0

* Re: v4.11-rc blk-mq lockup?
From: Jens Axboe @ 2017-03-28 16:30 UTC
  To: Bart Van Assche; +Cc: linux-block

On 03/28/2017 10:25 AM, Bart Van Assche wrote:
> On Tue, 2017-03-28 at 08:06 -0600, Jens Axboe wrote:
>> On Mon, Mar 27 2017, Bart Van Assche wrote:
>>> Hello Jens,
>>>
>>> If I leave the srp-test software running for a few minutes using the
>>> following command:
>>>
>>> # while ~bart/software/infiniband/srp-test/run_tests -d -r 30; do :; done
>>>
>>> then after some time the following complaint appears for multiple
>>> kworkers:
>>>
>>> INFO: task kworker/9:0:65 blocked for more than 480 seconds.
>>>       Tainted: G          I     4.11.0-rc4-dbg+ #5
>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> kworker/9:0     D    0    65      2 0x00000000
>>> Workqueue: dio/dm-0 dio_aio_complete_work
>>> Call Trace:
>>>  __schedule+0x3df/0xc10
>>>  schedule+0x38/0x90
>>>  rwsem_down_write_failed+0x2c4/0x4c0
>>>  call_rwsem_down_write_failed+0x17/0x30
>>>  down_write+0x5a/0x70
>>>  __generic_file_fsync+0x43/0x90
>>>  ext4_sync_file+0x2d0/0x550
>>>  vfs_fsync_range+0x46/0xa0
>>>  dio_complete+0x181/0x1b0
>>>  dio_aio_complete_work+0x17/0x20
>>>  process_one_work+0x208/0x6a0
>>>  worker_thread+0x49/0x4a0
>>>  kthread+0x107/0x140
>>>  ret_from_fork+0x2e/0x40
>>>
>>> I had not yet observed this behavior with kernel v4.10 or older. If this
>>> happens and I check the queue state with the following script:
>>
>> Can you include the 'state' file in your script?
>>
>> Do you know when this started happening? You say it doesn't happen in
>> 4.10, but did it pass earlier in the 4.11-rc cycle?
>>
>> Does it reproduce with dm?
>>
>> I can't tell from your report whether this is new in the 4.11 series.
>>
>>> The kernel tree I used in my tests is the result of merging the
>>> following commits:
>>> * commit 3dca2c2f3d3b from git://git.kernel.dk/linux-block.git
>>>   ("Merge branch 'for-4.12/block' into for-next")
>>> * commit f88ab0c4b481 from git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
>>>   ("scsi: libsas: fix ata xfer length")
>>> * commit ad0376eb1483 from git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
>>>   ("Merge tag 'edac_for_4.11_2' of git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp")
>>
>> Can we try and isolate it a bit - -rc4 alone, for instance?
> 
> Hello Jens,
> 
> Sorry but performing a bisect would be hard: without recent SCSI and block
> layer fixes this test triggers other failures before the lockup reported in
> this e-mail is triggered. See e.g.
> https://marc.info/?l=linux-scsi&m=148979716822799.

Yeah, I realize that. Not necessarily a huge problem. If I can reproduce
it here, then I can poke enough at it to find out wtf is going on here.

> I do not know whether it would be possible to modify the test such that only
> the dm driver is involved but no SCSI code.

How about the other way around? Just SCSI, but no dm?

> When I reran the test this morning the hang was triggered by the 02-sq-on-mq
> test. This means that dm was used in blk-sq mode and that blk-mq was used for
> the ib_srp SCSI device instances.
> 
> Please find below the updated script and its output.

Thanks for running it again, but it's the wrong state file. I should have
been more clear. The one I'm interested in is in the mq/<num>/ directories,
like the 'tags' etc files.

> 
> ---
> 
> #!/bin/bash
> 
> show_state() {
>     local a dev=$1
> 
>     for a in device/state queue/scheduler; do
> 	[ -e "$dev/$a" ] && grep -aH '' "$dev/$a"
>     done
> }
> 
> cd /sys/class/block || exit $?
> for dev in *; do
>     if [ -e "$dev/mq" ]; then
> 	echo "$dev"
> 	pending=0
> 	for f in "$dev"/mq/*/{pending,*/rq_list}; do
> 	    [ -e "$f" ] || continue
> 	    if { read -r line1 && read -r line2; } <"$f"; then
> 		echo "$f"
> 		echo "$line1 $line2" >/dev/null
> 		head -n 9 "$f"
> 		((pending++))
> 	    fi
> 	done
> 	(
> 	    busy=0
> 	    cd /sys/kernel/debug/block >&/dev/null &&
> 	    for d in "$dev"/mq/*; do
> 		[ ! -d "$d" ] && continue
> 		grep -q '^busy=0$' "$d/tags" && continue
> 		((busy++))
> 	        for f in "$d"/{dispatch,tags*,cpu*/rq_list}; do

Ala:

        for f in "$d"/{dispatch,state,tags*,cpu*/rq_list}; do

Also, can you include the involved dm devices as well for this state
dump?

-- 
Jens Axboe

* Re: v4.11-rc blk-mq lockup?
From: Bart Van Assche @ 2017-03-29 20:36 UTC
  To: Jens Axboe; +Cc: linux-block

On 03/28/2017 09:30 AM, Jens Axboe wrote:
> On 03/28/2017 10:25 AM, Bart Van Assche wrote:
>> I do not know whether it would be possible to modify the test such that only
>> the dm driver is involved but no SCSI code.
> 
> How about the other way around? Just SCSI, but no dm?

Hello Jens,

Sorry, but it could take a long time to figure out how to reproduce this
issue if I start modifying the test. BTW, the patch I just posted
("blk-mq: Export queue state through /sys/kernel/debug/block/*/state")
allows me to trigger a blk-mq queue run from user space. If the lockup
occurs and I use that facility to trigger a queue run, the test proceeds.
The command I used to trigger a queue run is as follows:

for a in /sys/kernel/debug/block/*/state; do echo 1 >$a; wait; done
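A slightly expanded version of that kick, as a sketch (it assumes the
'state' attribute exported by the patch above is writable), triggers a run
on every blk-mq queue and then re-reads the busy counts:

for a in /sys/kernel/debug/block/*/state; do
    echo 1 > "$a"        # trigger a blk-mq queue run for this device
done
grep -aH '^busy=' /sys/kernel/debug/block/*/mq/*/tags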

> Thanks for running it again, but it's the wrong state file. I should have
> been more clear. The one I'm interested in is in the mq/<num>/ directories,
> like the 'tags' etc files.
> 
> Ala:
> 
>         for f in "$d"/{dispatch,state,tags*,cpu*/rq_list}; do

Ah, thanks, that makes it clear :-)

> Also, can you include the involved dm devices as well for this state
> dump?

I would like to, but the 02-sq-on-mq test configures the dm device
nodes in single queue mode and there is only information available
under /sys/kernel/debug/block/ for blk-mq devices ...

Anyway, the updated script:

#!/bin/bash

show_state() {
    local a dev=$1

    for a in device/state queue/scheduler; do
        [ -e "$dev/$a" ] && grep -aH '' "$dev/$a"
    done
}

cd /sys/class/block || exit $?
for dev in *; do
    if [ -e "$dev/mq" ]; then
        echo "$dev"
        pending=0
        for f in "$dev"/mq/*/{pending,*/rq_list}; do
            [ -e "$f" ] || continue
            if { read -r line1 && read -r line2; } <"$f"; then
                echo "$f"
                echo "$line1 $line2" >/dev/null
                head -n 9 "$f"
                ((pending++))
            fi
        done
        (
            busy=0
            cd /sys/kernel/debug/block >&/dev/null &&
            for d in "$dev"/mq/*; do
                [ ! -d "$d" ] && continue
                grep -q '^busy=0$' "$d/tags" && continue
                ((busy++))
                for f in "$d"/{dispatch,state,tags*,cpu*/rq_list}; do
                    [ -e "$f" ] && grep -aH '' "$f"
                done
            done
            exit $busy
        )
        pending=$((pending+$?))
        if [ "$pending" -gt 0 ]; then
            grep -aH '' /sys/kernel/debug/block/"$dev"/state
            show_state "$dev"
        fi
    fi
done

And the output for the test run of today:

sda
sdb
sdd
sdd/mq/0/dispatch:ffff88036437d140 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=53, .internal_tag=-1}
sdd/mq/0/state:0x4
sdd/mq/0/tags:nr_tags=62
sdd/mq/0/tags:nr_reserved_tags=0
sdd/mq/0/tags:active_queues=0
sdd/mq/0/tags:
sdd/mq/0/tags:bitmap_tags:
sdd/mq/0/tags:depth=62
sdd/mq/0/tags:busy=31
sdd/mq/0/tags:bits_per_word=8
sdd/mq/0/tags:map_nr=8
sdd/mq/0/tags:alloc_hint={48, 48, 38, 44, 54, 6, 52, 23, 30, 6, 51, 26, 61, 45, 9, 56, 55, 13, 44, 45, 12, 12, 23, 42, 44, 24, 41, 0, 54, 4, 4, 45}
sdd/mq/0/tags:wake_batch=7
sdd/mq/0/tags:wake_index=0
sdd/mq/0/tags:ws={
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sdd/mq/0/tags:}
sdd/mq/0/tags:round_robin=0
sdd/mq/0/tags_bitmap:00000000: ffff 7f00 0000 e01f
sdd/mq/0/cpu7/rq_list:ffff88036437e880 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=54, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f7ef0000 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=55, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f7ef1740 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=56, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f7ef2e80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=57, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f7ef45c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=58, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f7ef5d00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=59, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f7ef7440 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=60, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff880386760000 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=0, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff880386761740 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=1, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff880386762e80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=2, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803867645c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=3, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff880386765d00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=4, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff880386767440 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=5, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff880386768b80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=6, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff88038676a2c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=7, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff88038676ba00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=8, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff88038676d140 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=9, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff88038676e880 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=10, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f8650000 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=11, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f8651740 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=12, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f8652e80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=13, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f86545c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=14, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f8655d00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=15, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f8657440 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=16, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f8658b80 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=17, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f865a2c0 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=18, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f865ba00 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=19, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f865d140 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=20, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803f865e880 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=21, .internal_tag=-1}
sdd/mq/0/cpu7/rq_list:ffff8803fb630000 {.cmd_flags=0xca01, .rq_flags=0x2040, .tag=22, .internal_tag=-1}
/sys/kernel/debug/block/sdd/state:SAME_COMP STACKABLE IO_STAT INIT_DONE POLL
sdd/device/state:running
sdd/queue/scheduler:[none]
sde
sde/mq/0/state:0x0
sde/mq/0/tags:nr_tags=62
sde/mq/0/tags:nr_reserved_tags=0
sde/mq/0/tags:active_queues=0
sde/mq/0/tags:
sde/mq/0/tags:bitmap_tags:
sde/mq/0/tags:depth=62
sde/mq/0/tags:busy=31
sde/mq/0/tags:bits_per_word=8
sde/mq/0/tags:map_nr=8
sde/mq/0/tags:alloc_hint={48, 48, 38, 44, 54, 6, 52, 23, 30, 6, 51, 26, 61, 45, 9, 56, 55, 13, 44, 45, 12, 12, 23, 42, 44, 24, 41, 0, 54, 4, 4, 45}
sde/mq/0/tags:wake_batch=7
sde/mq/0/tags:wake_index=0
sde/mq/0/tags:ws={
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:	{.wait_cnt=7, .wait=inactive},
sde/mq/0/tags:}
sde/mq/0/tags:round_robin=0
sde/mq/0/tags_bitmap:00000000: ffff 7f00 0000 e01f
/sys/kernel/debug/block/sde/state:SAME_COMP STACKABLE IO_STAT INIT_DONE POLL
sde/device/state:running
sde/queue/scheduler:[none]
sdf
sdg
sdh
sdi
sdj
sdk
sr0
