From: Yi Zhang <yi.zhang@redhat.com>
To: linux-block <linux-block@vger.kernel.org>,
	"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>
Cc: Sagi Grimberg <sagi@grimberg.me>
Subject: [bug report] blktests nvme/tcp triggered WARNING at kernel/workqueue.c:2628 check_flush_dependency+0x110/0x14c
Date: Sat, 9 Jul 2022 00:03:23 +0800
Message-ID: <CAHj4cs86Dm577NK-C+bW6=+mv2V3KOpQCG0Vg6xZrSGzNijX4g@mail.gmail.com>

Hello

I reproduced this issue on linux-block/for-next. Please help check it,
and feel free to let me know if you need more info or testing. Thanks.

[ 6026.144114] run blktests nvme/012 at 2022-07-08 08:15:09
[ 6026.271866] loop0: detected capacity change from 0 to 2097152
[ 6026.294403] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 6026.322827] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[ 6026.347984] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:90390a00-4597-11e9-b935-3c18a0043981.
[ 6026.364007] nvme nvme0: creating 32 I/O queues.
[ 6026.380279] nvme nvme0: mapped 32/0/0 default/read/poll queues.
[ 6026.398481] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr 127.0.0.1:4420
[ 6027.653759] XFS (nvme0n1): Mounting V5 Filesystem
[ 6027.677423] XFS (nvme0n1): Ending clean mount
[ 6173.064201] XFS (nvme0n1): Unmounting Filesystem
[ 6173.656286] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
[ 6174.005589] ------------[ cut here ]------------
[ 6174.010200] workqueue: WQ_MEM_RECLAIM nvmet-wq:nvmet_tcp_release_queue_work [nvmet_tcp] is flushing !WQ_MEM_RECLAIM nvmet_tcp_wq:nvmet_tcp_io_work [nvmet_tcp]
[ 6174.010216] WARNING: CPU: 20 PID: 14456 at kernel/workqueue.c:2628 check_flush_dependency+0x110/0x14c
[ 6174.033579] Modules linked in: nvme_tcp nvme_fabrics nvmet_tcp
nvmet nvme nvme_core loop tls mlx4_ib ib_uverbs ib_core mlx4_en rfkill
sunrpc vfat fat joydev acpi_ipmi mlx4_core igb ipmi_ssif cppc_cpufreq
fuse zram xfs uas usb_storage dwc3 ulpi udc_core ast crct10dif_ce
drm_vram_helper ghash_ce drm_ttm_helper sbsa_gwdt ttm
i2c_xgene_slimpro ahci_platform gpio_dwapb xgene_hwmon xhci_plat_hcd
ipmi_devintf ipmi_msghandler [last unloaded: nvmet]
[ 6174.072622] CPU: 20 PID: 14456 Comm: kworker/20:8 Not tainted 5.19.0-rc5+ #1
[ 6174.079660] Hardware name: Lenovo HR350A            7X35CTO1WW/FALCON     , BIOS hve104q-1.14 06/25/2020
[ 6174.089474] Workqueue: nvmet-wq nvmet_tcp_release_queue_work [nvmet_tcp]
[ 6174.096168] pstate: 004000c5 (nzcv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 6174.103117] pc : check_flush_dependency+0x110/0x14c
[ 6174.107985] lr : check_flush_dependency+0x110/0x14c
[ 6174.112851] sp : ffff800026b2bb10
[ 6174.116153] x29: ffff800026b2bb10 x28: 0000000000000000 x27: ffff80000a94f240
[ 6174.123279] x26: ffff800009304a90 x25: 0000000000000001 x24: ffff80000a570448
[ 6174.130405] x23: ffff009f6c6d82a8 x22: fffffbffee9cea00 x21: ffff800001395430
[ 6174.137532] x20: ffff0008c0fb3000 x19: ffff00087b7dda00 x18: ffffffffffffffff
[ 6174.144657] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000006
[ 6174.151783] x14: 0000000000000001 x13: 204d49414c434552 x12: 5f4d454d5f515721
[ 6174.158909] x11: 00000000ffffdfff x10: ffff80000a53eb70 x9 : ffff80000824f754
[ 6174.166034] x8 : 000000000002ffe8 x7 : c0000000ffffdfff x6 : 00000000000affa8
[ 6174.173160] x5 : 0000000000001fff x4 : 0000000000000000 x3 : 0000000000000027
[ 6174.180286] x2 : 0000000000000002 x1 : ffff0008c6080000 x0 : 0000000000000092
[ 6174.187412] Call trace:
[ 6174.189847]  check_flush_dependency+0x110/0x14c
[ 6174.194367]  start_flush_work+0xd8/0x410
[ 6174.198278]  __flush_work+0x88/0xe0
[ 6174.201755]  __cancel_work_timer+0x118/0x194
[ 6174.206014]  cancel_work_sync+0x20/0x2c
[ 6174.209837]  nvmet_tcp_release_queue_work+0xcc/0x300 [nvmet_tcp]
[ 6174.215834]  process_one_work+0x2b8/0x704
[ 6174.219832]  worker_thread+0x80/0x42c
[ 6174.223483]  kthread+0xfc/0x110
[ 6174.226613]  ret_from_fork+0x10/0x20
[ 6174.230179] irq event stamp: 0
[ 6174.233221] hardirqs last  enabled at (0): [<0000000000000000>] 0x0
[ 6174.239478] hardirqs last disabled at (0): [<ffff800008198c44>] copy_process+0x674/0x14a0
[ 6174.247644] softirqs last  enabled at (0): [<ffff800008198c44>] copy_process+0x674/0x14a0
[ 6174.255809] softirqs last disabled at (0): [<0000000000000000>] 0x0
[ 6174.262063] ---[ end trace 0000000000000000 ]---
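
If I read the trace correctly, nvmet_tcp_release_queue_work() runs on the
WQ_MEM_RECLAIM workqueue nvmet-wq and calls cancel_work_sync() on work that
is queued to nvmet_tcp_wq, which apparently is not created with
WQ_MEM_RECLAIM; check_flush_dependency() warns because a workqueue that
participates in memory reclaim must not wait on one that has no rescuer and
so may stall under memory pressure. Below is a minimal out-of-tree sketch of
that pattern, just to illustrate my understanding of the warning; this is
NOT the nvmet code, and the module/workqueue/function names are made up. It
should trip the same WARN at kernel/workqueue.c:2628 when loaded, though the
exact timing of the two works can matter.

/*
 * Minimal sketch (made-up names) of the dependency the warning flags:
 * work running on a WQ_MEM_RECLAIM workqueue synchronously cancels work
 * queued on a !WQ_MEM_RECLAIM workqueue.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *reclaim_wq;	/* plays the role of nvmet-wq */
static struct workqueue_struct *plain_wq;	/* plays the role of nvmet_tcp_wq */

static void io_work_fn(struct work_struct *w)
{
	/* stand-in for nvmet_tcp_io_work(); sleep so it is still running */
	msleep(100);
}
static DECLARE_WORK(io_work, io_work_fn);

static void release_work_fn(struct work_struct *w)
{
	/*
	 * Stand-in for nvmet_tcp_release_queue_work(): this runs on a
	 * WQ_MEM_RECLAIM workqueue, so waiting here for io_work (on a
	 * !WQ_MEM_RECLAIM workqueue) is what check_flush_dependency() warns
	 * about.
	 */
	cancel_work_sync(&io_work);
}
static DECLARE_WORK(release_work, release_work_fn);

static int __init flushdep_demo_init(void)
{
	reclaim_wq = alloc_workqueue("flushdep_reclaim_wq", WQ_MEM_RECLAIM, 0);
	plain_wq = alloc_workqueue("flushdep_plain_wq", 0, 0);
	if (!reclaim_wq || !plain_wq)
		goto err;

	queue_work(plain_wq, &io_work);		/* io_work starts and sleeps */
	queue_work(reclaim_wq, &release_work);	/* cancel_work_sync() -> WARN */
	return 0;
err:
	if (reclaim_wq)
		destroy_workqueue(reclaim_wq);
	if (plain_wq)
		destroy_workqueue(plain_wq);
	return -ENOMEM;
}

static void __exit flushdep_demo_exit(void)
{
	/* destroy_workqueue() drains any remaining work before freeing */
	destroy_workqueue(reclaim_wq);
	destroy_workqueue(plain_wq);
}

module_init(flushdep_demo_init);
module_exit(flushdep_demo_exit);
MODULE_LICENSE("GPL");

If that reading is right, I'd guess either nvmet_tcp_wq needs WQ_MEM_RECLAIM
as well, or the synchronous cancel needs to move out of the reclaim path,
but that's only my assumption, not a tested fix.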

Best Regards,
  Yi Zhang

