From: Sagi Grimberg <sagi@grimberg.me>
To: Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Keith Busch <kbusch@kernel.org>
Subject: Re: Deadlock on failure to read NVMe namespace
Date: Tue, 19 Oct 2021 15:31:04 +0300
Message-ID: <e13114d1-b204-b89a-12bc-53ae049dfaa7@grimberg.me>
In-Reply-To: <87df4b26-19e7-6cc5-e973-23b6e98f4e19@suse.de>


> Hi all,
> 
> while testing my test target (https://github.com/hreinecke/nofuse) I've 
> managed to run into a deadlock (with nvme-5.16):
> 
> c481:~ # dmesg
> [102885.483587] nvme nvme0: sqsize 128 > ctrl maxcmd 32, clamping down
> [102885.483711] nvme nvme0: creating 2 I/O queues.
> [102885.484130] nvme nvme0: mapped 2/0/0 default/read/poll queues.
> [102885.485527] nvme nvme0: new ctrl: NQN 
> "nqn.2014-08.org.nvmexpress:uuid:62f37f51-0cc7-46d5-9865-4de22e81bd9d", 
> addr 127.0.0.1:8009
> [102885.487351] nvme nvme0: request 0x0 genctr mismatch (got 0x0 
> expected 0x1)
> [102885.487354] nvme nvme0: got bad c2hdata.command_id 0x0 on queue 2

Nice! The genctr is catching bugs already...
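
For the curious, this is roughly the check that fired (a sketch of the
5.15-era helpers in drivers/nvme/host/nvme.h, masks from memory): the
command_id carries a generation counter in its upper bits, so a stale
or made-up id from the target no longer resolves to a request:
--
#define nvme_genctr_mask(gen)		((gen) & 0xf)
#define nvme_genctr_from_cid(cid)	(((cid) & 0xf000) >> 12)
#define nvme_tag_from_cid(cid)		((cid) & 0xfff)

/* Resolve a wire command_id back to a request, rejecting stale ids */
static inline struct request *nvme_find_rq(struct blk_mq_tags *tags,
		u16 command_id)
{
	u8 genctr = nvme_genctr_from_cid(command_id);
	u16 tag = nvme_tag_from_cid(command_id);
	struct request *rq;

	rq = blk_mq_tag_to_rq(tags, tag);
	if (unlikely(!rq))
		return NULL;
	/* the "genctr mismatch (got 0x0 expected 0x1)" message above */
	if (unlikely(nvme_genctr_mask(nvme_req(rq)->genctr) != genctr)) {
		dev_err(nvme_req(rq)->ctrl->device,
			"request %#x genctr mismatch (got %#x expected %#x)\n",
			tag, genctr, nvme_genctr_mask(nvme_req(rq)->genctr));
		return NULL;
	}
	return rq;
}
--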

> [102885.487356] nvme nvme0: receive failed:  -2
> [102885.487357] nvme nvme0: starting error recovery
> [102885.487444] block nvme0n1: no usable path - requeuing I/O
> [102885.502051] nvme nvme0: Reconnecting in 10 seconds...
> [102893.291877] nvme nvme0: Removing ctrl: NQN 

Did you trigger this removal? It's 2 seconds before
the reconnect attempt was supposed to run.

> "nqn.2014-08.org.nvmexpress:uuid:62f37f51-0cc7-46d5-9865-4de22e81bd9d"
> [102895.535149] nvme nvme0: queue_size 128 > ctrl sqsize 32, clamping down
> [102895.535214] nvme nvme0: creating 2 I/O queues.
> [102895.535943] nvme nvme0: mapped 2/0/0 default/read/poll queues.
> [102895.536187] nvme nvme0: Failed reconnect attempt 1
> [103700.406060] INFO: task kworker/u4:0:14965 blocked for more than 491 
> seconds.
> [103700.406068]       Tainted: G            E     5.15.0-rc3-54-default+ 
> #862
> [103700.406071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
> disables this message.
> [103700.406073] task:kworker/u4:0    state:D stack:    0 pid:14965 ppid: 
>      2 flags:0x00004000
> [103700.406117] Workqueue: nvme-wq nvme_scan_work [nvme_core]
> [103700.406143] Call Trace:
> [103700.406146]  __schedule+0x302/0x13a0
> [103700.406157]  ? block_read_full_page+0x21e/0x3b0
> [103700.406164]  ? blkdev_direct_IO+0x4a0/0x4a0
> [103700.406172]  schedule+0x3a/0xa0
> [103700.406177]  io_schedule+0x12/0x40
> [103700.406182]  do_read_cache_page+0x49b/0x790
> 
c481:~ # cat /proc/15761/stack
> [<0>] nvme_mpath_clear_ctrl_paths+0x25/0x80 [nvme_core]
> [<0>] nvme_remove_namespaces+0x31/0xf0 [nvme_core]
> [<0>] nvme_do_delete_ctrl+0x4b/0x80 [nvme_core]
> [<0>] nvme_sysfs_delete+0x42/0x60 [nvme_core]
> [<0>] kernfs_fop_write_iter+0x12f/0x1a0
> [<0>] new_sync_write+0x122/0x1b0
> [<0>] vfs_write+0x1eb/0x250
> [<0>] ksys_write+0xa1/0xe0
> [<0>] do_syscall_64+0x3a/0x80
> [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xae
> c481:~ # cat /proc/14965/stack
> [<0>] do_read_cache_page+0x49b/0x790
> [<0>] read_part_sector+0x39/0xe0
> [<0>] read_lba+0xf9/0x1d0
> [<0>] efi_partition+0xf1/0x7f0
> [<0>] bdev_disk_changed+0x1ee/0x550
> [<0>] blkdev_get_whole+0x81/0x90
> [<0>] blkdev_get_by_dev+0x128/0x2e0
> [<0>] device_add_disk+0x377/0x3c0
> [<0>] nvme_mpath_set_live+0x130/0x1b0 [nvme_core]
> [<0>] nvme_mpath_add_disk+0x150/0x160 [nvme_core]
> [<0>] nvme_alloc_ns+0x417/0x950 [nvme_core]
> [<0>] nvme_validate_or_alloc_ns+0xe9/0x1e0 [nvme_core]
> [<0>] nvme_scan_work+0x168/0x310 [nvme_core]
> [<0>] process_one_work+0x231/0x420
> [<0>] worker_thread+0x2d/0x3f0
> [<0>] kthread+0x11a/0x140
> [<0>] ret_from_fork+0x22/0x30
> 
> My theory here is that the partition scanning code just calls into the 
> pagecache, which doesn't set a timeout for any I/O operation.
> As this is done under the scan_mutex, we cannot clear the active paths, 
> and consequently we hang.

But the controller removal should have cancelled all inflight
commands...
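
For reference, the dependency as I read it (a hand sketch; the lock is
ctrl->scan_lock in drivers/nvme/host/core.c):

  nvme_scan_work()                  nvme_do_delete_ctrl()
    mutex_lock(&ctrl->scan_lock)      nvme_remove_namespaces()
    nvme_alloc_ns()                     nvme_mpath_clear_ctrl_paths()
      device_add_disk()                   mutex_lock(&ctrl->scan_lock) <-- blocks
        efi_partition()
          read_part_sector() <-- I/O requeued, never completes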

Maybe we're missing unfreeze? Hannes, can you try this one?
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index e29c47114739..783fde36d2ba 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1974,8 +1974,11 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
         nvme_sync_io_queues(ctrl);
         nvme_tcp_stop_io_queues(ctrl);
         nvme_cancel_tagset(ctrl);
-       if (remove)
+       if (remove) {
                 nvme_start_queues(ctrl);
+               nvme_wait_freeze_timeout(ctrl, NVME_IO_TIMEOUT);
+               nvme_unfreeze(ctrl);
+       }
         nvme_tcp_destroy_io_queues(ctrl, remove);
  }
--
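
(The thinking: the failed reset left the queues frozen, so anything
stuck in blk_queue_enter, like the partition read above, never gets
going again. Restarting and unfreezing on removal should release it
so it can complete or fail over before we destroy the queues.)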

