From: Sagi Grimberg <sagi@grimberg.me>
To: Christoph Hellwig <hch@lst.de>, Keith Busch <kbusch@kernel.org>,
	Jens Axboe <axboe@kernel.dk>
Cc: Chao Leng <lengchao@huawei.com>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [PATCH 2/2] nvme-multipath: don't block on blk_queue_enter of the underlying device
Date: Mon, 22 Mar 2021 19:57:27 -0700	[thread overview]
Message-ID: <34e574dc-5e80-4afe-b858-71e6ff5014d6@grimberg.me> (raw)
In-Reply-To: <20210322073726.788347-3-hch@lst.de>


> When we reset/teardown a controller, we must freeze and quiesce the
> namespaces' request queues to make sure that we safely stop inflight I/O
> submissions. Freeze is mandatory because if our hctx map changed between
> reconnects, blk_mq_update_nr_hw_queues will immediately attempt to freeze
> the queue, and if it still has pending submissions (that are still
> quiesced) it will hang.
> 
> However, by freezing the namespaces' request queues, and only unfreezing
> them when we successfully reconnect, inflight submissions that are
> running concurrently can now block while holding the nshead srcu until
> either we successfully reconnect or ctrl_loss_tmo expires (or the user
> explicitly disconnects).
> 
> This caused a deadlock when a different controller (different path on the
> same subsystem) became live (i.e. optimized/non-optimized). This is
> because nvme_mpath_set_live needs to synchronize the nshead srcu before
> requeueing I/O in order to make sure that current_path is visible to
> future (re-)submissions. However, the srcu lock is held by a blocked
> submission on a frozen request queue, and we have a deadlock.
> 
> In order to fix this, use the blk_mq_submit_bio_direct API to submit the
> bio to the low-level driver, which does not block on the queue freeze
> but instead allows nvme-multipath to pick another path or queue up the
> bio.
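
For context, as I read patch 1/2, the new helper returns false when it
cannot enter the queue, so the mpath submit path becomes roughly the
below (an abridged sketch of the idea, not the literal diff):
--
	/* inside nvme_ns_head_submit_bio(), simplified */
	srcu_idx = srcu_read_lock(&head->srcu);
	ns = nvme_find_path(head);
	if (likely(ns)) {
		bio_set_dev(bio, ns->disk->part0);
		if (!blk_mq_submit_bio_direct(bio, &ret)) {
			/*
			 * Could not enter the queue (e.g. frozen):
			 * requeue on the mpath device instead of
			 * sleeping in blk_queue_enter().
			 */
			spin_lock_irq(&head->requeue_lock);
			bio_list_add(&head->requeue_list, bio);
			spin_unlock_irq(&head->requeue_lock);
		}
	}
	srcu_read_unlock(&head->srcu, srcu_idx);
--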

Almost...

This still has the same issue, but instead of blocking on
blk_queue_enter() the submission is now blocked in blk_mq_get_tag():
--
  __schedule+0x22b/0x6e0
  schedule+0x46/0xb0
  io_schedule+0x42/0x70
  blk_mq_get_tag+0x11d/0x270
  ? blk_bio_segment_split+0x235/0x2a0
  ? finish_wait+0x80/0x80
  __blk_mq_alloc_request+0x65/0xe0
  blk_mq_submit_bio+0x144/0x500
  blk_mq_submit_bio_direct+0x78/0xa0
  nvme_ns_head_submit_bio+0xc3/0x2f0 [nvme_core]
  __submit_bio_noacct+0xcf/0x2e0
  __blkdev_direct_IO+0x413/0x440
  ? __io_complete_rw.constprop.0+0x150/0x150
  generic_file_read_iter+0x92/0x160
  io_iter_do_read+0x1a/0x40
  io_read+0xc5/0x350
  ? common_interrupt+0x14/0xa0
  ? update_load_avg+0x7a/0x5e0
  io_issue_sqe+0xa28/0x1020
  ? lock_timer_base+0x61/0x80
  io_wq_submit_work+0xaa/0x120
  io_worker_handle_work+0x121/0x330
  io_wqe_worker+0xb6/0x190
  ? io_worker_handle_work+0x330/0x330
  ret_from_fork+0x22/0x30
--

and nvme_mpath_set_live is correspondingly stuck waiting for the nshead
SRCU grace period:
--
  ? usleep_range+0x80/0x80
  __schedule+0x22b/0x6e0
  ? usleep_range+0x80/0x80
  schedule+0x46/0xb0
  schedule_timeout+0xff/0x140
  ? del_timer_sync+0x67/0xb0
  ? __prepare_to_swait+0x4b/0x70
  __wait_for_common+0xb3/0x160
  __synchronize_srcu.part.0+0x75/0xe0
  ? __bpf_trace_rcu_utilization+0x10/0x10
  nvme_mpath_set_live+0x61/0x130 [nvme_core]
  nvme_update_ana_state+0xd7/0x100 [nvme_core]
  nvme_parse_ana_log+0xa5/0x160 [nvme_core]
  ? nvme_mpath_set_live+0x130/0x130 [nvme_core]
  nvme_read_ana_log+0x7b/0xe0 [nvme_core]
  process_one_work+0x1e6/0x380
  worker_thread+0x49/0x300
--
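
To spell out the pairing: the submitter sleeps in blk_mq_get_tag() while
holding the nshead SRCU read lock, and the ANA work waits for exactly
that grace period (simplified, not the exact source):
--
	/* submit side */
	srcu_idx = srcu_read_lock(&head->srcu);
	ns = nvme_find_path(head);
	blk_mq_submit_bio_direct(bio, &ret); /* sleeps in blk_mq_get_tag() */
	srcu_read_unlock(&head->srcu, srcu_idx);

	/* ANA side, from nvme_mpath_set_live */
	synchronize_srcu(&head->srcu);       /* waits for the reader above */
	kblockd_schedule_work(&head->requeue_work);
--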



If I were to always start the queues in nvme_tcp_teardown_io_queues
right after cancelling the tagset's inflight requests, like:
--
@@ -1934,8 +1934,7 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
         nvme_sync_io_queues(ctrl);
         nvme_tcp_stop_io_queues(ctrl);
         nvme_cancel_tagset(ctrl);
-       if (remove)
-               nvme_start_queues(ctrl);
+       nvme_start_queues(ctrl);
         nvme_tcp_destroy_io_queues(ctrl, remove);
--

then a simple reset during traffic bricks the host in an infinite loop,
because in the setup sequence we freeze the queue in
nvme_update_ns_info. The queue is frozen, but we still have an available
path (because the controller is back to live!), so nvme-mpath keeps
calling blk_mq_submit_bio_direct and keeps failing, and
nvme_update_ns_info can never complete its freeze of the queue...
-> deadlock.
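
I.e. the requeue work just spins (illustrative, not real code):
--
	ns = nvme_find_path(head);      /* succeeds: the controller is live */
	if (!blk_mq_submit_bio_direct(bio, &ret)) {
		/* fails: the queue is frozen by nvme_update_ns_info */
		bio_list_add(&head->requeue_list, bio);
		kblockd_schedule_work(&head->requeue_work); /* and repeat */
	}
--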

So this is obviously incorrect.

Also, if we make nvme-mpath submit bios with REQ_NOWAIT, we will
basically fail as soon as we run out of tags, even on the normal path...
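
E.g., something along the lines of:
--
	bio->bi_opf |= REQ_NOWAIT;
	/*
	 * Tag exhaustion now makes the allocation bail out and the bio
	 * completes with BLK_STS_AGAIN -- even on a perfectly healthy
	 * path where a tag would have been free a moment later.
	 */
--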

So I'm not exactly sure what we should do to fix this...


