From: Daniel Wagner <dwagner@suse.de>
To: linux-nvme@lists.infradead.org
Cc: linux-kernel@vger.kernel.org,
	James Smart <james.smart@broadcom.com>,
	Keith Busch <kbusch@kernel.org>, Ming Lei <ming.lei@redhat.com>,
	Sagi Grimberg <sagi@grimberg.me>, Hannes Reinecke <hare@suse.de>,
	Wen Xiong <wenxiong@us.ibm.com>
Subject: Re: [PATCH v3 0/6] Handle update hardware queues and queue freeze more carefully
Date: Fri, 30 Jul 2021 11:49:07 +0200
Message-ID: <20210730094907.5vg7qebggttibogz@beryllium.lan>
In-Reply-To: <20210726172704.j6cbv2qmox2cl2dz@beryllium.lan>

On Mon, Jul 26, 2021 at 07:27:04PM +0200, Daniel Wagner wrote:
> FTR, I've tested the 'prior_ioq_cnt != nr_io_queues' case. In this
> scenario the series works. Though in the case of 'prior_ioq_cnt ==
> nr_io_queues' I see hanging I/Os.
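
(For reference, the branch those two cases hit is in
nvme_fc_recreate_io_queues(); roughly, and abridged from
drivers/nvme/host/fc.c of this series' vintage:

	u32 prior_ioq_cnt = ctrl->ctrl.queue_count - 1;

	...

	if (prior_ioq_cnt != nr_io_queues) {
		dev_info(ctrl->ctrl.device,
			"reconnect: revising io queue count from %d to %d\n",
			prior_ioq_cnt, nr_io_queues);
		/* note: freezes and unfreezes all queues internally */
		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
	}

so only the != case touches the hardware queue count on reconnect.)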

Back to staring at this issue. The hanging I/Os happen in the path
below, after a remote port has been disabled:

 nvme nvme1: NVME-FC{1}: new ctrl: NQN "nqn.1992-08.com.netapp:sn.d646dc63336511e995cb00a0988fb732:subsystem.nvme-svm-dolin-ana_subsystem"
 nvme nvme1: NVME-FC{1}: controller connectivity lost. Awaiting Reconnect
 nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
 nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
 nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
 nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
 nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
 nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
 nvme nvme1: NVME-FC{1}: connectivity re-established. Attempting reconnect
 nvme nvme1: NVME-FC{1}: create association : host wwpn 0x100000109b579ef6  rport wwpn 0x201900a09890f5bf: NQN "nqn.1992-08.com.netapp:sn.d646dc63336511e995cb00a0988fb732:subsystem.nvme-svm-dolin-ana_subsystem"
 nvme nvme1: NVME-FC{1}: controller connect complete

and all hanging tasks have the same call trace:

 task:fio             state:D stack:    0 pid:13545 ppid: 13463 flags:0x00000000
 Call Trace:
  __schedule+0x2d7/0x8f0
  schedule+0x3c/0xa0
  blk_queue_enter+0x106/0x1f0
  ? wait_woken+0x80/0x80
  submit_bio_noacct+0x116/0x4b0
  ? submit_bio+0x4b/0x1a0
  submit_bio+0x4b/0x1a0
  __blkdev_direct_IO_simple+0x20c/0x350
  ? update_load_avg+0x1ac/0x5e0
  ? blkdev_iopoll+0x30/0x30
  ? blkdev_direct_IO+0x4a2/0x520
  blkdev_direct_IO+0x4a2/0x520
  ? update_load_avg+0x1ac/0x5e0
  ? update_load_avg+0x1ac/0x5e0
  ? generic_file_read_iter+0x84/0x140
  ? __blkdev_direct_IO_simple+0x350/0x350
  generic_file_read_iter+0x84/0x140
  blkdev_read_iter+0x41/0x50
  new_sync_read+0x118/0x1a0
  vfs_read+0x15a/0x180
  ksys_pread64+0x71/0x90
  do_syscall_64+0x3c/0x80
  entry_SYSCALL_64_after_hwframe+0x44/0xae

(gdb) l *blk_queue_enter+0x106
0xffffffff81473736 is in blk_queue_enter (block/blk-core.c:469).
464                      * queue dying flag, otherwise the following wait may
465                      * never return if the two reads are reordered.
466                      */
467                     smp_rmb();
468
469                     wait_event(q->mq_freeze_wq,
470                                (!q->mq_freeze_depth &&
471                                 blk_pm_resume_queue(pm, q)) ||
472                                blk_queue_dying(q));
473                     if (blk_queue_dying(q))

