From: swise@opengridcomputing.com (Steve Wise)
Subject: nvmf/rdma host crash during heavy load and keep alive recovery
Date: Thu, 15 Sep 2016 15:58:25 -0500	[thread overview]
Message-ID: <02c001d20f93$e6a88a60$b3f99f20$@opengridcomputing.com> (raw)
In-Reply-To: <020001d20f70$9998fde0$cccaf9a0$@opengridcomputing.com>

> > And I see that 2 sets of blk_mq_hw_ctx structs get assigned to the same 32
> > queues.  Here is the output for 1 target connect with 32 cores.  So is it
> > expected that the 32 nvme_rdma IO queues get assigned to 2 sets of hw_ctx
> > structs?  The 2nd set is getting initialized as part of namespace
> > scanning...
> 
> So here is the stack for the first time the nvme_rdma_queue structs are bound
> to an hctx:
> 
> [ 2006.826941]  [<ffffffffa066c452>] nvme_rdma_init_hctx+0x102/0x110 [nvme_rdma]
> [ 2006.835409]  [<ffffffff8133a52e>] blk_mq_init_hctx+0x21e/0x2e0
> [ 2006.842530]  [<ffffffff8133a6ea>] blk_mq_realloc_hw_ctxs+0xfa/0x240
> [ 2006.850097]  [<ffffffff8133b342>] blk_mq_init_allocated_queue+0x92/0x410
> [ 2006.858107]  [<ffffffff8132a969>] ? blk_alloc_queue_node+0x259/0x2c0
> [ 2006.865765]  [<ffffffff8133b6ff>] blk_mq_init_queue+0x3f/0x70
> [ 2006.872829]  [<ffffffffa066d9f9>] nvme_rdma_create_io_queues+0x189/0x210 [nvme_rdma]
> [ 2006.881917]  [<ffffffffa066e813>] ? nvme_rdma_configure_admin_queue+0x1e3/0x290 [nvme_rdma]
> [ 2006.891611]  [<ffffffffa066ec65>] nvme_rdma_create_ctrl+0x3a5/0x4c0 [nvme_rdma]
> [ 2006.900260]  [<ffffffffa0654d33>] ? nvmf_create_ctrl+0x33/0x210 [nvme_fabrics]
> [ 2006.908799]  [<ffffffffa0654e82>] nvmf_create_ctrl+0x182/0x210 [nvme_fabrics]
> [ 2006.917228]  [<ffffffffa0654fbc>] nvmf_dev_write+0xac/0x110 [nvme_fabrics]
> 

The above stack is creating hctx queues for the nvme_rdma_ctrl->ctrl.connect_q
request queue.

> And here is the 2nd time the same nvme_rdma_queue is bound to a different
> hctx:
> 
> [ 2007.263068]  [<ffffffffa066c40c>] nvme_rdma_init_hctx+0xbc/0x110 [nvme_rdma]
> [ 2007.271656]  [<ffffffff8133a52e>] blk_mq_init_hctx+0x21e/0x2e0
> [ 2007.279027]  [<ffffffff8133a6ea>] blk_mq_realloc_hw_ctxs+0xfa/0x240
> [ 2007.286829]  [<ffffffff8133b342>] blk_mq_init_allocated_queue+0x92/0x410
> [ 2007.295066]  [<ffffffff8132a969>] ? blk_alloc_queue_node+0x259/0x2c0
> [ 2007.302962]  [<ffffffff8135ce84>] ? ida_pre_get+0xb4/0xe0
> [ 2007.309894]  [<ffffffff8133b6ff>] blk_mq_init_queue+0x3f/0x70
> [ 2007.317164]  [<ffffffffa0272998>] nvme_alloc_ns+0x88/0x240 [nvme_core]
> [ 2007.325218]  [<ffffffffa02728bc>] ? nvme_find_get_ns+0x5c/0xb0 [nvme_core]
> [ 2007.333612]  [<ffffffffa0273059>] nvme_validate_ns+0x79/0x90 [nvme_core]
> [ 2007.341825]  [<ffffffffa0273166>] nvme_scan_ns_list+0xf6/0x1f0 [nvme_core]
> [ 2007.350214]  [<ffffffffa027338b>] nvme_scan_work+0x12b/0x140 [nvme_core]
> [ 2007.358427]  [<ffffffff810a1613>] process_one_work+0x183/0x4d0
>

This stack is creating hctx queues for the namespace created for this target
device.
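In other words, both request queues (connect_q and the namespace queue) are built on the same ctrl->tag_set, so blk_mq_init_hctx() runs once per hardware queue for *each* request queue, and both sets of hctxs end up bound to the same nvme_rdma_queue array. Roughly like this (a sketch paraphrasing nvme_rdma_init_hctx, not the exact upstream source):

```c
/* Sketch: every request_queue created from ctrl->tag_set calls this
 * once per hw queue, so connect_q and each namespace request_queue
 * each bind an hctx to the same nvme_rdma_queue struct. */
static int nvme_rdma_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
		unsigned int hctx_idx)
{
	struct nvme_rdma_ctrl *ctrl = data;
	/* queues[0] is the admin queue; IO queues start at index 1 */
	struct nvme_rdma_queue *queue = &ctrl->queues[hctx_idx + 1];

	hctx->driver_data = queue;
	return 0;
}
```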

Sagi,

Should nvme_rdma_error_recovery_work() be stopping the hctx queues for
ctrl->ctrl.connect_q too?

Something like:

@@ -781,6 +790,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
        if (ctrl->queue_count > 1)
                nvme_stop_queues(&ctrl->ctrl);
        blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+       blk_mq_stop_hw_queues(ctrl->ctrl.connect_q);

        /* We must take care of fastfail/requeue all our inflight requests */
        if (ctrl->queue_count > 1)

And then restart these after the nvme_rdma_queue rdma resources are reallocated?
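The restart counterpart would presumably land in the reconnect path once the queues are re-established; something like this untested sketch (placement within nvme_rdma_reconnect_ctrl_work is my guess, context lines omitted):

```diff
@@ static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
+	blk_mq_start_stopped_hw_queues(ctrl->ctrl.connect_q, true);
```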


Thread overview: 53+ messages
2016-07-29 21:40 nvmf/rdma host crash during heavy load and keep alive recovery Steve Wise
2016-08-01 11:06 ` Christoph Hellwig
2016-08-01 14:26   ` Steve Wise
2016-08-01 21:38     ` Steve Wise
     [not found]     ` <015801d1ec3d$0ca07ea0$25e17be0$@opengridcomputing.com>
2016-08-10 15:46       ` Steve Wise
     [not found]       ` <010f01d1f31e$50c8cb40$f25a61c0$@opengridcomputing.com>
2016-08-10 16:00         ` Steve Wise
     [not found]         ` <013701d1f320$57b185d0$07149170$@opengridcomputing.com>
2016-08-10 17:20           ` Steve Wise
2016-08-10 18:59             ` Steve Wise
2016-08-11  6:27               ` Sagi Grimberg
2016-08-11 13:58                 ` Steve Wise
2016-08-11 14:19                   ` Steve Wise
2016-08-11 14:40                   ` Steve Wise
2016-08-11 15:53                     ` Steve Wise
     [not found]                     ` <00fe01d1f3e8$8992b330$9cb81990$@opengridcomputing.com>
2016-08-15 14:39                       ` Steve Wise
2016-08-16  9:26                         ` Sagi Grimberg
2016-08-16 21:17                           ` Steve Wise
2016-08-17 18:57                             ` Sagi Grimberg
2016-08-17 19:07                               ` Steve Wise
2016-09-01 19:14                                 ` Steve Wise
2016-09-04  9:17                                   ` Sagi Grimberg
2016-09-07 21:08                                     ` Steve Wise
2016-09-08  7:45                                       ` Sagi Grimberg
2016-09-08 20:47                                         ` Steve Wise
2016-09-08 21:00                                         ` Steve Wise
     [not found]                                       ` <7f09e373-6316-26a3-ae81-dab1205d88ab@grimberg.me>
     [not found]                                           ` <021201d20a14$0f203b80$2d60b280$@opengridcomputing.com>
2016-09-08 21:21                                             ` Steve Wise
     [not found]                                           ` <021401d20a16$ed60d470$c8227d50$@opengridcomputing.com>
     [not found]                                             ` <021501d20a19$327ba5b0$9772f110$@opengridcomputing.com>
2016-09-08 21:37                                             ` Steve Wise
2016-09-09 15:50                                               ` Steve Wise
2016-09-12 20:10                                                 ` Steve Wise
     [not found]                                                   ` <da2e918b-0f18-e032-272d-368c6ec49c62@grimberg.me>
2016-09-15  9:53                                                   ` Sagi Grimberg
2016-09-15 14:44                                                     ` Steve Wise
2016-09-15 15:10                                                       ` Steve Wise
2016-09-15 15:53                                                         ` Steve Wise
2016-09-15 16:45                                                           ` Steve Wise
2016-09-15 20:58                                                             ` Steve Wise [this message]
2016-09-16 11:04                                                               ` 'Christoph Hellwig'
2016-09-18 17:02                                                                 ` Sagi Grimberg
2016-09-19 15:38                                                                   ` Steve Wise
2016-09-21 21:20                                                                     ` Steve Wise
2016-09-23 23:57                                                                       ` Sagi Grimberg
2016-09-26 15:12                                                                         ` 'Christoph Hellwig'
2016-09-26 22:29                                                                           ` 'Christoph Hellwig'
2016-09-27 15:11                                                                             ` Steve Wise
2016-09-27 15:31                                                                               ` Steve Wise
2016-09-27 14:07                                                                         ` Steve Wise
2016-09-15 14:00                                                   ` Gabriel Krisman Bertazi
2016-09-15 14:31                                                     ` Steve Wise
2016-09-07 21:33                                     ` Steve Wise
2016-09-08  8:22                                       ` Sagi Grimberg
2016-09-08 17:19                                         ` Steve Wise
2016-09-09 15:57                                           ` Steve Wise
     [not found]                                       ` <9fd1f090-3b86-b496-d8c0-225ac0815fbe@grimberg.me>
     [not found]                                           ` <01bc01d209f5$1b7d7510$52785f30$@opengridcomputing.com>
2016-09-08 19:15                                             ` Steve Wise
     [not found]                                           ` <01f201d20a05$6abde5f0$4039b1d0$@opengridcomputing.com>
2016-09-08 19:26                                             ` Steve Wise
     [not found]                                             ` <01f401d20a06$d4cc8360$7e658a20$@opengridcomputing.com>
2016-09-08 20:44                                               ` Steve Wise
