linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] Handle update hardware queues and queue freeze more carefully
@ 2021-06-25 10:16 Daniel Wagner
  2021-06-25 10:16 ` [PATCH 1/2] nvme-fc: Update hardware queues before using them Daniel Wagner
                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Daniel Wagner @ 2021-06-25 10:16 UTC (permalink / raw)
  To: linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg, Daniel Wagner

Hi,

this is a followup on the crash I reported in

  https://lore.kernel.org/linux-block/20210608183339.70609-1-dwagner@suse.de/

By moving the hardware queue update up, the crash was gone.
Unfortunately, I don't understand why this fixes the crash. The
per-cpu access is crashing but I can't see why
blk_mq_update_nr_hw_queues() is fixing this problem.

Even though I can't explain why it fixes it, I think it makes sense to
update the hardware queue mapping before we recreate the IO
queues. Thus I avoided saying in the commit message that it fixes
something.

Also during testing I observed that we hang indefinitely in
blk_mq_freeze_queue_wait(). Again, I can't explain why we get stuck
there, but given that a common pattern for nvme_wait_freeze() is to
use it with a timeout, I think the timeout should be used here too :)

Anyway, hopefully someone with more understanding of the stack can
explain the problems.

Thanks,
Daniel


Daniel Wagner (2):
  nvme-fc: Update hardware queues before using them
  nvme-fc: Wait with a timeout for queue to freeze

 drivers/nvme/host/fc.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/2] nvme-fc: Update hardware queues before using them
  2021-06-25 10:16 [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
@ 2021-06-25 10:16 ` Daniel Wagner
  2021-06-27 13:47   ` James Smart
                     ` (2 more replies)
  2021-06-25 10:16 ` [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
  2021-06-25 12:21 ` [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
  2 siblings, 3 replies; 20+ messages in thread
From: Daniel Wagner @ 2021-06-25 10:16 UTC (permalink / raw)
  To: linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg, Daniel Wagner

In case the number of hardware queues changes, update the tagset
and the ctx-to-hctx mapping first, before using the mapping to
recreate and connect the IO queues.

Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/nvme/host/fc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 8a3c4814d21b..a9645cd89eca 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2951,14 +2951,6 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
 	if (ctrl->ctrl.queue_count == 1)
 		return 0;
 
-	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
-	if (ret)
-		goto out_free_io_queues;
-
-	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
-	if (ret)
-		goto out_delete_hw_queues;
-
 	if (prior_ioq_cnt != nr_io_queues) {
 		dev_info(ctrl->ctrl.device,
 			"reconnect: revising io queue count from %d to %d\n",
@@ -2968,6 +2960,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
 		nvme_unfreeze(&ctrl->ctrl);
 	}
 
+	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
+	if (ret)
+		goto out_free_io_queues;
+
+	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
+	if (ret)
+		goto out_delete_hw_queues;
+
 	return 0;
 
 out_delete_hw_queues:
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-06-25 10:16 [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
  2021-06-25 10:16 ` [PATCH 1/2] nvme-fc: Update hardware queues before using them Daniel Wagner
@ 2021-06-25 10:16 ` Daniel Wagner
  2021-06-27 14:04   ` James Smart
                     ` (2 more replies)
  2021-06-25 12:21 ` [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
  2 siblings, 3 replies; 20+ messages in thread
From: Daniel Wagner @ 2021-06-25 10:16 UTC (permalink / raw)
  To: linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg, Daniel Wagner

Do not wait indefinitely for all queues to freeze. Instead use a
timeout and abort the operation if we get stuck.

Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
 drivers/nvme/host/fc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index a9645cd89eca..d8db85aa5417 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2955,7 +2955,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
 		dev_info(ctrl->ctrl.device,
 			"reconnect: revising io queue count from %d to %d\n",
 			prior_ioq_cnt, nr_io_queues);
-		nvme_wait_freeze(&ctrl->ctrl);
+		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
+			/*
+			 * If we timed out waiting for freeze we are likely to
+			 * be stuck.  Fail the controller initialization just
+			 * to be safe.
+			 */
+			return -ENODEV;
+		}
 		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
 		nvme_unfreeze(&ctrl->ctrl);
 	}
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/2] Handle update hardware queues and queue freeze more carefully
  2021-06-25 10:16 [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
  2021-06-25 10:16 ` [PATCH 1/2] nvme-fc: Update hardware queues before using them Daniel Wagner
  2021-06-25 10:16 ` [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
@ 2021-06-25 12:21 ` Daniel Wagner
  2021-06-25 13:00   ` Ming Lei
  2 siblings, 1 reply; 20+ messages in thread
From: Daniel Wagner @ 2021-06-25 12:21 UTC (permalink / raw)
  To: linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg

On Fri, Jun 25, 2021 at 12:16:47PM +0200, Daniel Wagner wrote:
> this is a followup on the crash I reported in
> 
>   https://lore.kernel.org/linux-block/20210608183339.70609-1-dwagner@suse.de/
> 
> By moving the hardware queue update up, the crash was gone.
> Unfortunately, I don't understand why this fixes the crash. The
> per-cpu access is crashing but I can't see why
> blk_mq_update_nr_hw_queues() is fixing this problem.
> 
> Even though I can't explain why it fixes it, I think it makes sense to
> update the hardware queue mapping before we recreate the IO
> queues. Thus I avoided saying in the commit message that it fixes
> something.

I just discussed this with Hannes and we figured out how the crash is
fixed by moving blk_mq_update_nr_hw_queues() before
nvme_fc_create_hw_io_queues()/nvme_fc_connect_io_queues().

First of all, blk_mq_update_nr_hw_queues() operates on the normal
tag_set and not the admin_tag_set. That means when we move
blk_mq_update_nr_hw_queues() before nvme_fc_connect_io_queues(), we
update the mapping so that only the CPUs and hctxs which are available
are used. When we then do the connect call nvmf_connect_io_queue(), we
will only allocate tags from queues which are no longer in the
BLK_MQ_S_INACTIVE state. Hence we skip the blk_mq_put_tag() call.
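
For reference, with patch 1 applied the relevant part of
nvme_fc_recreate_io_queues() ends up looking like this (assembled from
the diff in patch 1 plus its surrounding context, so the new ordering
is easier to see):

        if (prior_ioq_cnt != nr_io_queues) {
                dev_info(ctrl->ctrl.device,
                        "reconnect: revising io queue count from %d to %d\n",
                        prior_ioq_cnt, nr_io_queues);
                nvme_wait_freeze(&ctrl->ctrl);
                blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
                nvme_unfreeze(&ctrl->ctrl);
        }

        /* only now recreate and connect the IO queues */
        ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
        if (ret)
                goto out_free_io_queues;

        ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
        if (ret)
                goto out_delete_hw_queues;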

> Also during testing I observed that we hang indefinitely in
> blk_mq_freeze_queue_wait(). Again, I can't explain why we get stuck
> there, but given that a common pattern for nvme_wait_freeze() is to
> use it with a timeout, I think the timeout should be used here too :)

The nvme_wait_freeze() is probably not needed at all:
__blk_mq_update_nr_hw_queues() already calls blk_mq_freeze_queue()
itself. Furthermore, if we move blk_mq_update_nr_hw_queues() in front
of nvme_fc_create_hw_io_queues(), there can't be any pending I/Os
because there are no queues yet.
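
For the record, the relevant shape of __blk_mq_update_nr_hw_queues() in
block/blk-mq.c is roughly the following (a simplified sketch from
memory, not a verbatim copy):

        /* every queue sharing the tag set gets frozen for the remap */
        list_for_each_entry(q, &set->tag_list, tag_set_list)
                blk_mq_freeze_queue(q);

        /* ... update set->nr_hw_queues, reallocate hctxs, remap ctxs ... */

        list_for_each_entry(q, &set->tag_list, tag_set_list)
                blk_mq_unfreeze_queue(q);

So all request queues of the tag set are frozen for the duration of the
remap anyway.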

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/2] Handle update hardware queues and queue freeze more carefully
  2021-06-25 12:21 ` [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
@ 2021-06-25 13:00   ` Ming Lei
  2021-06-29  1:31     ` Ming Lei
  0 siblings, 1 reply; 20+ messages in thread
From: Ming Lei @ 2021-06-25 13:00 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Fri, Jun 25, 2021 at 02:21:56PM +0200, Daniel Wagner wrote:
> On Fri, Jun 25, 2021 at 12:16:47PM +0200, Daniel Wagner wrote:
> > this is a followup on the crash I reported in
> > 
> >   https://lore.kernel.org/linux-block/20210608183339.70609-1-dwagner@suse.de/
> > 
> > By moving the hardware queue update up, the crash was gone.
> > Unfortunately, I don't understand why this fixes the crash. The
> > per-cpu access is crashing but I can't see why
> > blk_mq_update_nr_hw_queues() is fixing this problem.
> > 
> > Even though I can't explain why it fixes it, I think it makes sense to
> > update the hardware queue mapping before we recreate the IO
> > queues. Thus I avoided saying in the commit message that it fixes
> > something.
> 
> I just discussed this with Hannes and we figured out how the crash is
> fixed by moving blk_mq_update_nr_hw_queues() before
> nvme_fc_create_hw_io_queues()/nvme_fc_connect_io_queues().
> 
> First of all, blk_mq_update_nr_hw_queues() operates on the normal
> tag_set and not the admin_tag_set. That means when we move
> blk_mq_update_nr_hw_queues() before nvme_fc_connect_io_queues(), we
> update the mapping so that only the CPUs and hctxs which are available
> are used. When we then do the connect call nvmf_connect_io_queue(), we
> will only allocate tags from queues which are no longer in the
> BLK_MQ_S_INACTIVE state. Hence we skip the blk_mq_put_tag() call.

Your patch just reduces the race window. What if all cpus in
hctx->cpumask become offline when calling blk_mq_alloc_request_hctx()?
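
For context, the ctx selection in blk_mq_alloc_request_hctx() looks
roughly like this (paraphrased from block/blk-mq.c, not a verbatim
copy):

        data.hctx = q->queue_hw_ctx[hctx_idx];
        if (!blk_mq_hw_queue_mapped(data.hctx))
                goto out_queue_exit;
        cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
        data.ctx = __blk_mq_get_ctx(q, cpu);

If every CPU in hctx->cpumask is offline at this point,
cpumask_first_and() returns nr_cpu_ids and the per-cpu ctx lookup goes
out of bounds, which would match the per-cpu crash from the original
report.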


Thanks,
Ming


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/2] nvme-fc: Update hardware queues before using them
  2021-06-25 10:16 ` [PATCH 1/2] nvme-fc: Update hardware queues before using them Daniel Wagner
@ 2021-06-27 13:47   ` James Smart
  2021-06-29  1:32   ` Ming Lei
  2021-06-29 12:31   ` Hannes Reinecke
  2 siblings, 0 replies; 20+ messages in thread
From: James Smart @ 2021-06-27 13:47 UTC (permalink / raw)
  To: Daniel Wagner, linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg

On 6/25/2021 3:16 AM, Daniel Wagner wrote:
> In case the number of hardware queues changes, update the tagset
> and the ctx-to-hctx mapping first, before using the mapping to
> recreate and connect the IO queues.
> 
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>   drivers/nvme/host/fc.c | 16 ++++++++--------
>   1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 8a3c4814d21b..a9645cd89eca 100644

Makes sense. Thanks. Although it does bring up that perhaps, if the hw
queue count changes so that it no longer matches what was set on the
target, the new value should be set on the target to release the
resources held there.

Note: the same behavior exists in the other transports as we all
started from the same lineage, so those should be updated as well.
Granted, you'll need to break out the queue count setting and checking,
which was done on fc but not on the other transports.

Reviewed-by: James Smart <jsmart2021@gmail.com>

-- james


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-06-25 10:16 ` [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
@ 2021-06-27 14:04   ` James Smart
  2021-06-29  1:39   ` Ming Lei
  2021-06-29 12:31   ` Hannes Reinecke
  2 siblings, 0 replies; 20+ messages in thread
From: James Smart @ 2021-06-27 14:04 UTC (permalink / raw)
  To: Daniel Wagner, linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg

On 6/25/2021 3:16 AM, Daniel Wagner wrote:
> Do not wait indefinitely for all queues to freeze. Instead use a
> timeout and abort the operation if we get stuck.
> 
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>   drivers/nvme/host/fc.c | 9 ++++++++-
>   1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index a9645cd89eca..d8db85aa5417 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2955,7 +2955,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
>   		dev_info(ctrl->ctrl.device,
>   			"reconnect: revising io queue count from %d to %d\n",
>   			prior_ioq_cnt, nr_io_queues);
> -		nvme_wait_freeze(&ctrl->ctrl);
> +		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
> +			/*
> +			 * If we timed out waiting for freeze we are likely to
> +			 * be stuck.  Fail the controller initialization just
> +			 * to be safe.
> +			 */
> +			return -ENODEV;
> +		}
>   		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
>   		nvme_unfreeze(&ctrl->ctrl);
>   	}
> 

Looks fine. This is one of those things that changed in the other
transports, but fc wasn't part of that patch set.
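
For comparison, the reconnect path in the other transports does roughly
the following (a sketch from memory of nvme_tcp_configure_io_queues(),
simplified and not verbatim), so the patch brings fc in line with that
pattern:

        if (!new) {
                nvme_start_queues(ctrl);
                if (!nvme_wait_freeze_timeout(ctrl, NVME_IO_TIMEOUT)) {
                        /* freeze never completed, bail out */
                        ret = -ENODEV;
                        goto out_wait_freeze_timed_out;
                }
                blk_mq_update_nr_hw_queues(ctrl->tagset,
                                           ctrl->queue_count - 1);
                nvme_unfreeze(ctrl);
        }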

Reviewed-by: James Smart <jsmart2021@gmail.com>

-- james


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/2] Handle update hardware queues and queue freeze more carefully
  2021-06-25 13:00   ` Ming Lei
@ 2021-06-29  1:31     ` Ming Lei
  0 siblings, 0 replies; 20+ messages in thread
From: Ming Lei @ 2021-06-29  1:31 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Fri, Jun 25, 2021 at 9:00 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Fri, Jun 25, 2021 at 02:21:56PM +0200, Daniel Wagner wrote:
> > On Fri, Jun 25, 2021 at 12:16:47PM +0200, Daniel Wagner wrote:
> > > this is a followup on the crash I reported in
> > >
> > >   https://lore.kernel.org/linux-block/20210608183339.70609-1-dwagner@suse.de/
> > >
> > > By moving the hardware queue update up, the crash was gone.
> > > Unfortunately, I don't understand why this fixes the crash. The
> > > per-cpu access is crashing but I can't see why
> > > blk_mq_update_nr_hw_queues() is fixing this problem.
> > >
> > > Even though I can't explain why it fixes it, I think it makes sense to
> > > update the hardware queue mapping before we recreate the IO
> > > queues. Thus I avoided saying in the commit message that it fixes
> > > something.
> >
> > I just discussed this with Hannes and we figured out how the crash is
> > fixed by moving blk_mq_update_nr_hw_queues() before
> > nvme_fc_create_hw_io_queues()/nvme_fc_connect_io_queues().
> >
> > First of all, blk_mq_update_nr_hw_queues() operates on the normal
> > tag_set and not the admin_tag_set. That means when we move
> > blk_mq_update_nr_hw_queues() before nvme_fc_connect_io_queues(), we
> > update the mapping so that only the CPUs and hctxs which are available
> > are used. When we then do the connect call nvmf_connect_io_queue(), we
> > will only allocate tags from queues which are no longer in the
> > BLK_MQ_S_INACTIVE state. Hence we skip the blk_mq_put_tag() call.
>
> Your patch just reduces the race window. What if all cpus in
> hctx->cpumask become offline when calling blk_mq_alloc_request_hctx()?

Connecting the io queues after updating nr_hw_queues causes the correct
hctx_idx to be passed to blk_mq_alloc_request_hctx(), so the patch
looks good.

Yeah, there are still other issues not covered during cpu hotplug, but
those are different from this one.

Thanks,


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/2] nvme-fc: Update hardware queues before using them
  2021-06-25 10:16 ` [PATCH 1/2] nvme-fc: Update hardware queues before using them Daniel Wagner
  2021-06-27 13:47   ` James Smart
@ 2021-06-29  1:32   ` Ming Lei
  2021-06-29 12:31   ` Hannes Reinecke
  2 siblings, 0 replies; 20+ messages in thread
From: Ming Lei @ 2021-06-29  1:32 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Fri, Jun 25, 2021 at 12:16:48PM +0200, Daniel Wagner wrote:
> In case the number of hardware queues changes, update the tagset
> and the ctx-to-hctx mapping first, before using the mapping to
> recreate and connect the IO queues.
> 
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>  drivers/nvme/host/fc.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 8a3c4814d21b..a9645cd89eca 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2951,14 +2951,6 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
>  	if (ctrl->ctrl.queue_count == 1)
>  		return 0;
>  
> -	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> -	if (ret)
> -		goto out_free_io_queues;
> -
> -	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> -	if (ret)
> -		goto out_delete_hw_queues;
> -
>  	if (prior_ioq_cnt != nr_io_queues) {
>  		dev_info(ctrl->ctrl.device,
>  			"reconnect: revising io queue count from %d to %d\n",
> @@ -2968,6 +2960,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
>  		nvme_unfreeze(&ctrl->ctrl);
>  	}
>  
> +	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> +	if (ret)
> +		goto out_free_io_queues;
> +
> +	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> +	if (ret)
> +		goto out_delete_hw_queues;
> +
>  	return 0;
>  
>  out_delete_hw_queues:
> -- 
> 2.29.2
> 

This way the correct hctx_idx is passed to blk_mq_alloc_request_hctx(), so:

Reviewed-by: Ming Lei <ming.lei@redhat.com>


Thanks,
Ming


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-06-25 10:16 ` [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
  2021-06-27 14:04   ` James Smart
@ 2021-06-29  1:39   ` Ming Lei
  2021-06-29  7:48     ` Daniel Wagner
  2021-07-05 16:34     ` Daniel Wagner
  2021-06-29 12:31   ` Hannes Reinecke
  2 siblings, 2 replies; 20+ messages in thread
From: Ming Lei @ 2021-06-29  1:39 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Fri, Jun 25, 2021 at 12:16:49PM +0200, Daniel Wagner wrote:
> Do not wait indefinitely for all queues to freeze. Instead use a
> timeout and abort the operation if we get stuck.
> 
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>  drivers/nvme/host/fc.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index a9645cd89eca..d8db85aa5417 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2955,7 +2955,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
>  		dev_info(ctrl->ctrl.device,
>  			"reconnect: revising io queue count from %d to %d\n",
>  			prior_ioq_cnt, nr_io_queues);
> -		nvme_wait_freeze(&ctrl->ctrl);
> +		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
> +			/*
> +			 * If we timed out waiting for freeze we are likely to
> +			 * be stuck.  Fail the controller initialization just
> +			 * to be safe.
> +			 */
> +			return -ENODEV;
> +		}
>  		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
>  		nvme_unfreeze(&ctrl->ctrl);

Can you investigate a bit on why there is the hang? FC shouldn't use
managed IRQ, so the interrupt won't be shutdown.

blk-mq debugfs may help to dump the requests after the hang is triggered,
or you still can add debug code in nvme_wait_freeze_timeout() to dump
all requests if blk-mq debugfs doesn't work.


Thanks,
Ming


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-06-29  1:39   ` Ming Lei
@ 2021-06-29  7:48     ` Daniel Wagner
  2021-07-05 16:34     ` Daniel Wagner
  1 sibling, 0 replies; 20+ messages in thread
From: Daniel Wagner @ 2021-06-29  7:48 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Tue, Jun 29, 2021 at 09:39:30AM +0800, Ming Lei wrote:
> On Fri, Jun 25, 2021 at 12:16:49PM +0200, Daniel Wagner wrote:
> > Do not wait indefinitely for all queues to freeze. Instead use a
> > timeout and abort the operation if we get stuck.
> > 
> > Signed-off-by: Daniel Wagner <dwagner@suse.de>
> > ---
> >  drivers/nvme/host/fc.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> > index a9645cd89eca..d8db85aa5417 100644
> > --- a/drivers/nvme/host/fc.c
> > +++ b/drivers/nvme/host/fc.c
> > @@ -2955,7 +2955,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
> >  		dev_info(ctrl->ctrl.device,
> >  			"reconnect: revising io queue count from %d to %d\n",
> >  			prior_ioq_cnt, nr_io_queues);
> > -		nvme_wait_freeze(&ctrl->ctrl);
> > +		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
> > +			/*
> > +			 * If we timed out waiting for freeze we are likely to
> > +			 * be stuck.  Fail the controller initialization just
> > +			 * to be safe.
> > +			 */
> > +			return -ENODEV;
> > +		}
> >  		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
> >  		nvme_unfreeze(&ctrl->ctrl);
> 
> Can you investigate a bit on why there is the hang? FC shouldn't use
> managed IRQ, so the interrupt won't be shutdown.
> 
> blk-mq debugfs may help to dump the requests after the hang is triggered,
> or you still can add debug code in nvme_wait_freeze_timeout() to dump
> all requests if blk-mq debugfs doesn't work.

Sure thing, I'll try to find out why it hangs. The good thing is that I
was able to reliably reproduce it. So let's see.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/2] nvme-fc: Update hardware queues before using them
  2021-06-25 10:16 ` [PATCH 1/2] nvme-fc: Update hardware queues before using them Daniel Wagner
  2021-06-27 13:47   ` James Smart
  2021-06-29  1:32   ` Ming Lei
@ 2021-06-29 12:31   ` Hannes Reinecke
  2 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2021-06-29 12:31 UTC (permalink / raw)
  To: Daniel Wagner, linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg

On 6/25/21 12:16 PM, Daniel Wagner wrote:
> In case the number of hardware queues changes, update the tagset
> and the ctx-to-hctx mapping first, before using the mapping to
> recreate and connect the IO queues.
> 
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>   drivers/nvme/host/fc.c | 16 ++++++++--------
>   1 file changed, 8 insertions(+), 8 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-06-25 10:16 ` [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
  2021-06-27 14:04   ` James Smart
  2021-06-29  1:39   ` Ming Lei
@ 2021-06-29 12:31   ` Hannes Reinecke
  2 siblings, 0 replies; 20+ messages in thread
From: Hannes Reinecke @ 2021-06-29 12:31 UTC (permalink / raw)
  To: Daniel Wagner, linux-nvme
  Cc: linux-kernel, James Smart, Keith Busch, Jens Axboe, Ming Lei,
	Sagi Grimberg

On 6/25/21 12:16 PM, Daniel Wagner wrote:
> Do not wait indefinitely for all queues to freeze. Instead use a
> timeout and abort the operation if we get stuck.
> 
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>   drivers/nvme/host/fc.c | 9 ++++++++-
>   1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index a9645cd89eca..d8db85aa5417 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2955,7 +2955,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
>   		dev_info(ctrl->ctrl.device,
>   			"reconnect: revising io queue count from %d to %d\n",
>   			prior_ioq_cnt, nr_io_queues);
> -		nvme_wait_freeze(&ctrl->ctrl);
> +		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
> +			/*
> +			 * If we timed out waiting for freeze we are likely to
> +			 * be stuck.  Fail the controller initialization just
> +			 * to be safe.
> +			 */
> +			return -ENODEV;
> +		}
>   		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
>   		nvme_unfreeze(&ctrl->ctrl);
>   	}
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-06-29  1:39   ` Ming Lei
  2021-06-29  7:48     ` Daniel Wagner
@ 2021-07-05 16:34     ` Daniel Wagner
  2021-07-06  7:29       ` Ming Lei
  1 sibling, 1 reply; 20+ messages in thread
From: Daniel Wagner @ 2021-07-05 16:34 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Tue, Jun 29, 2021 at 09:39:30AM +0800, Ming Lei wrote:
> Can you investigate a bit on why there is the hang? FC shouldn't use
> managed IRQ, so the interrupt won't be shutdown.

So far, I was not able to figure out why this hangs. In my test setup I
don't have to do any I/O; I just toggle the remote port.

  grep busy /sys/kernel/debug/block/*/hctx*/tags | grep -v busy=0

and this seems to confirm that no I/O is in flight.

So I started to look at the q_usage_counter. The obvious observation is
that the counter is not 0. The least significant bit is set, thus we are
in atomic mode.

(gdb) p/x *((struct request_queue*)0xffff8ac992fbef20)->q_usage_counter->data
$10 = {
  count = {
    counter = 0x8000000000000001
  }, 
  release = 0xffffffffa02e78b0, 
  confirm_switch = 0x0, 
  force_atomic = 0x0, 
  allow_reinit = 0x1, 
  rcu = {
    next = 0x0, 
    func = 0x0
  }, 
  ref = 0xffff8ac992fbef30
}

I am a bit confused about the percpu-refcount API. My naive
interpretation is that when we are in atomic mode, percpu_ref_is_zero()
can't be used. But this seems rather strange. I must be missing
something.
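
For reference, blk_mq_freeze_queue_wait() essentially boils down to
(block/blk-mq.c):

        wait_event(q->mq_freeze_wq,
                   percpu_ref_is_zero(&q->q_usage_counter));

so the wait can only finish once q_usage_counter really drops to zero,
i.e. once whoever holds the reference(s) reflected in the count above
drops them.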


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-07-05 16:34     ` Daniel Wagner
@ 2021-07-06  7:29       ` Ming Lei
  2021-07-06  8:10         ` Daniel Wagner
  0 siblings, 1 reply; 20+ messages in thread
From: Ming Lei @ 2021-07-06  7:29 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Mon, Jul 05, 2021 at 06:34:00PM +0200, Daniel Wagner wrote:
> On Tue, Jun 29, 2021 at 09:39:30AM +0800, Ming Lei wrote:
> > Can you investigate a bit on why there is the hang? FC shouldn't use
> > managed IRQ, so the interrupt won't be shutdown.
> 
> So far, I was not able to figure out why this hangs. In my test setup I
> don't have to do any I/O; I just toggle the remote port.
> 
>   grep busy /sys/kernel/debug/block/*/hctx*/tags | grep -v busy=0
> 
> and this seems to confirm that no I/O is in flight.

What is the output of the following command after the hang is triggered?

(cd /sys/kernel/debug/block/nvme0n1 && find . -type f -exec grep -aH . {} \;)

Suppose the hang disk is nvme0n1.

> 
> So I started to look at the q_usage_counter. The obvious observation
> is that the counter is not 0. The least significant bit is set, thus
> we are in atomic mode.
> 
> (gdb) p/x *((struct request_queue*)0xffff8ac992fbef20)->q_usage_counter->data
> $10 = {
>   count = {
>     counter = 0x8000000000000001
>   }, 
>   release = 0xffffffffa02e78b0, 
>   confirm_switch = 0x0, 
>   force_atomic = 0x0, 
>   allow_reinit = 0x1, 
>   rcu = {
>     next = 0x0, 
>     func = 0x0
>   }, 
>   ref = 0xffff8ac992fbef30
> }
> 
> I am a bit confused about the percpu-refcount API. My naive
> interpretation is that when we are in atomic mode, percpu_ref_is_zero()
> can't be used. But this seems rather strange. I must be missing
> something.

No, percpu_ref_is_zero() is fine to be called in atomic mode.


Thanks,
Ming


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-07-06  7:29       ` Ming Lei
@ 2021-07-06  8:10         ` Daniel Wagner
  2021-07-06  8:45           ` Ming Lei
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Wagner @ 2021-07-06  8:10 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

[-- Attachment #1: Type: text/plain, Size: 424 bytes --]

On Tue, Jul 06, 2021 at 03:29:11PM +0800, Ming Lei wrote:
> > and this seems to confirm, no I/O in flight.
> 
> What is the output of the following command after the hang is triggered?
> 
> (cd /sys/kernel/debug/block/nvme0n1 && find . -type f -exec grep -aH . {} \;)
> 
> Suppose the hang disk is nvme0n1.

see attachement

> No, percpu_ref_is_zero() is fine to be called in atomic mode.

Okay, that is what I hoped for :)

[-- Attachment #2: blk-debug.txt --]
[-- Type: text/plain, Size: 17831 bytes --]

/sys/kernel/debug/block/nvme0c0n1# find . -type f -exec grep -aH . {} \;
./rqos/wbt/wb_background:4
./rqos/wbt/wb_normal:8
./rqos/wbt/unknown_cnt:0
./rqos/wbt/min_lat_nsec:2000000
./rqos/wbt/inflight:0: inflight 0
./rqos/wbt/inflight:1: inflight 0
./rqos/wbt/inflight:2: inflight 0
./rqos/wbt/id:0
./rqos/wbt/enabled:1
./rqos/wbt/curr_win_nsec:0
./hctx0/cpu39/completed:0 0
./hctx0/cpu39/merged:0
./hctx0/cpu39/dispatched:0 0
./hctx0/cpu38/completed:0 0
./hctx0/cpu38/merged:0
./hctx0/cpu38/dispatched:0 0
./hctx0/cpu37/completed:0 0
./hctx0/cpu37/merged:0
./hctx0/cpu37/dispatched:0 0
./hctx0/cpu36/completed:0 0
./hctx0/cpu36/merged:0
./hctx0/cpu36/dispatched:0 0
./hctx0/cpu35/completed:0 0
./hctx0/cpu35/merged:0
./hctx0/cpu35/dispatched:0 0
./hctx0/cpu34/completed:0 0
./hctx0/cpu34/merged:0
./hctx0/cpu34/dispatched:0 0
./hctx0/cpu33/completed:0 0
./hctx0/cpu33/merged:0
./hctx0/cpu33/dispatched:0 0
./hctx0/cpu32/completed:0 0
./hctx0/cpu32/merged:0
./hctx0/cpu32/dispatched:0 0
./hctx0/cpu31/completed:0 0
./hctx0/cpu31/merged:0
./hctx0/cpu31/dispatched:0 0
./hctx0/cpu30/completed:0 0
./hctx0/cpu30/merged:0
./hctx0/cpu30/dispatched:0 0
./hctx0/cpu29/completed:0 0
./hctx0/cpu29/merged:0
./hctx0/cpu29/dispatched:0 0
./hctx0/cpu28/completed:0 0
./hctx0/cpu28/merged:0
./hctx0/cpu28/dispatched:0 0
./hctx0/cpu27/completed:0 0
./hctx0/cpu27/merged:0
./hctx0/cpu27/dispatched:0 0
./hctx0/cpu26/completed:0 0
./hctx0/cpu26/merged:0
./hctx0/cpu26/dispatched:0 0
./hctx0/cpu25/completed:0 0
./hctx0/cpu25/merged:0
./hctx0/cpu25/dispatched:0 0
./hctx0/cpu24/completed:0 0
./hctx0/cpu24/merged:0
./hctx0/cpu24/dispatched:0 0
./hctx0/cpu23/completed:0 0
./hctx0/cpu23/merged:0
./hctx0/cpu23/dispatched:0 0
./hctx0/cpu22/completed:0 0
./hctx0/cpu22/merged:0
./hctx0/cpu22/dispatched:0 0
./hctx0/cpu21/completed:0 0
./hctx0/cpu21/merged:0
./hctx0/cpu21/dispatched:0 0
./hctx0/cpu20/completed:0 0
./hctx0/cpu20/merged:0
./hctx0/cpu20/dispatched:0 0
./hctx0/cpu19/completed:0 0
./hctx0/cpu19/merged:0
./hctx0/cpu19/dispatched:0 0
./hctx0/cpu18/completed:0 0
./hctx0/cpu18/merged:0
./hctx0/cpu18/dispatched:0 0
./hctx0/cpu17/completed:0 0
./hctx0/cpu17/merged:0
./hctx0/cpu17/dispatched:0 0
./hctx0/cpu16/completed:0 0
./hctx0/cpu16/merged:0
./hctx0/cpu16/dispatched:0 0
./hctx0/cpu15/completed:0 0
./hctx0/cpu15/merged:0
./hctx0/cpu15/dispatched:0 0
./hctx0/cpu14/completed:0 0
./hctx0/cpu14/merged:0
./hctx0/cpu14/dispatched:0 0
./hctx0/cpu13/completed:0 0
./hctx0/cpu13/merged:0
./hctx0/cpu13/dispatched:0 0
./hctx0/cpu12/completed:0 0
./hctx0/cpu12/merged:0
./hctx0/cpu12/dispatched:0 0
./hctx0/cpu11/completed:0 0
./hctx0/cpu11/merged:0
./hctx0/cpu11/dispatched:0 0
./hctx0/cpu10/completed:0 0
./hctx0/cpu10/merged:0
./hctx0/cpu10/dispatched:0 0
./hctx0/cpu9/completed:0 0
./hctx0/cpu9/merged:0
./hctx0/cpu9/dispatched:0 0
./hctx0/cpu8/completed:0 0
./hctx0/cpu8/merged:0
./hctx0/cpu8/dispatched:0 0
./hctx0/cpu7/completed:0 0
./hctx0/cpu7/merged:0
./hctx0/cpu7/dispatched:0 0
./hctx0/cpu6/completed:0 0
./hctx0/cpu6/merged:0
./hctx0/cpu6/dispatched:0 0
./hctx0/cpu5/completed:0 0
./hctx0/cpu5/merged:0
./hctx0/cpu5/dispatched:0 0
./hctx0/cpu4/completed:0 0
./hctx0/cpu4/merged:0
./hctx0/cpu4/dispatched:0 0
./hctx0/cpu3/completed:0 0
./hctx0/cpu3/merged:0
./hctx0/cpu3/dispatched:0 0
./hctx0/cpu2/completed:0 0
./hctx0/cpu2/merged:0
./hctx0/cpu2/dispatched:0 0
./hctx0/cpu1/completed:0 0
./hctx0/cpu1/merged:0
./hctx0/cpu1/dispatched:0 0
./hctx0/cpu0/completed:0 0
./hctx0/cpu0/merged:0
./hctx0/cpu0/dispatched:0 0
./hctx0/type:default
./hctx0/dispatch_busy:0
./hctx0/active:0
./hctx0/run:0
./hctx0/queued:0
./hctx0/dispatched:       0     0
./hctx0/dispatched:       1     0
./hctx0/dispatched:       2     0
./hctx0/dispatched:       4     0
./hctx0/dispatched:       8     0
./hctx0/dispatched:      16     0
./hctx0/dispatched:      32+    0
./hctx0/io_poll:considered=0
./hctx0/io_poll:invoked=0
./hctx0/io_poll:success=0
./hctx0/sched_tags_bitmap:00000000: 0000 0000 0000 0000
./hctx0/sched_tags:nr_tags=64
./hctx0/sched_tags:nr_reserved_tags=1
./hctx0/sched_tags:active_queues=0
./hctx0/sched_tags:bitmap_tags:
./hctx0/sched_tags:depth=63
./hctx0/sched_tags:busy=0
./hctx0/sched_tags:cleared=0
./hctx0/sched_tags:bits_per_word=8
./hctx0/sched_tags:map_nr=8
./hctx0/sched_tags:alloc_hint={28, 37, 21, 45, 61, 51, 42, 10, 43, 53, 40, 31, 17, 8, 28, 43, 47, 61, 51, 48, 53, 62, 15, 21, 52, 1, 2, 41, 50, 14, 24, 4, 58}
./hctx0/sched_tags:wake_batch=7
./hctx0/sched_tags:wake_index=0
./hctx0/sched_tags:ws_active=0
./hctx0/sched_tags:ws={
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:}
./hctx0/sched_tags:round_robin=0
./hctx0/sched_tags:min_shallow_depth=4294967295
./hctx0/sched_tags:breserved_tags:
./hctx0/sched_tags:depth=1
./hctx0/sched_tags:busy=0
./hctx0/sched_tags:cleared=0
./hctx0/sched_tags:bits_per_word=64
./hctx0/sched_tags:map_nr=1
./hctx0/sched_tags:alloc_hint={0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
./hctx0/sched_tags:wake_batch=1
./hctx0/sched_tags:wake_index=0
./hctx0/sched_tags:ws_active=0
./hctx0/sched_tags:ws={
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:}
./hctx0/sched_tags:round_robin=0
./hctx0/sched_tags:min_shallow_depth=4294967295
./hctx0/tags_bitmap:00000000: 0000 0000
./hctx0/tags:nr_tags=32
./hctx0/tags:nr_reserved_tags=1
./hctx0/tags:active_queues=0
./hctx0/tags:bitmap_tags:
./hctx0/tags:depth=31
./hctx0/tags:busy=0
./hctx0/tags:cleared=0
./hctx0/tags:bits_per_word=4
./hctx0/tags:map_nr=8
./hctx0/tags:alloc_hint={12, 9, 9, 18, 27, 3, 7, 0, 28, 6, 28, 12, 21, 19, 1, 23, 27, 24, 6, 17, 15, 1, 10, 19, 27, 2, 24, 26, 30, 2, 26, 20, 18, 22, 19, 3, }
./hctx0/tags:wake_batch=3
./hctx0/tags:wake_index=0
./hctx0/tags:ws_active=0
./hctx0/tags:ws={
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:}
./hctx0/tags:round_robin=0
./hctx0/tags:min_shallow_depth=4294967295
./hctx0/tags:breserved_tags:
./hctx0/tags:depth=1
./hctx0/tags:busy=0
./hctx0/tags:cleared=1
./hctx0/tags:bits_per_word=64
./hctx0/tags:map_nr=1
./hctx0/tags:alloc_hint={0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
./hctx0/tags:wake_batch=1
./hctx0/tags:wake_index=0
./hctx0/tags:ws_active=0
./hctx0/tags:ws={
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:}
./hctx0/tags:round_robin=0
./hctx0/tags:min_shallow_depth=4294967295
./hctx0/ctx_map:00000000: 0000 0000 00
./hctx0/flags:alloc_policy=FIFO SHOULD_MERGE|TAG_QUEUE_SHARED|4
./sched/starved:0
./sched/batching:0
./write_hints:hint0: 0
./write_hints:hint1: 0
./write_hints:hint2: 0
./write_hints:hint3: 0
./write_hints:hint4: 0
./state:SAME_COMP|NONROT|IO_STAT|INIT_DONE|STATS|REGISTERED|NOWAIT
./pm_only:0
./poll_stat:read  (512 Bytes): samples=0
./poll_stat:write (512 Bytes): samples=0
./poll_stat:read  (1024 Bytes): samples=0
./poll_stat:write (1024 Bytes): samples=0
./poll_stat:read  (2048 Bytes): samples=0
./poll_stat:write (2048 Bytes): samples=0
./poll_stat:read  (4096 Bytes): samples=0
./poll_stat:write (4096 Bytes): samples=0
./poll_stat:read  (8192 Bytes): samples=0
./poll_stat:write (8192 Bytes): samples=0
./poll_stat:read  (16384 Bytes): samples=0
./poll_stat:write (16384 Bytes): samples=0
./poll_stat:read  (32768 Bytes): samples=0
./poll_stat:write (32768 Bytes): samples=0
./poll_stat:read  (65536 Bytes): samples=0
./poll_stat:write (65536 Bytes): samples=0


/sys/kernel/debug/block/nvme0c0n2# find . -type f -exec grep -aH . {} \;
./rqos/wbt/wb_background:4
./rqos/wbt/wb_normal:8
./rqos/wbt/unknown_cnt:0
./rqos/wbt/min_lat_nsec:2000000
./rqos/wbt/inflight:0: inflight 0
./rqos/wbt/inflight:1: inflight 0
./rqos/wbt/inflight:2: inflight 0
./rqos/wbt/id:0
./rqos/wbt/enabled:1
./rqos/wbt/curr_win_nsec:0
./hctx0/cpu39/completed:0 0
./hctx0/cpu39/merged:0
./hctx0/cpu39/dispatched:0 0
./hctx0/cpu38/completed:0 0
./hctx0/cpu38/merged:0
./hctx0/cpu38/dispatched:0 0
./hctx0/cpu37/completed:0 0
./hctx0/cpu37/merged:0
./hctx0/cpu37/dispatched:0 0
./hctx0/cpu36/completed:0 0
./hctx0/cpu36/merged:0
./hctx0/cpu36/dispatched:0 0
./hctx0/cpu35/completed:0 0
./hctx0/cpu35/merged:0
./hctx0/cpu35/dispatched:0 0
./hctx0/cpu34/completed:0 0
./hctx0/cpu34/merged:0
./hctx0/cpu34/dispatched:0 0
./hctx0/cpu33/completed:0 0
./hctx0/cpu33/merged:0
./hctx0/cpu33/dispatched:0 0
./hctx0/cpu32/completed:0 0
./hctx0/cpu32/merged:0
./hctx0/cpu32/dispatched:0 0
./hctx0/cpu31/completed:0 0
./hctx0/cpu31/merged:0
./hctx0/cpu31/dispatched:0 0
./hctx0/cpu30/completed:0 0
./hctx0/cpu30/merged:0
./hctx0/cpu30/dispatched:0 0
./hctx0/cpu29/completed:0 0
./hctx0/cpu29/merged:0
./hctx0/cpu29/dispatched:0 0
./hctx0/cpu28/completed:0 0
./hctx0/cpu28/merged:0
./hctx0/cpu28/dispatched:0 0
./hctx0/cpu27/completed:0 0
./hctx0/cpu27/merged:0
./hctx0/cpu27/dispatched:0 0
./hctx0/cpu26/completed:0 0
./hctx0/cpu26/merged:0
./hctx0/cpu26/dispatched:0 0
./hctx0/cpu25/completed:0 0
./hctx0/cpu25/merged:0
./hctx0/cpu25/dispatched:0 0
./hctx0/cpu24/completed:0 0
./hctx0/cpu24/merged:0
./hctx0/cpu24/dispatched:0 0
./hctx0/cpu23/completed:0 0
./hctx0/cpu23/merged:0
./hctx0/cpu23/dispatched:0 0
./hctx0/cpu22/completed:0 0
./hctx0/cpu22/merged:0
./hctx0/cpu22/dispatched:0 0
./hctx0/cpu21/completed:0 0
./hctx0/cpu21/merged:0
./hctx0/cpu21/dispatched:0 0
./hctx0/cpu20/completed:0 0
./hctx0/cpu20/merged:0
./hctx0/cpu20/dispatched:0 0
./hctx0/cpu19/completed:0 0
./hctx0/cpu19/merged:0
./hctx0/cpu19/dispatched:0 0
./hctx0/cpu18/completed:0 0
./hctx0/cpu18/merged:0
./hctx0/cpu18/dispatched:0 0
./hctx0/cpu17/completed:0 0
./hctx0/cpu17/merged:0
./hctx0/cpu17/dispatched:0 0
./hctx0/cpu16/completed:0 0
./hctx0/cpu16/merged:0
./hctx0/cpu16/dispatched:0 0
./hctx0/cpu15/completed:0 0
./hctx0/cpu15/merged:0
./hctx0/cpu15/dispatched:0 0
./hctx0/cpu14/completed:0 0
./hctx0/cpu14/merged:0
./hctx0/cpu14/dispatched:0 0
./hctx0/cpu13/completed:0 0
./hctx0/cpu13/merged:0
./hctx0/cpu13/dispatched:0 0
./hctx0/cpu12/completed:0 0
./hctx0/cpu12/merged:0
./hctx0/cpu12/dispatched:0 0
./hctx0/cpu11/completed:0 0
./hctx0/cpu11/merged:0
./hctx0/cpu11/dispatched:0 0
./hctx0/cpu10/completed:0 0
./hctx0/cpu10/merged:0
./hctx0/cpu10/dispatched:0 0
./hctx0/cpu9/completed:0 0
./hctx0/cpu9/merged:0
./hctx0/cpu9/dispatched:0 0
./hctx0/cpu8/completed:0 0
./hctx0/cpu8/merged:0
./hctx0/cpu8/dispatched:0 0
./hctx0/cpu7/completed:0 0
./hctx0/cpu7/merged:0
./hctx0/cpu7/dispatched:0 0
./hctx0/cpu6/completed:0 0
./hctx0/cpu6/merged:0
./hctx0/cpu6/dispatched:0 0
./hctx0/cpu5/completed:0 0
./hctx0/cpu5/merged:0
./hctx0/cpu5/dispatched:0 0
./hctx0/cpu4/completed:0 0
./hctx0/cpu4/merged:0
./hctx0/cpu4/dispatched:0 0
./hctx0/cpu3/completed:0 0
./hctx0/cpu3/merged:0
./hctx0/cpu3/dispatched:0 0
./hctx0/cpu2/completed:0 0
./hctx0/cpu2/merged:0
./hctx0/cpu2/dispatched:0 0
./hctx0/cpu1/completed:0 0
./hctx0/cpu1/merged:0
./hctx0/cpu1/dispatched:0 0
./hctx0/cpu0/completed:0 0
./hctx0/cpu0/merged:0
./hctx0/cpu0/dispatched:0 0
./hctx0/type:default
./hctx0/dispatch_busy:0
./hctx0/active:0
./hctx0/run:0
./hctx0/queued:0
./hctx0/dispatched:       0     0
./hctx0/dispatched:       1     0
./hctx0/dispatched:       2     0
./hctx0/dispatched:       4     0
./hctx0/dispatched:       8     0
./hctx0/dispatched:      16     0
./hctx0/dispatched:      32+    0
./hctx0/io_poll:considered=0
./hctx0/io_poll:invoked=0
./hctx0/io_poll:success=0
./hctx0/sched_tags_bitmap:00000000: 0000 0000 0000 0000
./hctx0/sched_tags:nr_tags=64
./hctx0/sched_tags:nr_reserved_tags=1
./hctx0/sched_tags:active_queues=0
./hctx0/sched_tags:bitmap_tags:
./hctx0/sched_tags:depth=63
./hctx0/sched_tags:busy=0
./hctx0/sched_tags:cleared=0
./hctx0/sched_tags:bits_per_word=8
./hctx0/sched_tags:map_nr=8
./hctx0/sched_tags:alloc_hint={7, 2, 30, 8, 28, 25, 10, 60, 21, 58, 59, 43, 12, 22, 1, 0, 37, 7, 8, 28, 10, 53, 6, 28, 16, 47, 11, 29, 28, 12, 21, 59, 37, 25}
./hctx0/sched_tags:wake_batch=7
./hctx0/sched_tags:wake_index=0
./hctx0/sched_tags:ws_active=0
./hctx0/sched_tags:ws={
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=7, .wait=inactive},
./hctx0/sched_tags:}
./hctx0/sched_tags:round_robin=0
./hctx0/sched_tags:min_shallow_depth=4294967295
./hctx0/sched_tags:breserved_tags:
./hctx0/sched_tags:depth=1
./hctx0/sched_tags:busy=0
./hctx0/sched_tags:cleared=0
./hctx0/sched_tags:bits_per_word=64
./hctx0/sched_tags:map_nr=1
./hctx0/sched_tags:alloc_hint={0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
./hctx0/sched_tags:wake_batch=1
./hctx0/sched_tags:wake_index=0
./hctx0/sched_tags:ws_active=0
./hctx0/sched_tags:ws={
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:     {.wait_cnt=1, .wait=inactive},
./hctx0/sched_tags:}
./hctx0/sched_tags:round_robin=0
./hctx0/sched_tags:min_shallow_depth=4294967295
./hctx0/tags_bitmap:00000000: 0000 0000
./hctx0/tags:nr_tags=32
./hctx0/tags:nr_reserved_tags=1
./hctx0/tags:active_queues=0
./hctx0/tags:bitmap_tags:
./hctx0/tags:depth=31
./hctx0/tags:busy=0
./hctx0/tags:cleared=0
./hctx0/tags:bits_per_word=4
./hctx0/tags:map_nr=8
./hctx0/tags:alloc_hint={12, 9, 9, 18, 27, 3, 7, 0, 28, 6, 28, 12, 21, 19, 1, 23, 27, 24, 6, 17, 15, 1, 10, 19, 27, 2, 24, 26, 30, 2, 26, 20, 18, 22, 19, 3, }
./hctx0/tags:wake_batch=3
./hctx0/tags:wake_index=0
./hctx0/tags:ws_active=0
./hctx0/tags:ws={
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:   {.wait_cnt=3, .wait=inactive},
./hctx0/tags:}
./hctx0/tags:round_robin=0
./hctx0/tags:min_shallow_depth=4294967295
./hctx0/tags:breserved_tags:
./hctx0/tags:depth=1
./hctx0/tags:busy=0
./hctx0/tags:cleared=1
./hctx0/tags:bits_per_word=64
./hctx0/tags:map_nr=1
./hctx0/tags:alloc_hint={0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
./hctx0/tags:wake_batch=1
./hctx0/tags:wake_index=0
./hctx0/tags:ws_active=0
./hctx0/tags:ws={
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:   {.wait_cnt=1, .wait=inactive},
./hctx0/tags:}
./hctx0/tags:round_robin=0
./hctx0/tags:min_shallow_depth=4294967295
./hctx0/ctx_map:00000000: 0000 0000 00
./hctx0/flags:alloc_policy=FIFO SHOULD_MERGE|TAG_QUEUE_SHARED|4
./sched/starved:0
./sched/batching:0
./write_hints:hint0: 0
./write_hints:hint1: 0
./write_hints:hint2: 0
./write_hints:hint3: 0
./write_hints:hint4: 0
./state:SAME_COMP|NONROT|IO_STAT|INIT_DONE|STATS|REGISTERED|NOWAIT
./pm_only:0
./poll_stat:read  (512 Bytes): samples=0
./poll_stat:write (512 Bytes): samples=0
./poll_stat:read  (1024 Bytes): samples=0
./poll_stat:write (1024 Bytes): samples=0
./poll_stat:read  (2048 Bytes): samples=0
./poll_stat:write (2048 Bytes): samples=0
./poll_stat:read  (4096 Bytes): samples=0
./poll_stat:write (4096 Bytes): samples=0
./poll_stat:read  (8192 Bytes): samples=0
./poll_stat:write (8192 Bytes): samples=0
./poll_stat:read  (16384 Bytes): samples=0
./poll_stat:write (16384 Bytes): samples=0
./poll_stat:read  (32768 Bytes): samples=0
./poll_stat:write (32768 Bytes): samples=0
./poll_stat:read  (65536 Bytes): samples=0
./poll_stat:write (65536 Bytes): samples=0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-07-06  8:10         ` Daniel Wagner
@ 2021-07-06  8:45           ` Ming Lei
  2021-07-06  8:59             ` Daniel Wagner
  0 siblings, 1 reply; 20+ messages in thread
From: Ming Lei @ 2021-07-06  8:45 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Tue, Jul 06, 2021 at 10:10:10AM +0200, Daniel Wagner wrote:
> On Tue, Jul 06, 2021 at 03:29:11PM +0800, Ming Lei wrote:
> > > and this seems to confirm, no I/O in flight.
> > 
> > What is the output of the following command after the hang is triggered?
> > 
> > (cd /sys/kernel/debug/block/nvme0n1 && find . -type f -exec grep -aH . {} \;)
> > 
> > Suppose the hang disk is nvme0n1.
> 
> see attachement
> 
> > No, percpu_ref_is_zero() is fine to be called in atomic mode.
> 
> Okay, that is what I hoped for :)

> /sys/kernel/debug/block/nvme0c0n1# find . -type f -exec grep -aH . {} \;

It is the mpath device's debugfs. What is the output for the nvmef's
debugfs?


Thanks,
Ming


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-07-06  8:45           ` Ming Lei
@ 2021-07-06  8:59             ` Daniel Wagner
  2021-07-06 12:21               ` Daniel Wagner
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Wagner @ 2021-07-06  8:59 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

[-- Attachment #1: Type: text/plain, Size: 394 bytes --]

On Tue, Jul 06, 2021 at 04:45:30PM +0800, Ming Lei wrote:
> > /sys/kernel/debug/block/nvme0c0n1# find . -type f -exec grep -aH . {} \;
> 
> It is the mpath device's debugfs. What is the output for the nvmef's
> debugfs?

Do you mean /sys/kernel/debug/block/{nvme0n1,nvme0n2}? These
directories are empty.

There is only /sys/class/nvme/nvme0, but I don't think this is what you
are asking for.

[-- Attachment #2: blk-debug2.txt --]
[-- Type: text/plain, Size: 8509 bytes --]

/sys/class/nvme/nvme0# find . -type f -exec grep -aH . {} \;
grep: ./delete_controller: Permission denied
./uevent:MAJOR=245
./uevent:MINOR=0
./uevent:DEVNAME=nvme0
./uevent:NVME_TRTYPE=fc
./uevent:NVME_TRADDR=nn-0x201700a09890f5bf:pn-0x201b00a09890f5bf
./uevent:NVME_TRSVCID=none
./uevent:NVME_HOST_TRADDR=nn-0x200000109b579ef6:pn-0x100000109b579ef6
./cntlid:16065
./address:traddr=nn-0x201700a09890f5bf:pn-0x201b00a09890f5bf,host_traddr=nn-0x200000109b579ef6:pn-0x100000109b579ef6
./nvme0c0n1/uevent:DEVTYPE=disk
./nvme0c0n1/ext_range:0
./nvme0c0n1/ana_state:inaccessible
./nvme0c0n1/range:0
./nvme0c0n1/alignment_offset:0
./nvme0c0n1/power/runtime_active_time:0
./nvme0c0n1/power/runtime_active_kids:0
./nvme0c0n1/power/runtime_usage:0
./nvme0c0n1/power/runtime_status:unsupported
grep: ./nvme0c0n1/power/autosuspend_delay_ms: Input/output error
./nvme0c0n1/power/async:disabled
./nvme0c0n1/power/runtime_suspended_time:0
./nvme0c0n1/power/runtime_enabled:disabled
./nvme0c0n1/power/control:auto
./nvme0c0n1/ana_grpid:1
./nvme0c0n1/wwid:uuid.554bd55f-1de4-4600-a72a-e0e6da97b5be
./nvme0c0n1/ro:0
./nvme0c0n1/mq/0/cpu_list:0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 39
./nvme0c0n1/mq/0/nr_reserved_tags:1
./nvme0c0n1/mq/0/nr_tags:32
./nvme0c0n1/stat:       0        0        0        0        0        0        0        0        0        0        0        0        0        0        0      0
./nvme0c0n1/events_poll_msecs:-1
./nvme0c0n1/queue/io_poll_delay:-1
./nvme0c0n1/queue/max_integrity_segments:0
./nvme0c0n1/queue/zoned:none
./nvme0c0n1/queue/scheduler:[mq-deadline] kyber bfq none
./nvme0c0n1/queue/io_poll:0
./nvme0c0n1/queue/discard_zeroes_data:0
./nvme0c0n1/queue/minimum_io_size:4096
./nvme0c0n1/queue/nr_zones:0
./nvme0c0n1/queue/write_same_max_bytes:0
./nvme0c0n1/queue/max_segments:257
./nvme0c0n1/queue/dax:0
./nvme0c0n1/queue/physical_block_size:4096
./nvme0c0n1/queue/logical_block_size:4096
./nvme0c0n1/queue/virt_boundary_mask:4095
./nvme0c0n1/queue/zone_append_max_bytes:0
./nvme0c0n1/queue/io_timeout:30000
./nvme0c0n1/queue/nr_requests:64
./nvme0c0n1/queue/write_cache:write through
./nvme0c0n1/queue/stable_writes:0
./nvme0c0n1/queue/max_segment_size:4294967295
./nvme0c0n1/queue/rotational:0
./nvme0c0n1/queue/discard_max_bytes:0
./nvme0c0n1/queue/add_random:0
./nvme0c0n1/queue/discard_max_hw_bytes:0
./nvme0c0n1/queue/optimal_io_size:0
./nvme0c0n1/queue/chunk_sectors:0
./nvme0c0n1/queue/iosched/front_merges:1
./nvme0c0n1/queue/iosched/read_expire:500
./nvme0c0n1/queue/iosched/fifo_batch:16
./nvme0c0n1/queue/iosched/write_expire:5000
./nvme0c0n1/queue/iosched/writes_starved:2
./nvme0c0n1/queue/read_ahead_kb:128
./nvme0c0n1/queue/max_discard_segments:1
./nvme0c0n1/queue/write_zeroes_max_bytes:0
./nvme0c0n1/queue/nomerges:0
./nvme0c0n1/queue/zone_write_granularity:0
./nvme0c0n1/queue/wbt_lat_usec:2000
./nvme0c0n1/queue/fua:0
./nvme0c0n1/queue/discard_granularity:0
./nvme0c0n1/queue/rq_affinity:1
./nvme0c0n1/queue/max_sectors_kb:1024
./nvme0c0n1/queue/hw_sector_size:4096
./nvme0c0n1/queue/max_hw_sectors_kb:1024
./nvme0c0n1/queue/iostats:1
./nvme0c0n1/size:67108864
./nvme0c0n1/integrity/write_generate:0
./nvme0c0n1/integrity/format:none
./nvme0c0n1/integrity/read_verify:0
./nvme0c0n1/integrity/tag_size:0
./nvme0c0n1/integrity/protection_interval_bytes:0
./nvme0c0n1/integrity/device_is_integrity_capable:0
./nvme0c0n1/discard_alignment:0
./nvme0c0n1/uuid:554bd55f-1de4-4600-a72a-e0e6da97b5be
./nvme0c0n1/trace/end_lba:disabled
./nvme0c0n1/trace/act_mask:disabled
./nvme0c0n1/trace/start_lba:disabled
./nvme0c0n1/trace/enable:0
./nvme0c0n1/trace/pid:disabled
./nvme0c0n1/capability:630
./nvme0c0n1/hidden:1
./nvme0c0n1/removable:0
./nvme0c0n1/inflight:       0        0
./nvme0c0n1/nsid:1
./nvme0c0n1/make-it-fail:0
grep: ./reset_controller: Permission denied
./sqsize:31
./hostnqn:nqn.2014-08.org.nvmexpress:uuid:1a9e23dd-466e-45ca-9f43-a29aaf47cb21
./hostid:d5e55da0-19ae-42bc-a3ad-df9993cda3f6
./queue_count:2
./transport:fc
./subsysnqn:nqn.1992-08.com.netapp:sn.d646dc63336511e995cb00a0988fb732:subsystem.nvme-svm-dolin-ana_subsystem
./power/runtime_active_time:0
./power/runtime_active_kids:0
./power/runtime_usage:0
./power/runtime_status:unsupported
grep: ./power/autosuspend_delay_ms: Input/output error
./power/async:disabled
./power/runtime_suspended_time:0
./power/runtime_enabled:disabled
./power/control:auto
./reconnect_delay:2
grep: ./rescan_controller: Permission denied
./numa_node:-1
./model:NetApp ONTAP Controller
./dev:245:0
./fast_io_fail_tmo:off
./hwmon0/power/runtime_active_time:0
./hwmon0/power/runtime_active_kids:0
./hwmon0/power/runtime_usage:0
./hwmon0/power/runtime_status:unsupported
grep: ./hwmon0/power/autosuspend_delay_ms: Input/output error
./hwmon0/power/async:disabled
./hwmon0/power/runtime_suspended_time:0
./hwmon0/power/runtime_enabled:disabled
./hwmon0/power/control:auto
./hwmon0/temp1_label:Composite
./hwmon0/temp1_alarm:0
./hwmon0/temp1_input:-273150
./hwmon0/name:nvme
./firmware_rev:FFFFFFFF
./nvme0c0n2/uevent:DEVTYPE=disk
./nvme0c0n2/ext_range:0
./nvme0c0n2/ana_state:inaccessible
./nvme0c0n2/range:0
./nvme0c0n2/alignment_offset:0
./nvme0c0n2/power/runtime_active_time:0
./nvme0c0n2/power/runtime_active_kids:0
./nvme0c0n2/power/runtime_usage:0
./nvme0c0n2/power/runtime_status:unsupported
grep: ./nvme0c0n2/power/autosuspend_delay_ms: Input/output error
./nvme0c0n2/power/async:disabled
./nvme0c0n2/power/runtime_suspended_time:0
./nvme0c0n2/power/runtime_enabled:disabled
./nvme0c0n2/power/control:auto
./nvme0c0n2/ana_grpid:2
./nvme0c0n2/wwid:uuid.9d1866be-d687-40aa-bcff-90c7ce566435
./nvme0c0n2/ro:0
./nvme0c0n2/mq/0/cpu_list:0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 39
./nvme0c0n2/mq/0/nr_reserved_tags:1
./nvme0c0n2/mq/0/nr_tags:32
./nvme0c0n2/stat:       0        0        0        0        0        0        0        0        0        0        0        0        0        0        0      0
./nvme0c0n2/events_poll_msecs:-1
./nvme0c0n2/queue/io_poll_delay:-1
./nvme0c0n2/queue/max_integrity_segments:0
./nvme0c0n2/queue/zoned:none
./nvme0c0n2/queue/scheduler:[mq-deadline] kyber bfq none
./nvme0c0n2/queue/io_poll:0
./nvme0c0n2/queue/discard_zeroes_data:0
./nvme0c0n2/queue/minimum_io_size:4096
./nvme0c0n2/queue/nr_zones:0
./nvme0c0n2/queue/write_same_max_bytes:0
./nvme0c0n2/queue/max_segments:257
./nvme0c0n2/queue/dax:0
./nvme0c0n2/queue/physical_block_size:4096
./nvme0c0n2/queue/logical_block_size:4096
./nvme0c0n2/queue/virt_boundary_mask:4095
./nvme0c0n2/queue/zone_append_max_bytes:0
./nvme0c0n2/queue/io_timeout:30000
./nvme0c0n2/queue/nr_requests:64
./nvme0c0n2/queue/write_cache:write through
./nvme0c0n2/queue/stable_writes:0
./nvme0c0n2/queue/max_segment_size:4294967295
./nvme0c0n2/queue/rotational:0
./nvme0c0n2/queue/discard_max_bytes:0
./nvme0c0n2/queue/add_random:0
./nvme0c0n2/queue/discard_max_hw_bytes:0
./nvme0c0n2/queue/optimal_io_size:0
./nvme0c0n2/queue/chunk_sectors:0
./nvme0c0n2/queue/iosched/front_merges:1
./nvme0c0n2/queue/iosched/read_expire:500
./nvme0c0n2/queue/iosched/fifo_batch:16
./nvme0c0n2/queue/iosched/write_expire:5000
./nvme0c0n2/queue/iosched/writes_starved:2
./nvme0c0n2/queue/read_ahead_kb:128
./nvme0c0n2/queue/max_discard_segments:1
./nvme0c0n2/queue/write_zeroes_max_bytes:0
./nvme0c0n2/queue/nomerges:0
./nvme0c0n2/queue/zone_write_granularity:0
./nvme0c0n2/queue/wbt_lat_usec:2000
./nvme0c0n2/queue/fua:0
./nvme0c0n2/queue/discard_granularity:0
./nvme0c0n2/queue/rq_affinity:1
./nvme0c0n2/queue/max_sectors_kb:1024
./nvme0c0n2/queue/hw_sector_size:4096
./nvme0c0n2/queue/max_hw_sectors_kb:1024
./nvme0c0n2/queue/iostats:1
./nvme0c0n2/size:67108864
./nvme0c0n2/integrity/write_generate:0
./nvme0c0n2/integrity/format:none
./nvme0c0n2/integrity/read_verify:0
./nvme0c0n2/integrity/tag_size:0
./nvme0c0n2/integrity/protection_interval_bytes:0
./nvme0c0n2/integrity/device_is_integrity_capable:0
./nvme0c0n2/discard_alignment:0
./nvme0c0n2/uuid:9d1866be-d687-40aa-bcff-90c7ce566435
./nvme0c0n2/trace/end_lba:disabled
./nvme0c0n2/trace/act_mask:disabled
./nvme0c0n2/trace/start_lba:disabled
./nvme0c0n2/trace/enable:0
./nvme0c0n2/trace/pid:disabled
./nvme0c0n2/capability:630
./nvme0c0n2/hidden:1
./nvme0c0n2/removable:0
./nvme0c0n2/inflight:       0        0
./nvme0c0n2/nsid:2
./nvme0c0n2/make-it-fail:0
./state:connecting
./kato:5
./serial:80-AABMxwn3xAAAAAAAB
./ctrl_loss_tmo:600

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-07-06  8:59             ` Daniel Wagner
@ 2021-07-06 12:21               ` Daniel Wagner
  2021-07-07  2:46                 ` Ming Lei
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Wagner @ 2021-07-06 12:21 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

An nvme_start_freeze() before nvme_wait_freeze() fixes the hang. Is it
really this simple?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
  2021-07-06 12:21               ` Daniel Wagner
@ 2021-07-07  2:46                 ` Ming Lei
  0 siblings, 0 replies; 20+ messages in thread
From: Ming Lei @ 2021-07-07  2:46 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-nvme, linux-kernel, James Smart, Keith Busch, Jens Axboe,
	Sagi Grimberg

On Tue, Jul 06, 2021 at 02:21:21PM +0200, Daniel Wagner wrote:
> An nvme_start_freeze() before nvme_wait_freeze() fixes the hang. Is it
> really this simple?

Yeah, that can be the issue; also, nvme_start_freeze() has to be paired
with nvme_unfreeze().
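
In other words, the reconnect path would need something along these
lines (just a sketch, assuming the timeout error path also has to
unwind the freeze; the exact error handling still needs to be worked
out):

        nvme_start_freeze(&ctrl->ctrl);
        if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
                /* placeholder: undo the freeze before failing */
                nvme_unfreeze(&ctrl->ctrl);
                return -ENODEV;
        }
        blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
        nvme_unfreeze(&ctrl->ctrl);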


Thanks, 
Ming


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2021-07-07  2:46 UTC | newest]

Thread overview: 20+ messages
2021-06-25 10:16 [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
2021-06-25 10:16 ` [PATCH 1/2] nvme-fc: Update hardware queues before using them Daniel Wagner
2021-06-27 13:47   ` James Smart
2021-06-29  1:32   ` Ming Lei
2021-06-29 12:31   ` Hannes Reinecke
2021-06-25 10:16 ` [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
2021-06-27 14:04   ` James Smart
2021-06-29  1:39   ` Ming Lei
2021-06-29  7:48     ` Daniel Wagner
2021-07-05 16:34     ` Daniel Wagner
2021-07-06  7:29       ` Ming Lei
2021-07-06  8:10         ` Daniel Wagner
2021-07-06  8:45           ` Ming Lei
2021-07-06  8:59             ` Daniel Wagner
2021-07-06 12:21               ` Daniel Wagner
2021-07-07  2:46                 ` Ming Lei
2021-06-29 12:31   ` Hannes Reinecke
2021-06-25 12:21 ` [PATCH 0/2] Handle update hardware queues and queue freeze more carefully Daniel Wagner
2021-06-25 13:00   ` Ming Lei
2021-06-29  1:31     ` Ming Lei
