From: James Smart <jsmart2021@gmail.com>
To: Daniel Wagner <dwagner@suse.de>, linux-nvme@lists.infradead.org
Cc: linux-kernel@vger.kernel.org,
	James Smart <james.smart@broadcom.com>,
	Keith Busch <kbusch@kernel.org>, Ming Lei <ming.lei@redhat.com>,
	Sagi Grimberg <sagi@grimberg.me>, Hannes Reinecke <hare@suse.de>,
	Wen Xiong <wenxiong@us.ibm.com>,
	Himanshu Madhani <himanshu.madhani@oracle.com>
Subject: Re: [PATCH v5 0/3] Handle update hardware queues and queue freeze more carefully
Date: Fri, 20 Aug 2021 08:27:48 -0700	[thread overview]
Message-ID: <73a430da-84c8-5457-108a-7e1e2d81fa61@gmail.com> (raw)
In-Reply-To: <20210820115521.alveifzvad3zuwh4@carbon.lan>

On 8/20/2021 4:55 AM, Daniel Wagner wrote:
> On Fri, Aug 20, 2021 at 10:48:32AM +0200, Daniel Wagner wrote:
>> Then we try to do the same thing again, which fails, so we never
>> make progress.
>>
>> So clearly we need to update the number of queues at some point. What
>> would be the right thing to do here? As I understand it, we need to be
>> careful with frozen requests. Can we abort them (is this even possible
>> in this state?) and requeue them before we update the queue numbers?
> 
> After staring a bit longer at the reset path, I think there is no
> pending request in any queue. nvme_fc_delete_association() calls
> __nvme_fc_abort_outstanding_ios(), which makes sure all queues are
> drained (usage counter is 0). It also clears the NVME_FC_Q_LIVE bit,
> which prevents further requests from being added to the queues.

yes, as long as we haven't attempted to create the io queues via
nvme_fc_connect_io_queues(), nothing should be successfully queueing and
running down the hctx to start the io. nvme_fc_connect_io_queues() will
use the queue for the Connect cmd, which is probably what generated the
prior -16389 error.

Which says: "nvme-fc: Update hardware queues before using them" should
be good to use.
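
(For readers following along in drivers/nvme/host/fc.c: the gate Daniel
describes amounts to roughly the sketch below. It is a simplified
paraphrase, not the literal upstream code; the two helpers marked as
hypothetical stand in for the real connect-detection and fail/requeue
logic.)

/*
 * Simplified paraphrase (not verbatim fc.c): while the association is
 * torn down, NVME_FC_Q_LIVE is cleared, so regular io is bounced back
 * before it can reach the hardware queue.  Only the fabrics Connect
 * command may pass on a not-yet-live queue, which is why
 * nvme_fc_connect_io_queues() is the first point that can trip over a
 * stale hardware queue count.
 */
static blk_status_t queue_rq_gate(struct nvme_fc_queue *queue,
				  struct request *rq)
{
	if (!test_bit(NVME_FC_Q_LIVE, &queue->flags) &&
	    !rq_is_connect_cmd(rq))		/* hypothetical helper */
		return fail_nonready_rq(queue, rq); /* hypothetical: fail or requeue */

	return BLK_STS_OK;	/* queue is live, submit normally */
}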

> 
> I'm starting to wonder why we have to do the nvme_start_freeze() in
> the first place and why we want to wait for the freeze. 88e837ed0f1f
> ("nvme-fc: wait for queues to freeze before calling
> update_hr_hw_queues") doesn't really tell why we need to wait for the
> freeze.

I think that is probably going to be true as well - no need to
freeze/unfreeze around this path.  This was also a rather late addition
(last October), so we had been running without the freezes for a long
time, granted few devices change their queue counts.

I'll have to see if I can find what prompted the change. At first blush,
I'm fine reverting it.
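
(For context, the pattern introduced by 88e837ed0f1f, and questioned
here, is roughly the following - a condensed paraphrase of the io-queue
recreate path, not verbatim fc.c. All functions named are real nvme/blk-mq
helpers.)

	/*
	 * Condensed paraphrase (not verbatim fc.c) of the freeze/wait
	 * pattern around the hw-queue update.  If
	 * __nvme_fc_abort_outstanding_ios() has already drained every
	 * queue (usage counters at 0), the wait below is a no-op at
	 * best and, if a freeze was never started, a stall at worst.
	 */
	if (ctrl->ctrl.queue_count - 1 != nr_io_queues) {
		nvme_start_freeze(&ctrl->ctrl);
		nvme_wait_freeze(&ctrl->ctrl);	/* the questioned wait */
		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
		nvme_unfreeze(&ctrl->ctrl);
	}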

> 
> Given we know the usage counter of the queues is 0, I think we are
> safe to move the blk_mq_update_nr_hw_queues() before the start queue
> code. Also note nvme_fc_create_hw_io_queues() calls
> blk_mq_freeze_queue(), but it won't block as we are sure there is no
> pending request.

Agree.
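
(Spelled out, the reordering being agreed on would look roughly like the
sketch below - illustrative only, with a made-up wrapper name, not a
patch. The called functions exist in fc.c and the nvme core.)

static int recreate_io_queues_reordered(struct nvme_fc_ctrl *ctrl,
					unsigned int nr_io_queues)
{
	int ret;

	/* Resize the tag set first: every queue was drained by
	 * __nvme_fc_abort_outstanding_ios(), so nothing can be in
	 * flight and no prior freeze/wait is required. */
	if (ctrl->ctrl.queue_count - 1 != nr_io_queues)
		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);

	/* Creating the hw queues freezes each request queue
	 * internally, but with usage counters at 0 that cannot
	 * block. */
	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
	if (ret)
		return ret;

	/* Connect is the first command to actually use the queues,
	 * and sets NVME_FC_Q_LIVE on success. */
	return nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
}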

-- james

Thread overview:

2021-08-18 12:05 [PATCH v5 0/3] Handle update hardware queues and queue freeze more carefully Daniel Wagner
2021-08-18 12:05 ` [PATCH v5 1/3] nvme-fc: Wait with a timeout for queue to freeze Daniel Wagner
2021-08-18 12:05 ` [PATCH v5 2/3] nvme-fc: avoid race between time out and tear down Daniel Wagner
2021-08-18 12:05 ` [PATCH v5 3/3] nvme-fc: fix controller reset hang during traffic Daniel Wagner
2021-08-18 12:21   ` Hannes Reinecke
2021-08-20  8:48 ` [PATCH v5 0/3] Handle update hardware queues and queue freeze more carefully Daniel Wagner
2021-08-20 11:55   ` Daniel Wagner
2021-08-20 15:27     ` James Smart [this message]
2021-08-23  8:01 ` Christoph Hellwig
2021-08-23  9:14   ` Daniel Wagner
2021-08-23  9:19     ` Christoph Hellwig
