From: James Smart <james.smart@broadcom.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
	linux-nvme@lists.infradead.org
Subject: Re: [PATCH 1/2] blk-mq: introduce blk_mq_complete_request_sync()
Date: Mon, 18 Mar 2019 21:04:37 -0700	[thread overview]
Message-ID: <cb036f6f-3dd5-cf7b-4811-6c7d97a2279e@broadcom.com> (raw)
In-Reply-To: <20190319013142.GB22459@ming.t460p>



On 3/18/2019 6:31 PM, Ming Lei wrote:
> On Mon, Mar 18, 2019 at 10:37:08AM -0700, James Smart wrote:
>>
>> On 3/17/2019 8:29 PM, Ming Lei wrote:
>>> NVMe's error handler follows the typical steps for tearing down
>>> hardware:
>>>
>>> 1) stop blk_mq hw queues
>>> 2) stop the real hw queues
>>> 3) cancel in-flight requests via
>>> 	blk_mq_tagset_busy_iter(tags, cancel_request, ...)
>>> cancel_request():
>>> 	mark the request as abort
>>> 	blk_mq_complete_request(req);
>>> 4) destroy real hw queues
>>>
>>> However, there may be a race between #3 and #4, because blk_mq_complete_request()
>>> actually completes the request asynchronously.
>>>
>>> This patch introduces blk_mq_complete_request_sync() for fixing the
>>> above race.
>>>
>> This won't help FC at all. Inherently, the "completion" has to be
>> asynchronous as line traffic may be required.
>>
>> e.g. FC doesn't use nvme_complete_request() in the iterator routine.
>>
> It looks like FC already does this synchronization; see nvme_fc_delete_association():
>
> 		...
>          /* wait for all io that had to be aborted */
>          spin_lock_irq(&ctrl->lock);
>          wait_event_lock_irq(ctrl->ioabort_wait, ctrl->iocnt == 0, ctrl->lock);
>          ctrl->flags &= ~FCCTRL_TERMIO;
>          spin_unlock_irq(&ctrl->lock);

yes - but the iterator started a lot of the back-end io terminating in 
parallel, so waiting on many completions happening in parallel is better 
than waiting one at a time. Even so, I've always disliked this wait and 
would have preferred to exit the thread, with something monitoring the 
completions and re-queuing a work thread to finish.

-- james



Thread overview: 58+ messages
2019-03-18  3:29 [PATCH 0/2] blk-mq/nvme: cancel request synchronously Ming Lei
2019-03-18  3:29 ` [PATCH 1/2] blk-mq: introduce blk_mq_complete_request_sync() Ming Lei
2019-03-18  4:09   ` Bart Van Assche
2019-03-18  7:38     ` Ming Lei
2019-03-18 15:04       ` Bart Van Assche
2019-03-18 15:16         ` Ming Lei
2019-03-18 15:49           ` Bart Van Assche
2019-03-18 16:06             ` Ming Lei
2019-03-21  0:47             ` Sagi Grimberg
2019-03-21  1:39               ` Ming Lei
2019-03-21  2:04                 ` Sagi Grimberg
2019-03-21  2:32                   ` Ming Lei
2019-03-21 21:40                     ` Sagi Grimberg
2019-03-27  8:27                       ` Christoph Hellwig
2019-03-21  2:15               ` Bart Van Assche
2019-03-21  2:13       ` Sagi Grimberg
2019-03-18 14:40     ` Keith Busch
2019-03-18 17:30     ` James Smart
2019-03-18 17:37   ` James Smart
2019-03-19  1:06     ` Ming Lei
2019-03-19  3:37       ` James Smart
2019-03-19  3:50         ` Ming Lei
2019-03-19  1:31     ` Ming Lei
2019-03-19  4:04       ` James Smart [this message]
2019-03-19  4:28         ` Ming Lei
2019-03-27  8:30   ` Christoph Hellwig
2019-03-18  3:29 ` [PATCH 2/2] nvme: cancel request synchronously Ming Lei
2019-03-27  8:30   ` Christoph Hellwig
2019-03-27  2:06 ` [PATCH 0/2] blk-mq/nvme: " Ming Lei
