From: "jianchao.wang" <jianchao.w.wang@oracle.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org,
	Keith Busch <keith.busch@intel.com>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH 1/2] nvme: pci: simplify timeout handling
Date: Fri, 27 Apr 2018 09:37:06 +0800	[thread overview]
Message-ID: <325688af-3ae2-49db-3a59-ef3903adcdf6@oracle.com> (raw)
In-Reply-To: <20180426155722.GA3597@ming.t460p>



On 04/26/2018 11:57 PM, Ming Lei wrote:
> Hi Jianchao,
> 
> On Thu, Apr 26, 2018 at 11:07:56PM +0800, jianchao.wang wrote:
>> Hi Ming
>>
>> Thanks for your wonderful solution. :)
>>
>> On 04/26/2018 08:39 PM, Ming Lei wrote:
>>> +/*
>>> + * This one is called after queues are quiesced, with no in-flight timeout
>>> + * or nvme interrupt handling.
>>> + */
>>> +static void nvme_pci_cancel_request(struct request *req, void *data,
>>> +		bool reserved)
>>> +{
>>> +	/* make sure timed-out requests are covered too */
>>> +	if (req->rq_flags & RQF_MQ_TIMEOUT_EXPIRED) {
>>> +		req->aborted_gstate = 0;
>>> +		req->rq_flags &= ~RQF_MQ_TIMEOUT_EXPIRED;
>>> +	}
>>> +
>>> +	nvme_cancel_request(req, data, reserved);
>>> +}
>>> +
>>>  static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>>>  {
>>>  	int i;
>>> @@ -2223,10 +2316,17 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
>>>  	for (i = dev->ctrl.queue_count - 1; i >= 0; i--)
>>>  		nvme_suspend_queue(&dev->queues[i]);
>>>  
>>> +	/*
>>> +	 * It is safe to sync timeouts after queues are quiesced; then all
>>> +	 * requests (including the timed-out ones) will be canceled.
>>> +	 */
>>> +	nvme_sync_queues(&dev->ctrl);
>>> +	blk_sync_queue(dev->ctrl.admin_q);
>>> +
>> Looks like blk_sync_queue cannot drain all the timeout work.
>>
>> blk_sync_queue
>>   -> del_timer_sync          
>>                           blk_mq_timeout_work
>>                             -> mod_timer
>>   -> cancel_work_sync
>> The timeout work may come back again.
>> We may need to force all the in-flight requests to be timed out with blk_abort_request.
>>
> 
> blk_abort_request() seems like overkill; we could avoid this race simply by
> returning EH_NOT_HANDLED if the controller is in recovery.
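For context, here is a minimal sketch of how I read that idea (not the actual patch; the function layout and the exact set of states checked are my assumptions, using the 4.17-era nvme-pci names):

/*
 * Hypothetical sketch only: while the controller is being recovered,
 * do not complete the request from the timeout handler; let
 * nvme_dev_disable() -> nvme_pci_cancel_request() take care of it.
 */
static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
	struct nvme_dev *dev = iod->nvmeq->dev;

	switch (dev->ctrl.state) {
	case NVME_CTRL_RESETTING:
	case NVME_CTRL_CONNECTING:
		/* recovery owns this request now */
		return BLK_EH_NOT_HANDLED;
	default:
		break;
	}

	/* ... existing abort / reset-controller logic ... */
	return BLK_EH_RESET_TIMER;
}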
Returning EH_NOT_HANDLED may not be enough.
Please consider the following scenario.

                                      nvme_error_handler
                                        -> nvme_dev_disable
                                          -> blk_sync_queue
//timeout comes again due to the
//scenario above
blk_mq_timeout_work                    
  -> blk_mq_check_expired               
    -> set aborted_gstate
                                          -> nvme_pci_cancel_request
                                            -> RQF_MQ_TIMEOUT_EXPIRED has not been set
                                            -> nvme_cancel_request
                                              -> blk_mq_complete_request
                                                -> do nothing
  -> blk_mq_terminate_expired
    -> blk_mq_rq_timed_out
      -> set RQF_MQ_TIMEOUT_EXPIRED
      -> .timeout return BLK_EH_NOT_HANDLED

Then the timeout request is leaked.
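For reference, the "do nothing" step above comes from the aborted_gstate check in
blk_mq_complete_request(); simplified from my reading of the 4.17-era block core
(the comments below are mine):

void blk_mq_complete_request(struct request *rq)
{
	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu);
	int srcu_idx;

	if (unlikely(blk_should_fake_timeout(rq->q)))
		return;

	/*
	 * If the timeout path has already recorded this request's gstate
	 * in aborted_gstate, the timeout machinery has claimed the request
	 * and this completion is silently dropped -- the "do nothing"
	 * step in the scenario above.
	 */
	hctx_lock(hctx, &srcu_idx);
	if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
		__blk_mq_complete_request(rq);
	hctx_unlock(hctx, srcu_idx);
}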

> 
>>>  	nvme_pci_disable(dev);
>>
>> The interrupt will not come, but there may still be one running.
>> A synchronize_sched() here?
> 
> We may cover this case by moving nvme_suspend_queue() before
> nvme_stop_queues().
> 
> Both two are very good catch, thanks!
> 
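
For reference, the reordering suggested above might look roughly like this in
nvme_dev_disable() (a sketch only, with the shutdown/CSTS handling omitted;
nvme_sync_queues() and nvme_pci_cancel_request() are the helpers introduced by
this patch). The point is that nvme_suspend_queue() frees the per-queue IRQ,
and freeing an IRQ waits for any handler that is still running, so doing it
before quiescing and canceling should close the window for a still-running
interrupt handler:

static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
{
	int i;

	mutex_lock(&dev->shutdown_lock);

	/* free per-queue vectors; pci_free_irq() waits for running handlers */
	for (i = dev->ctrl.queue_count - 1; i >= 0; i--)
		nvme_suspend_queue(&dev->queues[i]);

	/* now quiesce the queues and sync the timeout work */
	nvme_stop_queues(&dev->ctrl);
	nvme_sync_queues(&dev->ctrl);
	blk_sync_queue(dev->ctrl.admin_q);

	nvme_pci_disable(dev);

	/* cancel whatever is still outstanding */
	blk_mq_tagset_busy_iter(&dev->tagset, nvme_pci_cancel_request, &dev->ctrl);
	blk_mq_tagset_busy_iter(&dev->admin_tagset, nvme_pci_cancel_request, &dev->ctrl);

	mutex_unlock(&dev->shutdown_lock);
}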
