From: "jianchao.wang" <jianchao.w.wang@oracle.com>
To: Keith Busch <kbusch@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>,
	"Busch, Keith" <keith.busch@intel.com>,
	James Smart <jsmart2021@gmail.com>,
	Bart Van Assche <bvanassche@acm.org>,
	Ming Lei <tom.leiming@gmail.com>,
	Josef Bacik <josef@toxicpanda.com>,
	linux-nvme <linux-nvme@lists.infradead.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-block <linux-block@vger.kernel.org>,
	Hannes Reinecke <hare@suse.de>,
	Johannes Thumshirn <jthumshirn@suse.de>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH V2 7/8] nvme: use blk_mq_queue_tag_inflight_iter
Date: Wed, 27 Mar 2019 10:27:57 +0800	[thread overview]
Message-ID: <1bbe1b5c-3564-55e8-6824-f679b3c5dd3f@oracle.com> (raw)
In-Reply-To: <20190327021521.GA7389@localhost.localdomain>



On 3/27/19 10:15 AM, Keith Busch wrote:
> On Wed, Mar 27, 2019 at 10:03:26AM +0800, jianchao.wang wrote:
>> Hi Keith
>>
>> On 3/27/19 7:57 AM, Keith Busch wrote:
>>> On Mon, Mar 25, 2019 at 08:05:53PM -0700, jianchao.wang wrote:
>>>> What if there used to be an I/O scheduler that left some stale requests in the sched tags?
>>>> Or nr_hw_queues was decreased, leaving a stale hctx->fq->flush_rq?
>>>
>>> Requests internally queued in scheduler or block layer are not eligible
>>> for the nvme driver's iterator callback. We only use it to reclaim
>>> dispatched requests that the target can't return, which only applies to
>>> requests that must have a valid rq->tag value from hctx->tags.
>>>  
>>>> The stale request could be something freed and reused
>>>> by others, and the state field happens to be overwritten to non-zero...
>>>
>>> I am not sure I follow what this means. At least for nvme, every queue
>>> sharing the same tagset is quiesced and frozen, there should be no
>>> request state in flux at the time we iterate.
>>>
>>
>> In nvme_dev_disable, when we try to reclaim the in-flight requests with blk_mq_tagset_busy_iter,
>> the request_queues are quiesced but only start-frozen.
>> We only _drain_ the in-flight requests in the _shutdown_ case, when the controller is not dead.
>> In the reset case, someone could still escape the queue-freeze check, enter
>> blk_mq_make_request, and try to allocate a tag; then we may get:
>>
>> generic_make_request        nvme_dev_disable
>>  -> blk_queue_enter              
>>                               -> nvme_start_freeze (just start freeze, no drain)
>>                               -> nvme_stop_queues
>>  -> blk_mq_make_request
>>   -> blk_mq_get_request      -> blk_mq_tagset_busy_iter
>>      -> blk_mq_get_tag
>>                                 -> bt_tags_for_each
>>                                    -> bt_tags_iter
>>                                        -> rq = tags->rqs[] ---> [1]
>>      -> blk_mq_rq_ctx_init
>>        -> data->hctx->tags->rqs[rq->tag] = rq;
>>
>> The rq fetched at position [1] could be a stale request that was freed due to:
>> 1. the hctx->fq.flush_rq of a dead request_queue that shares the same tagset
>> 2. a removed I/O scheduler's sched request
>>
>> This stale request may have been reused by others so that request->state was changed to a non-zero
>> value; it then passes the blk_mq_request_started check and is handled by nvme_cancel_request.
> 
> How is that request state going to be anything other than IDLE? A freed
> request state is IDLE, and continues to be IDLE until dispatched. But
> dispatch is blocked for the entire tagset, so request states can't be
> started during an nvme reset.
> 

As in the comment above, the stale request may be something that was freed in one of the following cases:
1. the hctx->fq.flush_rq of a dead request_queue that shares the same tagset
2. a removed I/O scheduler's sched request
This freed request could then be allocated by others, which may change request->state.

Thanks
Jianchao


