From: Chao Leng <lengchao@huawei.com>
To: Sagi Grimberg <sagi@grimberg.me>, <linux-nvme@lists.infradead.org>
Cc: <kbusch@kernel.org>, <axboe@fb.com>, <hch@lst.de>
Subject: Re: [PATCH] nvme-fabrics: fix crash for no IO queues
Date: Mon, 8 Mar 2021 09:30:47 +0800
Message-ID: <78c5e9f9-f5b8-b8e5-1c36-3a5803d4b047@huawei.com>
In-Reply-To: <020b9f27-459a-2b98-2e76-ebcc874c9c32@grimberg.me>
On 2021/3/6 4:58, Sagi Grimberg wrote:
>
>> A crash happens when Set Features (NVME_FEAT_NUM_QUEUES) times out
>> during nvme over rdma (RoCE) reconnection; the reason is that a queue
>> which was never allocated gets used.
>>
>> If the queue is not live, queue requests should not be allowed.
>
> Can you describe exactly the scenario here? What is the state
> here? LIVE? or DELETING?
If setting the feature (NVME_FEAT_NUM_QUEUES) fails due to a timeout, or
the target returns 0 I/O queues, nvme_set_queue_count() will return 0,
and the reconnection will continue and succeed. The controller state is
then LIVE. Requests are still delivered through ->queue_rq() to I/O
queues that were never allocated, and then the crash happens.
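
A simplified, self-contained sketch of the flow I mean (the names only
mirror the real kernel functions, this is not the upstream code):

#include <stdio.h>

/*
 * Model of nvme_set_queue_count(): when Set Features times out or the
 * target grants 0 I/O queues, the function reports success (returns 0)
 * and only leaves *count at 0, so the reconnect path keeps going.
 */
static int set_queue_count_model(int *count, int granted, int timed_out)
{
	if (timed_out || granted == 0) {
		*count = 0;	/* degraded: no I/O queues */
		return 0;	/* but not treated as a reconnect failure */
	}
	*count = granted;
	return 0;
}

int main(void)
{
	int nr_io_queues = 8;

	/* Set Features times out during reconnect */
	if (set_queue_count_model(&nr_io_queues, 0, 1) == 0 && nr_io_queues == 0)
		/*
		 * Reconnect continues and the controller goes LIVE with 0
		 * I/O queues; a later ->queue_rq() on a queue that was
		 * never allocated is what crashes.
		 */
		printf("reconnect succeeds with %d I/O queues\n", nr_io_queues);
	return 0;
}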
>
>>
>> Signed-off-by: Chao Leng <lengchao@huawei.com>
>> ---
>> drivers/nvme/host/fabrics.h | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
>> index 733010d2eafd..2479744fc349 100644
>> --- a/drivers/nvme/host/fabrics.h
>> +++ b/drivers/nvme/host/fabrics.h
>> @@ -189,7 +189,7 @@ static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>> {
>> if (likely(ctrl->state == NVME_CTRL_LIVE ||
>> ctrl->state == NVME_CTRL_DELETING))
>> - return true;
>> + return queue_live;
>> return __nvmf_check_ready(ctrl, rq, queue_live);
>> }
>
> There were some issues in the past that made us allow submitting
> requests in the DELETING state and introduce DELETING_NOIO. See
> patch ecca390e8056 ("nvme: fix deadlock in disconnect during scan_work and/or ana_work").
That doesn't make a difference here. When the controller is in the
DELETING state the queue is still live.
>
> The driver should be able to accept I/O in DELETING because the core
> changes the state to DELETING_NOIO _before_ it calls ->delete_ctrl so I
> don't understand how you get to this if the queue is not allocated...
The controller state here is LIVE. The deletion process itself looks fine.
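
To make the effect of the one-line change concrete, here is a
self-contained model of the check (hand-written illustration, not the
driver code; fallback_check_ready() just stands in for the existing
__nvmf_check_ready() path that requeues or fails the request):

#include <stdbool.h>
#include <stdio.h>

enum ctrl_state { CTRL_LIVE, CTRL_DELETING, CTRL_CONNECTING };

/* Stand-in for the existing fallback that requeues or fails the request. */
static bool fallback_check_ready(bool queue_live)
{
	(void)queue_live;
	return false;
}

/* Model of nvmf_check_ready() after the patch. */
static bool check_ready(enum ctrl_state state, bool queue_live)
{
	if (state == CTRL_LIVE || state == CTRL_DELETING)
		return queue_live;	/* was: return true */
	return fallback_check_ready(queue_live);
}

int main(void)
{
	/* LIVE controller, but the I/O queue was never allocated */
	printf("dispatch allowed: %d\n", check_ready(CTRL_LIVE, false)); /* prints 0 */
	return 0;
}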