From: Bart Van Assche <bvanassche@acm.org>
To: "Singh, Balbir" <sblbir@amazon.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"sblbir@amzn.com" <sblbir@amzn.com>
Cc: "kbusch@kernel.org" <kbusch@kernel.org>,
	"axboe@fb.com" <axboe@fb.com>, "hch@lst.de" <hch@lst.de>,
	"sagi@grimberg.me" <sagi@grimberg.me>
Subject: Re: [PATCH v2 1/2] nvme/host/pci: Fix a race in controller removal
Date: Mon, 16 Sep 2019 12:56:43 -0700	[thread overview]
Message-ID: <14becaec-2284-d680-b3b2-c38537c91521@acm.org> (raw)
In-Reply-To: <25d9badc90a1eb951cb5103774e8360edaa8ec15.camel@amazon.com>

On 9/16/19 12:38 PM, Singh, Balbir wrote:
> On Mon, 2019-09-16 at 08:40 -0700, Bart Van Assche wrote:
>> On 9/13/19 4:36 PM, Balbir Singh wrote:
>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>> index b45f82d58be8..f6ddb58a7013 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -103,10 +103,16 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
>>>    	 */
>>>    	if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
>>>    		return;
>>> -	revalidate_disk(ns->disk);
>>>    	blk_set_queue_dying(ns->queue);
>>>    	/* Forcibly unquiesce queues to avoid blocking dispatch */
>>>    	blk_mq_unquiesce_queue(ns->queue);
>>> +	/*
>>> +	 * Revalidate the disk after all pending IO is cleaned up by
>>> +	 * blk_set_queue_dying(); this largely avoids races with block
>>> +	 * partition reads that might come in after freezing the queues,
>>> +	 * otherwise we'll end up waiting on bd_mutex, creating a deadlock.
>>> +	 */
>>> +	revalidate_disk(ns->disk);
>>>    }
>>
>> The comment above revalidate_disk() looks wrong to me. I don't think
>> that blk_set_queue_dying() guarantees that ongoing commands have
>> finished by the time that function returns. All blk_set_queue_dying()
>> does is to set the DYING flag, to kill q->q_usage_counter and to wake
>> up threads that are waiting inside a request allocation function. It
>> does not wait for pending commands to finish.
> 
> I was referring to the combined effect of blk_set_queue_dying() and
> blk_mq_unquiesce_queue() which should invoke blk_mq_run_hw_queues().
> I can see how that might be misleading. I can reword it to say
> 
> /*
>   * Revalidate the disk after all pending IO is cleaned up.
>   * This largely avoids any races with block partition
>   * reads that might come in after freezing the queues; otherwise
>   * we'll end up waiting on bd_mutex, creating a deadlock.
>   */
> 
> Would that work?

I don't think so. Running the hardware queues is not sufficient to
guarantee that requests that had been started before the DYING flag was
set have finished.

Bart.
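
For readers following the exchange above, here is a rough sketch of the
block layer calls under discussion. It paraphrases my reading of the
block layer around v5.3 rather than the patch itself; the symbols called
are real block layer functions, but their effects are only summarized in
the comments, and the wrapper function name is made up for illustration.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Hypothetical wrapper, for illustration only. */
static void sketch_tear_down_queue(struct request_queue *q)
{
	/*
	 * Sets QUEUE_FLAG_DYING, starts a freeze of q->q_usage_counter
	 * and wakes tasks sleeping in request allocation. It returns
	 * immediately; requests that entered the queue before the flag
	 * was set may still be in flight.
	 */
	blk_set_queue_dying(q);

	/*
	 * Clears QUEUE_FLAG_QUIESCED and runs the hardware queues so
	 * anything stuck in dispatch can make progress. Running the hw
	 * queues pushes requests to the driver; it does not wait for
	 * them to complete either.
	 */
	blk_mq_unquiesce_queue(q);

	/*
	 * Only an explicit freeze wait blocks until every outstanding
	 * request has finished, i.e. until q->q_usage_counter drops to
	 * zero.
	 */
	blk_mq_freeze_queue_wait(q);
}

In other words, moving revalidate_disk() only changes when it runs
relative to the DYING flag being set; none of the calls above guarantee
by themselves that earlier I/O has completed, which is the point being
made in this reply.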


Thread overview: 23+ messages
2019-09-13 23:36 [PATCH v2 1/2] nvme/host/pci: Fix a race in controller removal Balbir Singh
2019-09-13 23:36 ` [PATCH v2 2/2] nvme/host/core: Allow overriding of wait_ready timeout Balbir Singh
2019-09-16  7:41   ` Christoph Hellwig
2019-09-16 12:33     ` Singh, Balbir
2019-09-16 16:01       ` hch
2019-09-16 21:04         ` Singh, Balbir
2019-09-17  1:14           ` Keith Busch
2019-09-17  2:56             ` Singh, Balbir
2019-09-17  3:17               ` Bart Van Assche
2019-09-17  5:02                 ` Singh, Balbir
2019-09-17 17:21                 ` James Smart
2019-09-17 20:08                   ` James Smart
2019-09-17  3:54               ` Keith Busch
2019-09-16  7:49 ` [PATCH v2 1/2] nvme/host/pci: Fix a race in controller removal Christoph Hellwig
2019-09-16 12:07   ` Singh, Balbir
2019-09-16 15:40 ` Bart Van Assche
2019-09-16 19:38   ` Singh, Balbir
2019-09-16 19:56     ` Bart Van Assche [this message]
2019-09-16 20:40       ` Singh, Balbir
2019-09-17 17:55         ` Bart Van Assche
2019-09-17 20:30           ` Keith Busch
2019-09-17 20:44           ` Singh, Balbir
2019-09-16 20:07     ` Keith Busch
