From: Keith Busch <kbusch@kernel.org>
To: Balbir Singh <sblbir@amzn.com>
Cc: axboe@fb.com, hch@lst.de, linux-nvme@lists.infradead.org,
	sagi@grimberg.me
Subject: Re: [PATCH v3 1/2] nvme/host/pci: Fix a race in controller removal
Date: Tue, 17 Sep 2019 17:45:33 -0600
Message-ID: <20190917234531.GA17026@keith-busch>
In-Reply-To: <20190917224105.6758-1-sblbir@amzn.com>

On Tue, Sep 17, 2019 at 10:41:04PM +0000, Balbir Singh wrote:
> This race is hard to hit in general, now that we
> have the shutdown_lock in both nvme_reset_work() and
> nvme_dev_disable().
> 
> The real issue is that after doing all the setup work
> in nvme_reset_work(), if we get another timeout
> (nvme_timeout()), we proceed to disable the controller.
> This causes the reset work to only partially progress
> and then fail.
> 
> Depending on the progress made, we call into
> nvme_remove_dead_ctrl(), which does another
> nvme_dev_disable() freezing the block-mq queues.
> 
> I've noticed a race with udevd trying to re-read the
> partition table: udevd ends up holding bd_mutex and
> then waiting in blk_queue_enter(), since we froze the
> queues in nvme_dev_disable(). nvme_kill_queues() then
> calls revalidate_disk(), which waits on the same
> bd_mutex, resulting in a deadlock (sketched below).
> 
> Allow the hung tasks a chance to make progress by
> unfreezing the queues after setting the dying bit on
> the queue, and only then call revalidate_disk() to
> update the disk size.
> 
> NOTE: I've seen this race when the controller does not
> respond to IOs or abort requests, but responds to other
> commands and even signals it's ready after its reset,
> but still drops IO. I've tested this by emulating the
> behaviour in the driver.
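
A rough picture of the two blocked paths, as I read the description
above (not a literal call trace):

  udevd (re-reading partitions)       nvme_set_queue_dying()
  -----------------------------       ----------------------
  holds bd_mutex                      revalidate_disk(ns->disk)
  submits I/O                           wants bd_mutex
    blk_queue_enter()                   -> sleeps; udevd holds it
    -> sleeps; queue frozen by
       nvme_dev_disable() and not
       yet marked dying

Neither side can make progress until blk_set_queue_dying() runs and
fails the blk_queue_enter() waiters, which is why moving
revalidate_disk() after it resolves the hang.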

I recommend the following changelog for this commit:

  User space programs like udevd may try to read partitions at the
  same time the driver detects a namespace is unusable, and may deadlock
  if revalidate_disk() is called while such a process is waiting to
  enter the frozen queue. On detecting a dead namespace, move the disk
  revalidation after unblocking dispatchers that may be holding bd_mutex.

And that's it.
 
> Signed-off-by: Balbir Singh <sblbir@amzn.com>
> ---
> 
> Changelog v3
>   1. Simplify the comment about moving revalidate_disk
> Changelog v2
>   1. Rely on blk_set_queue_dying to do the wake_all()
> 
>  drivers/nvme/host/core.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index b45f82d58be8..6ad1f1df9e44 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -103,10 +103,13 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
>  	 */
>  	if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
>  		return;
> -	revalidate_disk(ns->disk);
>  	blk_set_queue_dying(ns->queue);
>  	/* Forcibly unquiesce queues to avoid blocking dispatch */
>  	blk_mq_unquiesce_queue(ns->queue);
> +	/*
> +	 * Revalidate after unblocking dispatchers that may be holding bd_mutex
> +	 */
> +	revalidate_disk(ns->disk);
>  }
>  
>  static void nvme_queue_scan(struct nvme_ctrl *ctrl)
> -- 
> 2.16.5
> 
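
For readability, this is how nvme_set_queue_dying() reads with the
hunk applied (the comment elided by the diff context is not
reproduced, and the note above blk_set_queue_dying() is my own
annotation, not part of the patch):

  static void nvme_set_queue_dying(struct nvme_ns *ns)
  {
          /* ... leading comment elided by the diff context ... */
          if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
                  return;
          /* Mark the queue dying first so blk_queue_enter() waiters fail out */
          blk_set_queue_dying(ns->queue);
          /* Forcibly unquiesce queues to avoid blocking dispatch */
          blk_mq_unquiesce_queue(ns->queue);
          /*
           * Revalidate after unblocking dispatchers that may be holding
           * bd_mutex.
           */
          revalidate_disk(ns->disk);
  }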
