From: Sagi Grimberg <sagi@grimberg.me>
To: Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>,
Anton Eidelman <anton@lightbitslabs.com>,
linux-nvme <linux-nvme@lists.infradead.org>
Subject: Re: nvme deadlock with ANA
Date: Thu, 2 Apr 2020 08:38:17 -0700 [thread overview]
Message-ID: <b657d8a2-14af-ef3d-6483-fcbbbbfe3897@grimberg.me> (raw)
In-Reply-To: <4ec0c3ba-398d-0922-87f4-4b0a99a79abb@suse.de>
>>>> I want to consult with you guys on a deadlock condition I'm able to
>>>> hit with a test that incorporates controller reconnect, ANA updates
>>>> and live I/O with timeouts.
>>>>
>>>> This is true for NVMe/TCP, but it can also happen in the rdma or pci
>>>> drivers.
>>>>
>>>> The deadlock combines 4 flows in parallel:
>>>> - ns scanning (triggered from reconnect)
>>>> - request timeout
>>>> - ANA update (triggered from reconnect)
>>>> - FS I/O coming into the mpath device
>>>>
>>>> (1) ns scanning triggers disk revalidation -> update disk info ->
>>>> freeze queue -> but blocked, why?
>>>
>>> What does -> but blocked mean?
>>
>> It is blocked and cannot complete, because of (2)
>>
>>>> (2) timeout handler references the q_usage_counter -> but blocks in
>>>> the timeout handler, why?
>>>
>>> The timeout handler obviously needs to keep the queue alive while
>>> running. We could think of doing a try_get, though?
>>
>> It is keeping the queue alive; that is not the issue. It is blocked in
>> the driver .timeout() handler (i.e. nvme_tcp_timeout).
>>
>> The reason it is blocked and cannot make forward progress is that
>> the driver timeout handler calls nvme_stop_queues(), which is
>> blocked because it takes namespaces_rwsem...
>>
>> There is a circular chain of dependencies here that deadlocks.
>
> Can't you simply call 'nvme_reset_ctrl()' ?
> Seems to work reasonably well on the fc side, so I wonder what's
> different for tcp ...
As I mentioned, this is not specific to tcp; the pci timeout handler
can block just as well because it calls nvme_dev_disable, which calls
nvme_stop_queues. The rest of the flows are not related to the
transport.
There is no expectation for the driver to always defer timeout
handling to a different context, but should we make that the rule
for nvme transports at least?
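To illustrate the deferral idea generically: the handler hands the
blocking work to a worker context and returns immediately, so it never
sleeps on the contended lock itself. This is a hedged sketch of the
general pattern in plain Python, not the nvme code; all names below
are made up.

```python
# Sketch: defer blocking work out of a timeout handler's context.
# Instead of taking a lock directly in the handler (which is what
# deadlocks above), queue the work for a worker thread.
import threading
import queue

work = queue.Queue()
namespaces_lock = threading.Lock()  # stands in for namespaces_rwsem

def worker():
    while True:
        fn = work.get()
        if fn is None:          # shutdown sentinel
            break
        fn()

def stop_queues():
    with namespaces_lock:       # may block; safe here, outside the handler
        pass                    # quiesce the queues...

def timeout_handler():
    work.put(stop_queues)       # defer instead of blocking in-line
    return "RESET_TIMER"        # the handler itself never sleeps on the lock

t = threading.Thread(target=worker)
t.start()
print(timeout_handler())        # prints: RESET_TIMER
work.put(None)
t.join()
```

With the blocking call moved off the timeout path, the wait-for cycle
is broken at the "timeout-handler -> namespaces_rwsem" edge.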
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 12+ messages
2020-03-26 6:23 nvme deadlock with ANA Sagi Grimberg
2020-03-26 6:29 ` Sagi Grimberg
2020-04-02 7:09 ` Sagi Grimberg
2020-04-02 15:18 ` Christoph Hellwig
2020-04-02 15:24 ` Sagi Grimberg
2020-04-02 15:30 ` Hannes Reinecke
2020-04-02 15:38 ` Sagi Grimberg [this message]
2020-04-02 17:22 ` James Smart
2020-04-02 16:00 ` Keith Busch
2020-04-02 16:08 ` Sagi Grimberg
2020-04-02 16:12 ` Hannes Reinecke
2020-04-02 16:18 ` Sagi Grimberg