linux-nvme.lists.infradead.org archive mirror
From: Sagi Grimberg <sagi@grimberg.me>
To: Hannes Reinecke <hare@suse.de>,
	Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Cc: Damien Le Moal <Damien.LeMoal@wdc.com>
Subject: Re: bad unlock balance WARNING at nvme/045
Date: Wed, 26 Oct 2022 15:27:41 +0300	[thread overview]
Message-ID: <d2e62abe-56a1-b29f-ca3c-03e07dd8da31@grimberg.me> (raw)
In-Reply-To: <853f0eb7-1f0f-eefe-fd32-0b1a1a98d3d7@suse.de>



On 10/26/22 09:42, Hannes Reinecke wrote:
> On 10/18/22 12:57, Sagi Grimberg wrote:
>>
>>> Hello Hannes,
>>>
>>> I observed "WARNING: bad unlock balance detected!" at nvme/045 [1]. 
>>> As the Call
>>> Trace shows, nvme_auth_reset() has unbalanced mutex lock/unlock.
>>>
>>>     mutex_lock(&ctrl->dhchap_auth_mutex);
>>>     list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
>>>         mutex_unlock(&ctrl->dhchap_auth_mutex);
>>>         flush_work(&chap->auth_work);
>>>         __nvme_auth_reset(chap);
>>>     }
>>>     mutex_unlock(&ctrl->dhchap_auth_mutex);
>>>
>>> I tried to remove the mutex_unlock in the list iteration with a patch 
>>> [2], but
>>> it resulted in another "WARNING: possible recursive locking detected" 
>>> [3]. I'm not sure, but the cause of this WARN could be that
>>> __nvme_auth_work and nvme_dhchap_auth_work run in the same nvme_wq.
>>>
>>> Could you take a look and fix it?
>>
>> I'm looking at the code, and I think the way concurrent negotiations
>> are handled, and how dhchap_auth_mutex is used, is very fragile.
>> Also, why should the per-queue auth_work hold the controller-wide
>> dhchap_auth_mutex? The only reason I see is that nvme_auth_negotiate
>> checks whether the chap context is already queued. Why should we
>> allow that?
>>
> Well, that's partially due to the internal design of linux-nvme.
> The controller structure itself doesn't have 'queues' per se; there just 
> is a general 'ctrl' structure. So while I would have loved to have a 
> per-queue structure to hook the chap authentication into, all I have is 
> the controller structure.
> Hence we have a controller-wide list holding all 'chap' structures for 
> the individual queues.
> Hence the controller-wide mutex to gate list modifications.

This could still easily be an array, which would avoid the list
traversal entirely.

> 
>> I'd suggest splicing dhchap_auth_list to a local list and then just
>> flushing nvme_wq in the teardown flows. Same for renegotiation/reset
>> flows. And we should prevent the double-queuing of chap negotiations
>> to begin with, instead of handling it (I still don't understand why
>> this is permitted, but perhaps just return EBUSY in this case?)
> 
> We don't double queue; we're re-using the existing entries.

I'd argue that you shouldn't. It'd be better to simply delete entries
once queue auth is done...

> Can you check if this fix works?
> 
> diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
> index c8a6db7c4498..4e824aab30eb 100644
> --- a/drivers/nvme/host/auth.c
> +++ b/drivers/nvme/host/auth.c
> @@ -926,7 +926,6 @@ void nvme_auth_reset(struct nvme_ctrl *ctrl)
> 
>          mutex_lock(&ctrl->dhchap_auth_mutex);
>          list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
> -               mutex_unlock(&ctrl->dhchap_auth_mutex);
>                  flush_work(&chap->auth_work);
>                  __nvme_auth_reset(chap);
>          }

How can this work? auth_work currently acquires the mutex. This
shouldn't solve anything.



Thread overview: 8+ messages
2022-10-18  8:03 bad unlock balance WARNING at nvme/045 Shinichiro Kawasaki
2022-10-18 10:57 ` Sagi Grimberg
2022-10-26  2:13   ` Shinichiro Kawasaki
2022-10-26  6:42   ` Hannes Reinecke
2022-10-26 12:01     ` Shinichiro Kawasaki
2022-10-26 12:38       ` Sagi Grimberg
2022-10-28 13:52         ` Hannes Reinecke
2022-10-26 12:27     ` Sagi Grimberg [this message]
