From: Xiang Zheng <zhengxiang9@huawei.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: <bhelgaas@google.com>, <wangxiongfeng2@huawei.com>,
<wanghaibin.wang@huawei.com>, <guoheyi@huawei.com>,
<yebiaoxiang@huawei.com>, <linux-pci@vger.kernel.org>,
<linux-kernel@vger.kernel.org>, <rjw@rjwysocki.net>,
<tglx@linutronix.de>, <guohanjun@huawei.com>,
<yangyingliang@huawei.com>
Subject: Re: [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data
Date: Tue, 29 Oct 2019 11:34:33 +0800 [thread overview]
Message-ID: <14e7d02e-215d-30dc-548c-e605f3ffdf1e@huawei.com> (raw)
In-Reply-To: <20191028163041.GA8257@bombadil.infradead.org>
On 2019/10/29 0:30, Matthew Wilcox wrote:
> On Mon, Oct 28, 2019 at 05:18:09PM +0800, Xiang Zheng wrote:
>> Commit 7ea7e98fd8d0 suggests that holding "pci_lock" is sufficient,
>> and all the callers of pci_wait_cfg() are wrapped with "pci_lock".
>>
>> However, since commit cdcb33f98244 was merged, the accesses to the
>> pci_cfg_wait queue are no longer safe. Holding "pci_lock" alone is
>> insufficient; we need to hold an additional queue lock while
>> reading/writing the wait queue.
>>
>> So let's use add_wait_queue()/remove_wait_queue() instead of
>> __add_wait_queue()/__remove_wait_queue().
>
> As I said earlier, this reintroduces the deadlock addressed by
> cdcb33f9824429a926b971bf041a6cec238f91ff
>
Thanks Matthew, and sorry that I did not understand how the deadlock would
be reintroduced when I sent this patch. If I understand it correctly now,
the deadlock can occur when the following three processes interact:
  *Process*                          *Acquired*       *Wait For*
  wake_up_all()                      wq_head->lock    pi_lock
  snbep_uncore_pci_read_counter()    pi_lock          pci_lock
  pci_wait_cfg()                     pci_lock         wq_head->lock
These three processes form a circular wait on the nested locks. :)
For this problem, what do you think of the solution below?
diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index 2fccb5762c76..09342a74e5ea 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -207,14 +207,14 @@ static noinline void pci_wait_cfg(struct pci_dev *dev)
 {
 	DECLARE_WAITQUEUE(wait, current);
 
-	__add_wait_queue(&pci_cfg_wait, &wait);
 	do {
 		set_current_state(TASK_UNINTERRUPTIBLE);
 		raw_spin_unlock_irq(&pci_lock);
+		add_wait_queue(&pci_cfg_wait, &wait);
 		schedule();
+		remove_wait_queue(&pci_cfg_wait, &wait);
 		raw_spin_lock_irq(&pci_lock);
 	} while (dev->block_cfg_access);
-	__remove_wait_queue(&pci_cfg_wait, &wait);
 }
 
 /* Returns 0 on success, negative values indicate error. */
--
Thanks,
Xiang
Thread overview: 4+ messages
2019-10-28 9:18 [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data Xiang Zheng
2019-10-28 16:30 ` Matthew Wilcox
2019-10-29 3:34 ` Xiang Zheng [this message]
2019-11-08 1:12 ` Xiang Zheng