* [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data
@ 2019-10-28 9:18 Xiang Zheng
2019-10-28 16:30 ` Matthew Wilcox
0 siblings, 1 reply; 4+ messages in thread
From: Xiang Zheng @ 2019-10-28 9:18 UTC (permalink / raw)
To: bhelgaas
Cc: zhengxiang9, wangxiongfeng2, wanghaibin.wang, guoheyi,
yebiaoxiang, linux-pci, linux-kernel, willy, rjw, tglx,
guohanjun, yangyingliang
Commit 7ea7e98fd8d0 suggests that holding "pci_lock" is sufficient,
and all callers of pci_wait_cfg() are wrapped with "pci_lock".

However, since commit cdcb33f98244 was merged, accesses to the
pci_cfg_wait queue are no longer safe: "pci_lock" alone is
insufficient, and we need to hold the queue's own lock while reading
or writing the wait queue.

So let's use add_wait_queue()/remove_wait_queue() instead of
__add_wait_queue()/__remove_wait_queue().
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
Cc: Heyi Guo <guoheyi@huawei.com>
Cc: Biaoxiang Ye <yebiaoxiang@huawei.com>
Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com>
---
drivers/pci/access.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index 2fccb5762c76..247bf36e0047 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -207,14 +207,14 @@ static noinline void pci_wait_cfg(struct pci_dev *dev)
{
DECLARE_WAITQUEUE(wait, current);
- __add_wait_queue(&pci_cfg_wait, &wait);
+ add_wait_queue(&pci_cfg_wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
raw_spin_unlock_irq(&pci_lock);
schedule();
raw_spin_lock_irq(&pci_lock);
} while (dev->block_cfg_access);
- __remove_wait_queue(&pci_cfg_wait, &wait);
+ remove_wait_queue(&pci_cfg_wait, &wait);
}
/* Returns 0 on success, negative values indicate error. */
--
2.19.1
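[Editor's note: the only difference between the two APIs in the hunk above is who takes wq_head->lock: __add_wait_queue() assumes the caller already holds it, while add_wait_queue() acquires and releases it internally. A minimal userspace analogue of that contract (pthread-based, illustrative names only, not kernel code):

```c
#include <pthread.h>
#include <stddef.h>

/* Userspace analogue of a kernel wait queue head: a list guarded by a lock. */
struct wq_entry { struct wq_entry *next; };
struct wq_head {
	pthread_mutex_t lock;	/* plays the role of wq_head->lock */
	struct wq_entry *first;
	int nr;
};

static void wq_init(struct wq_head *wq)
{
	pthread_mutex_init(&wq->lock, NULL);
	wq->first = NULL;
	wq->nr = 0;
}

/* __add_wait_queue() analogue: the caller must already hold wq->lock. */
static void __wq_add(struct wq_head *wq, struct wq_entry *e)
{
	e->next = wq->first;
	wq->first = e;
	wq->nr++;
}

/* add_wait_queue() analogue: takes the queue's own lock around the insert,
 * so it is safe even when the caller holds no lock covering the list. */
static void wq_add(struct wq_head *wq, struct wq_entry *e)
{
	pthread_mutex_lock(&wq->lock);
	__wq_add(wq, e);
	pthread_mutex_unlock(&wq->lock);
}

static int wq_count(struct wq_head *wq)
{
	pthread_mutex_lock(&wq->lock);
	int n = wq->nr;
	pthread_mutex_unlock(&wq->lock);
	return n;
}
```

The patch's argument is that "pci_lock" does not protect the pci_cfg_wait list itself, so only the lock-taking variants keep the list consistent.]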
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data
2019-10-28 9:18 [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data Xiang Zheng
@ 2019-10-28 16:30 ` Matthew Wilcox
2019-10-29 3:34 ` Xiang Zheng
0 siblings, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2019-10-28 16:30 UTC (permalink / raw)
To: Xiang Zheng
Cc: bhelgaas, wangxiongfeng2, wanghaibin.wang, guoheyi, yebiaoxiang,
linux-pci, linux-kernel, rjw, tglx, guohanjun, yangyingliang
On Mon, Oct 28, 2019 at 05:18:09PM +0800, Xiang Zheng wrote:
> Commit 7ea7e98fd8d0 suggests that holding "pci_lock" is sufficient,
> and all callers of pci_wait_cfg() are wrapped with "pci_lock".
>
> However, since commit cdcb33f98244 was merged, accesses to the
> pci_cfg_wait queue are no longer safe: "pci_lock" alone is
> insufficient, and we need to hold the queue's own lock while reading
> or writing the wait queue.
>
> So let's use add_wait_queue()/remove_wait_queue() instead of
> __add_wait_queue()/__remove_wait_queue().
As I said earlier, this reintroduces the deadlock addressed by
cdcb33f9824429a926b971bf041a6cec238f91ff
* Re: [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data
2019-10-28 16:30 ` Matthew Wilcox
@ 2019-10-29 3:34 ` Xiang Zheng
2019-11-08 1:12 ` Xiang Zheng
0 siblings, 1 reply; 4+ messages in thread
From: Xiang Zheng @ 2019-10-29 3:34 UTC (permalink / raw)
To: Matthew Wilcox
Cc: bhelgaas, wangxiongfeng2, wanghaibin.wang, guoheyi, yebiaoxiang,
linux-pci, linux-kernel, rjw, tglx, guohanjun, yangyingliang
On 2019/10/29 0:30, Matthew Wilcox wrote:
> On Mon, Oct 28, 2019 at 05:18:09PM +0800, Xiang Zheng wrote:
>> Commit 7ea7e98fd8d0 suggests that holding "pci_lock" is sufficient,
>> and all callers of pci_wait_cfg() are wrapped with "pci_lock".
>>
>> However, since commit cdcb33f98244 was merged, accesses to the
>> pci_cfg_wait queue are no longer safe: "pci_lock" alone is
>> insufficient, and we need to hold the queue's own lock while reading
>> or writing the wait queue.
>>
>> So let's use add_wait_queue()/remove_wait_queue() instead of
>> __add_wait_queue()/__remove_wait_queue().
>
> As I said earlier, this reintroduces the deadlock addressed by
> cdcb33f9824429a926b971bf041a6cec238f91ff
>
Thanks Matthew; sorry, I did not understand how this patch reintroduces
the deadlock when I sent it. If my understanding is right, the possible
deadlock involves three processes:

*Process*                        *Acquired*     *Waits for*
wake_up_all()                    wq_head->lock  pi_lock
snbep_uncore_pci_read_counter()  pi_lock        pci_lock
pci_wait_cfg()                   pci_lock       wq_head->lock

These nested lock acquisitions form a circular wait. :)
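[Editor's note: the circular wait in the table can be checked mechanically by treating each "holds A while waiting for B" row as an edge in a lock-order graph, which is roughly what lockdep does. A toy userspace sketch with made-up names, not part of the patch, just to illustrate the cycle:

```c
#include <string.h>

/* Toy lock-order graph (illustrative userspace code, not kernel code).
 * Locks: 0 = wq_head->lock, 1 = pi_lock, 2 = pci_lock.
 * edge[a][b] != 0 means: some process holds lock a while waiting for b. */
enum { NLOCKS = 3 };

/* Depth-first search: can we reach "to" starting from "from"? */
static int reachable(int edge[NLOCKS][NLOCKS], int from, int to, int *visited)
{
	if (from == to)
		return 1;
	if (visited[from]++)
		return 0;
	for (int next = 0; next < NLOCKS; next++)
		if (edge[from][next] && reachable(edge, next, to, visited))
			return 1;
	return 0;
}

/* A deadlock is possible iff the lock-order graph contains a cycle. */
static int has_cycle(int edge[NLOCKS][NLOCKS])
{
	for (int start = 0; start < NLOCKS; start++) {
		int visited[NLOCKS];

		memset(visited, 0, sizeof(visited));
		for (int next = 0; next < NLOCKS; next++)
			if (edge[start][next] &&
			    reachable(edge, next, start, visited))
				return 1;
	}
	return 0;
}
```

Removing any one edge, e.g. by no longer holding pci_lock while taking wq_head->lock, leaves the graph acyclic.]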
But for this problem, what do you think about the solution below:
diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index 2fccb5762c76..09342a74e5ea 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -207,14 +207,14 @@ static noinline void pci_wait_cfg(struct pci_dev *dev)
{
DECLARE_WAITQUEUE(wait, current);
- __add_wait_queue(&pci_cfg_wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
raw_spin_unlock_irq(&pci_lock);
+ add_wait_queue(&pci_cfg_wait, &wait);
schedule();
+ remove_wait_queue(&pci_cfg_wait, &wait);
raw_spin_lock_irq(&pci_lock);
} while (dev->block_cfg_access);
- __remove_wait_queue(&pci_cfg_wait, &wait);
}
/* Returns 0 on success, negative values indicate error. */
--
Thanks,
Xiang
* Re: [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data
2019-10-29 3:34 ` Xiang Zheng
@ 2019-11-08 1:12 ` Xiang Zheng
0 siblings, 0 replies; 4+ messages in thread
From: Xiang Zheng @ 2019-11-08 1:12 UTC (permalink / raw)
To: Matthew Wilcox
Cc: bhelgaas, wangxiongfeng2, wanghaibin.wang, guoheyi, yebiaoxiang,
linux-pci, linux-kernel, rjw, tglx, guohanjun, yangyingliang
Ping...
On 2019/10/29 11:34, Xiang Zheng wrote:
>
>
> On 2019/10/29 0:30, Matthew Wilcox wrote:
>> On Mon, Oct 28, 2019 at 05:18:09PM +0800, Xiang Zheng wrote:
>>> Commit 7ea7e98fd8d0 suggests that holding "pci_lock" is sufficient,
>>> and all callers of pci_wait_cfg() are wrapped with "pci_lock".
>>>
>>> However, since commit cdcb33f98244 was merged, accesses to the
>>> pci_cfg_wait queue are no longer safe: "pci_lock" alone is
>>> insufficient, and we need to hold the queue's own lock while reading
>>> or writing the wait queue.
>>>
>>> So let's use add_wait_queue()/remove_wait_queue() instead of
>>> __add_wait_queue()/__remove_wait_queue().
>>
>> As I said earlier, this reintroduces the deadlock addressed by
>> cdcb33f9824429a926b971bf041a6cec238f91ff
>>
>
> Thanks Matthew; sorry, I did not understand how this patch reintroduces
> the deadlock when I sent it. If my understanding is right, the possible
> deadlock involves three processes:
>
> *Process*                        *Acquired*     *Waits for*
> wake_up_all()                    wq_head->lock  pi_lock
> snbep_uncore_pci_read_counter()  pi_lock        pci_lock
> pci_wait_cfg()                   pci_lock       wq_head->lock
>
> These nested lock acquisitions form a circular wait. :)
>
> But for this problem, what do you think about the solution below:
>
> diff --git a/drivers/pci/access.c b/drivers/pci/access.c
> index 2fccb5762c76..09342a74e5ea 100644
> --- a/drivers/pci/access.c
> +++ b/drivers/pci/access.c
> @@ -207,14 +207,14 @@ static noinline void pci_wait_cfg(struct pci_dev *dev)
> {
> DECLARE_WAITQUEUE(wait, current);
>
> - __add_wait_queue(&pci_cfg_wait, &wait);
> do {
> set_current_state(TASK_UNINTERRUPTIBLE);
> raw_spin_unlock_irq(&pci_lock);
> + add_wait_queue(&pci_cfg_wait, &wait);
> schedule();
> + remove_wait_queue(&pci_cfg_wait, &wait);
> raw_spin_lock_irq(&pci_lock);
> } while (dev->block_cfg_access);
> - __remove_wait_queue(&pci_cfg_wait, &wait);
> }
>
> /* Returns 0 on success, negative values indicate error. */
>
>
>
>> .
>>
>
--
Thanks,
Xiang
end of thread, other threads:[~2019-11-08 1:12 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
2019-10-28 9:18 [PATCH] pci: lock the pci_cfg_wait queue for the consistency of data Xiang Zheng
2019-10-28 16:30 ` Matthew Wilcox
2019-10-29 3:34 ` Xiang Zheng
2019-11-08 1:12 ` Xiang Zheng