linux-kernel.vger.kernel.org archive mirror
* [PATCH -next] driver core: fix deadlock in __driver_attach
@ 2022-06-08  9:43 Zhang Wensheng
  2022-06-10 13:49 ` Greg KH
  2022-06-16  7:11 ` zhangwensheng (E)
  0 siblings, 2 replies; 5+ messages in thread
From: Zhang Wensheng @ 2022-06-08  9:43 UTC (permalink / raw)
  To: gregkh, rafael; +Cc: linux-kernel, yukuai3, zhangwensheng5

In the __driver_attach() function there is the same potential AA deadlock
problem as the one fixed by commit b232b02bf3c2 ("driver core: fix deadlock
in __device_attach").

Fixes: ef0ff68351be ("driver core: Probe devices asynchronously instead of the driver")
Signed-off-by: Zhang Wensheng <zhangwensheng5@huawei.com>
---
 drivers/base/dd.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index 11b0fb6414d3..b766968a873c 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -1115,6 +1115,7 @@ static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
 static int __driver_attach(struct device *dev, void *data)
 {
 	struct device_driver *drv = data;
+	bool async = false;
 	int ret;
 
 	/*
@@ -1153,9 +1154,11 @@ static int __driver_attach(struct device *dev, void *data)
 		if (!dev->driver && !dev->p->async_driver) {
 			get_device(dev);
 			dev->p->async_driver = drv;
-			async_schedule_dev(__driver_attach_async_helper, dev);
+			async = true;
 		}
 		device_unlock(dev);
+		if (async)
+			async_schedule_dev(__driver_attach_async_helper, dev);
 		return 0;
 	}
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH -next] driver core: fix deadlock in __driver_attach
  2022-06-08  9:43 [PATCH -next] driver core: fix deadlock in __driver_attach Zhang Wensheng
@ 2022-06-10 13:49 ` Greg KH
  2022-06-16  8:00   ` zhangwensheng (E)
  2022-06-16  7:11 ` zhangwensheng (E)
  1 sibling, 1 reply; 5+ messages in thread
From: Greg KH @ 2022-06-10 13:49 UTC (permalink / raw)
  To: Zhang Wensheng; +Cc: rafael, linux-kernel, yukuai3

On Wed, Jun 08, 2022 at 05:43:55PM +0800, Zhang Wensheng wrote:
> In the __driver_attach() function there is the same potential AA deadlock
> problem as the one fixed by commit b232b02bf3c2 ("driver core: fix deadlock
> in __device_attach").

Potential, but real?

And the codepaths for drivers being added are much different from those
for devices, so please provide the full information like you did in the
other commit.

Also, have you triggered this problem successfully and proven that this
change fixes the issue?

thanks,

greg k-h


* Re: [PATCH -next] driver core: fix deadlock in __driver_attach
  2022-06-08  9:43 [PATCH -next] driver core: fix deadlock in __driver_attach Zhang Wensheng
  2022-06-10 13:49 ` Greg KH
@ 2022-06-16  7:11 ` zhangwensheng (E)
  1 sibling, 0 replies; 5+ messages in thread
From: zhangwensheng (E) @ 2022-06-16  7:11 UTC (permalink / raw)
  To: gregkh, rafael; +Cc: linux-kernel, yukuai3

friendly ping...

On 2022/6/8 17:43, Zhang Wensheng wrote:
> In the __driver_attach() function there is the same potential AA deadlock
> problem as the one fixed by commit b232b02bf3c2 ("driver core: fix deadlock
> in __device_attach").
>
> Fixes: ef0ff68351be ("driver core: Probe devices asynchronously instead of the driver")
> Signed-off-by: Zhang Wensheng <zhangwensheng5@huawei.com>
> ---
>   drivers/base/dd.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/base/dd.c b/drivers/base/dd.c
> index 11b0fb6414d3..b766968a873c 100644
> --- a/drivers/base/dd.c
> +++ b/drivers/base/dd.c
> @@ -1115,6 +1115,7 @@ static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
>   static int __driver_attach(struct device *dev, void *data)
>   {
>   	struct device_driver *drv = data;
> +	bool async = false;
>   	int ret;
>   
>   	/*
> @@ -1153,9 +1154,11 @@ static int __driver_attach(struct device *dev, void *data)
>   		if (!dev->driver && !dev->p->async_driver) {
>   			get_device(dev);
>   			dev->p->async_driver = drv;
> -			async_schedule_dev(__driver_attach_async_helper, dev);
> +			async = true;
>   		}
>   		device_unlock(dev);
> +		if (async)
> +			async_schedule_dev(__driver_attach_async_helper, dev);
>   		return 0;
>   	}
>   


* Re: [PATCH -next] driver core: fix deadlock in __driver_attach
  2022-06-10 13:49 ` Greg KH
@ 2022-06-16  8:00   ` zhangwensheng (E)
  2022-06-21 19:34     ` Greg KH
  0 siblings, 1 reply; 5+ messages in thread
From: zhangwensheng (E) @ 2022-06-16  8:00 UTC (permalink / raw)
  To: Greg KH; +Cc: rafael, linux-kernel, yukuai3

Sorry, I did not see your reply earlier.
The deadlock is real, not just potential: I have triggered it
successfully and verified that this change fixes it.

The call stack is the same as in commit b232b02bf3c2 ("driver core: fix
deadlock in __device_attach"); details below:
     In the __driver_attach() function, the lock-holding logic is as follows:
     ...
     __driver_attach
     if (driver_allows_async_probing(drv))
       device_lock(dev)      // take the device lock
         async_schedule_dev(__driver_attach_async_helper, dev); // func
           async_schedule_node
             async_schedule_node_domain(func)
               entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
               /* when the allocation fails or the work limit is hit,
                  func is executed synchronously, but
                  __driver_attach_async_helper takes the device lock as
                  well, which leads to an A-A deadlock.  */
               if (!entry || atomic_read(&entry_count) > MAX_WORK)
                 func;
               else
                 queue_work_node(node, system_unbound_wq, &entry->work)
       device_unlock(dev)

     As shown above, when async probing is allowed but the async entry
     allocation fails or the work limit has been reached, the function
     is executed synchronously instead. This leads to an A-A deadlock,
     because __driver_attach_async_helper() takes the device lock again.

     Because the logic is the same as in commit b232b02bf3c2 ("driver
     core: fix deadlock in __device_attach"), I simplified the
     description.


Reproduce:
It can be reproduced by forcing the condition
(!entry || atomic_read(&entry_count) > MAX_WORK) to be true, as below:

[  370.785650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  370.787154] task:swapper/0       state:D stack:    0 pid:    1 ppid:     0 flags:0x00004000
[  370.788865] Call Trace:
[  370.789374]  <TASK>
[  370.789841]  __schedule+0x482/0x1050
[  370.790613]  schedule+0x92/0x1a0
[  370.791290]  schedule_preempt_disabled+0x2c/0x50
[  370.792256]  __mutex_lock.isra.0+0x757/0xec0
[  370.793158]  __mutex_lock_slowpath+0x1f/0x30
[  370.794079]  mutex_lock+0x50/0x60
[  370.794795]  __device_driver_lock+0x2f/0x70
[  370.795677]  ? driver_probe_device+0xd0/0xd0
[  370.796576]  __driver_attach_async_helper+0x1d/0xd0
[  370.797318]  ? driver_probe_device+0xd0/0xd0
[  370.797957]  async_schedule_node_domain+0xa5/0xc0
[  370.798652]  async_schedule_node+0x19/0x30
[  370.799243]  __driver_attach+0x246/0x290
[  370.799828]  ? driver_allows_async_probing+0xa0/0xa0
[  370.800548]  bus_for_each_dev+0x9d/0x130
[  370.801132]  driver_attach+0x22/0x30
[  370.801666]  bus_add_driver+0x290/0x340
[  370.802246]  driver_register+0x88/0x140
[  370.802817]  ? virtio_scsi_init+0x116/0x116
[  370.803425]  scsi_register_driver+0x1a/0x30
[  370.804057]  init_sd+0x184/0x226
[  370.804533]  do_one_initcall+0x71/0x3a0
[  370.805107]  kernel_init_freeable+0x39a/0x43a
[  370.805759]  ? rest_init+0x150/0x150
[  370.806283]  kernel_init+0x26/0x230
[  370.806799]  ret_from_fork+0x1f/0x30

And my change can fix it.

thanks.

Wensheng.

On 2022/6/10 21:49, Greg KH wrote:
> On Wed, Jun 08, 2022 at 05:43:55PM +0800, Zhang Wensheng wrote:
>> In the __driver_attach() function there is the same potential AA deadlock
>> problem as the one fixed by commit b232b02bf3c2 ("driver core: fix deadlock
>> in __device_attach").
> Potential, but real?
>
> And the codepaths for drivers being added are much different from those
> for devices, so please provide the full information like you did in the
> other commit.
>
> Also, have you triggered this problem successfully and proven that this
> change fixes the issue?
>
> thanks,
>
> greg k-h


* Re: [PATCH -next] driver core: fix deadlock in __driver_attach
  2022-06-16  8:00   ` zhangwensheng (E)
@ 2022-06-21 19:34     ` Greg KH
  0 siblings, 0 replies; 5+ messages in thread
From: Greg KH @ 2022-06-21 19:34 UTC (permalink / raw)
  To: zhangwensheng (E); +Cc: rafael, linux-kernel, yukuai3


A: http://en.wikipedia.org/wiki/Top_post
Q: Were do I find info about this thing called top-posting?
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

A: No.
Q: Should I include quotations after my reply?


http://daringfireball.net/2007/07/on_top

On Thu, Jun 16, 2022 at 04:00:58PM +0800, zhangwensheng (E) wrote:
> Sorry, I did not see your reply earlier.
> The deadlock is real, not just potential: I have triggered it
> successfully and verified that this change fixes it.
> 
> The call stack is the same as in commit b232b02bf3c2 ("driver core: fix
> deadlock in __device_attach"); details below:
>     In the __driver_attach() function, the lock-holding logic is as follows:
>     ...
>     __driver_attach
>     if (driver_allows_async_probing(drv))
>       device_lock(dev)      // take the device lock
>         async_schedule_dev(__driver_attach_async_helper, dev); // func
>           async_schedule_node
>             async_schedule_node_domain(func)
>               entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
>               /* when the allocation fails or the work limit is hit,
>                  func is executed synchronously, but
>                  __driver_attach_async_helper takes the device lock as
>                  well, which leads to an A-A deadlock.  */
>               if (!entry || atomic_read(&entry_count) > MAX_WORK)
>                 func;
>               else
>                 queue_work_node(node, system_unbound_wq, &entry->work)
>       device_unlock(dev)
> 
>     As shown above, when async probing is allowed but the async entry
>     allocation fails or the work limit has been reached, the function
>     is executed synchronously instead. This leads to an A-A deadlock,
>     because __driver_attach_async_helper() takes the device lock again.
> 
>     Because the logic is the same as in commit b232b02bf3c2 ("driver
>     core: fix deadlock in __device_attach"), I simplified the
>     description.
> 
> 
> Reproduce:
> It can be reproduced by forcing the condition
> (!entry || atomic_read(&entry_count) > MAX_WORK) to be true, as below:
> 
> [  370.785650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  370.787154] task:swapper/0       state:D stack:    0 pid:    1 ppid:     0 flags:0x00004000
> [  370.788865] Call Trace:
> [  370.789374]  <TASK>
> [  370.789841]  __schedule+0x482/0x1050
> [  370.790613]  schedule+0x92/0x1a0
> [  370.791290]  schedule_preempt_disabled+0x2c/0x50
> [  370.792256]  __mutex_lock.isra.0+0x757/0xec0
> [  370.793158]  __mutex_lock_slowpath+0x1f/0x30
> [  370.794079]  mutex_lock+0x50/0x60
> [  370.794795]  __device_driver_lock+0x2f/0x70
> [  370.795677]  ? driver_probe_device+0xd0/0xd0
> [  370.796576]  __driver_attach_async_helper+0x1d/0xd0
> [  370.797318]  ? driver_probe_device+0xd0/0xd0
> [  370.797957]  async_schedule_node_domain+0xa5/0xc0
> [  370.798652]  async_schedule_node+0x19/0x30
> [  370.799243]  __driver_attach+0x246/0x290
> [  370.799828]  ? driver_allows_async_probing+0xa0/0xa0
> [  370.800548]  bus_for_each_dev+0x9d/0x130
> [  370.801132]  driver_attach+0x22/0x30
> [  370.801666]  bus_add_driver+0x290/0x340
> [  370.802246]  driver_register+0x88/0x140
> [  370.802817]  ? virtio_scsi_init+0x116/0x116
> [  370.803425]  scsi_register_driver+0x1a/0x30
> [  370.804057]  init_sd+0x184/0x226
> [  370.804533]  do_one_initcall+0x71/0x3a0
> [  370.805107]  kernel_init_freeable+0x39a/0x43a
> [  370.805759]  ? rest_init+0x150/0x150
> [  370.806283]  kernel_init+0x26/0x230
> [  370.806799]  ret_from_fork+0x1f/0x30
> 
> And my change can fix it.

Ok, please put that type of information in the changelog text.

thanks,

greg k-h


end of thread, other threads:[~2022-06-21 19:34 UTC | newest]

Thread overview: 5+ messages
2022-06-08  9:43 [PATCH -next] driver core: fix deadlock in __driver_attach Zhang Wensheng
2022-06-10 13:49 ` Greg KH
2022-06-16  8:00   ` zhangwensheng (E)
2022-06-21 19:34     ` Greg KH
2022-06-16  7:11 ` zhangwensheng (E)
