linux-kernel.vger.kernel.org archive mirror
* [PATCH] watchdog: core: Fix circular locking dependency
@ 2016-04-21 14:38 Guenter Roeck
  2016-04-21 14:50 ` One Thousand Gnomes
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Guenter Roeck @ 2016-04-21 14:38 UTC (permalink / raw)
  To: Wim Van Sebroeck
  Cc: Clemens Gruber, linux-watchdog, linux-kernel, Guenter Roeck

lockdep reports the following circular locking dependency.

======================================================
[ INFO: possible circular locking dependency detected ]
4.6.0-rc3-00191-gfabf418 #162 Not tainted
-------------------------------------------------------
systemd/1 is trying to acquire lock:
((&(&wd_data->work)->work)){+.+...}, at: [<80141650>] flush_work+0x0/0x280

but task is already holding lock:

(&wd_data->lock){+.+...}, at: [<804acfa8>] watchdog_release+0x18/0x190

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&wd_data->lock){+.+...}:
	[<80662310>] mutex_lock_nested+0x64/0x4a8
	[<804aca4c>] watchdog_ping_work+0x18/0x4c
	[<80143128>] process_one_work+0x1ac/0x500
	[<801434b4>] worker_thread+0x38/0x554
	[<80149510>] kthread+0xf4/0x108
	[<80107c10>] ret_from_fork+0x14/0x24

-> #0 ((&(&wd_data->work)->work)){+.+...}:
	[<8017c4e8>] lock_acquire+0x70/0x90
	[<8014169c>] flush_work+0x4c/0x280
	[<801440f8>] __cancel_work_timer+0x9c/0x1e0
	[<804acfcc>] watchdog_release+0x3c/0x190
	[<8022c5e8>] __fput+0x80/0x1c8
	[<80147b28>] task_work_run+0x94/0xc8
	[<8010b998>] do_work_pending+0x8c/0xb4
	[<80107ba8>] slow_work_pending+0xc/0x20

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0                    CPU1
----                    ----
lock(&wd_data->lock);
                        lock((&(&wd_data->work)->work));
                        lock(&wd_data->lock);
lock((&(&wd_data->work)->work));

*** DEADLOCK ***

1 lock held by systemd/1:

stack backtrace:
CPU: 2 PID: 1 Comm: systemd Not tainted 4.6.0-rc3-00191-gfabf418 #162
Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
[<8010f5e4>] (unwind_backtrace) from [<8010c038>] (show_stack+0x10/0x14)
[<8010c038>] (show_stack) from [<8039d7fc>] (dump_stack+0xa8/0xd4)
[<8039d7fc>] (dump_stack) from [<80177ee0>] (print_circular_bug+0x214/0x334)
[<80177ee0>] (print_circular_bug) from [<80179230>] (check_prevs_add+0x4dc/0x8e8)
[<80179230>] (check_prevs_add) from [<8017b3d8>] (__lock_acquire+0xc6c/0x14ec)
[<8017b3d8>] (__lock_acquire) from [<8017c4e8>] (lock_acquire+0x70/0x90)
[<8017c4e8>] (lock_acquire) from [<8014169c>] (flush_work+0x4c/0x280)
[<8014169c>] (flush_work) from [<801440f8>] (__cancel_work_timer+0x9c/0x1e0)
[<801440f8>] (__cancel_work_timer) from [<804acfcc>] (watchdog_release+0x3c/0x190)
[<804acfcc>] (watchdog_release) from [<8022c5e8>] (__fput+0x80/0x1c8)
[<8022c5e8>] (__fput) from [<80147b28>] (task_work_run+0x94/0xc8)
[<80147b28>] (task_work_run) from [<8010b998>] (do_work_pending+0x8c/0xb4)
[<8010b998>] (do_work_pending) from [<80107ba8>] (slow_work_pending+0xc/0x20)

Turns out the call to cancel_delayed_work_sync() in watchdog_release()
is not necessary and can be dropped. If the worker is no longer necessary,
the subsequent call to watchdog_update_worker() will cancel it. If it is
already running, it won't do anything, since the worker function checks
if it needs to ping the watchdog or not.
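
For illustration, the pattern lockdep objects to boils down to the following
self-contained toy module (made-up names, not the watchdog_dev.c code): the
work item takes a mutex, while a release-style path holds the same mutex and
waits for the work with cancel_delayed_work_sync(). A non-blocking
cancel_delayed_work() under the mutex does not add the inverse dependency,
which is presumably why the remaining watchdog_update_worker() call does not
trigger the same report.

/*
 * Hypothetical toy module (made-up names, not watchdog_dev.c) reproducing
 * the dependency cycle lockdep reports above.
 */
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(demo_lock);
static struct delayed_work demo_work;

static void demo_ping_work(struct work_struct *work)
{
	/* Dependency #1: the work item takes demo_lock. */
	mutex_lock(&demo_lock);
	/* ... ping the hardware here ... */
	mutex_unlock(&demo_lock);
}

/* The pattern removed by this patch: wait for the work while holding the lock. */
static void demo_release_prone(void)
{
	mutex_lock(&demo_lock);
	/*
	 * Dependency #0: demo_lock -> work.  If demo_ping_work() were already
	 * running and blocked on demo_lock, this wait could never complete.
	 */
	cancel_delayed_work_sync(&demo_work);
	mutex_unlock(&demo_lock);
}

/* A non-blocking cancel under the lock does not add the inverse dependency. */
static void demo_release_safe(void)
{
	mutex_lock(&demo_lock);
	cancel_delayed_work(&demo_work);
	mutex_unlock(&demo_lock);
}

static int __init demo_init(void)
{
	INIT_DELAYED_WORK(&demo_work, demo_ping_work);
	schedule_delayed_work(&demo_work, 0);
	msleep(100);		/* let the work run once so lockdep records #1 */
	demo_release_prone();	/* with lockdep enabled, this should produce a similar splat */
	demo_release_safe();
	return 0;
}

static void __exit demo_exit(void)
{
	/* No lock held here, mirroring the module unload path: safe. */
	cancel_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Dropping the synchronous cancel from the path that holds the mutex removes
dependency #0, which is what this patch does for watchdog_release().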

Reported-by: Clemens Gruber <clemens.gruber@pqgruber.com>
Tested-by: Clemens Gruber <clemens.gruber@pqgruber.com>
Fixes: 11d7aba9ceb7 ("watchdog: imx2: Convert to use infrastructure triggered keepalives")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
---
 drivers/watchdog/watchdog_dev.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
index e2c5abbb45ff..3595cffa24ea 100644
--- a/drivers/watchdog/watchdog_dev.c
+++ b/drivers/watchdog/watchdog_dev.c
@@ -736,7 +736,6 @@ static int watchdog_release(struct inode *inode, struct file *file)
 		watchdog_ping(wdd);
 	}
 
-	cancel_delayed_work_sync(&wd_data->work);
 	watchdog_update_worker(wdd);
 
 	/* make sure that /dev/watchdog can be re-opened */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-04-21 14:38 [PATCH] watchdog: core: Fix circular locking dependency Guenter Roeck
@ 2016-04-21 14:50 ` One Thousand Gnomes
  2016-04-21 23:30   ` Guenter Roeck
  2016-05-09 13:53 ` Clemens Gruber
  2016-05-14 16:41 ` Wim Van Sebroeck
  2 siblings, 1 reply; 9+ messages in thread
From: One Thousand Gnomes @ 2016-04-21 14:50 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Wim Van Sebroeck, Clemens Gruber, linux-watchdog, linux-kernel

> Turns out the call to cancel_delayed_work_sync() in watchdog_release()
> is not necessary and can be dropped. If the worker is no longer necessary,
> the subsequent call to watchdog_update_worker() will cancel it. If it is
> already running, it won't do anything, since the worker function checks
> if it needs to ping the watchdog or not.

Is this actually true? Consider the pathological case of the device being
closed and the module unloaded. In that case the close completes, we drop
the module count, but could still do work on it.

Alan

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-04-21 14:50 ` One Thousand Gnomes
@ 2016-04-21 23:30   ` Guenter Roeck
  0 siblings, 0 replies; 9+ messages in thread
From: Guenter Roeck @ 2016-04-21 23:30 UTC (permalink / raw)
  To: One Thousand Gnomes
  Cc: Wim Van Sebroeck, Clemens Gruber, linux-watchdog, linux-kernel

On Thu, Apr 21, 2016 at 03:50:55PM +0100, One Thousand Gnomes wrote:
> > Turns out the call to cancel_delayed_work_sync() in watchdog_release()
> > is not necessary and can be dropped. If the worker is no longer necessary,
> > the subsequent call to watchdog_update_worker() will cancel it. If it is
> > already running, it won't do anything, since the worker function checks
> > if it needs to ping the watchdog or not.
> 
> Is this actually true? Consider the pathological case of the device being
> closed and the module unloaded. In that case the close completes, we drop
> the module count, but could still do work on it.
> 
Module unload is handled separately and also calls cancel_delayed_work_sync().
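Roughly, the teardown ordering looks like this (a paraphrased sketch with
made-up names, not the actual watchdog_dev.c unregister code):

#include <linux/cdev.h>
#include <linux/workqueue.h>

/* Hypothetical sketch of a teardown path, for illustration only. */
struct example_wd_data {
	struct cdev cdev;
	struct delayed_work work;
};

static void example_wd_teardown(struct example_wd_data *d)
{
	/* The device node goes away first, so no new opens can race in... */
	cdev_del(&d->cdev);
	/* ...then the worker is stopped, with no per-device mutex held. */
	cancel_delayed_work_sync(&d->work);
}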

Guenter

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-04-21 14:38 [PATCH] watchdog: core: Fix circular locking dependency Guenter Roeck
  2016-04-21 14:50 ` One Thousand Gnomes
@ 2016-05-09 13:53 ` Clemens Gruber
  2016-05-09 14:18   ` Guenter Roeck
  2016-05-14 16:41 ` Wim Van Sebroeck
  2 siblings, 1 reply; 9+ messages in thread
From: Clemens Gruber @ 2016-05-09 13:53 UTC (permalink / raw)
  To: Guenter Roeck, Wim Van Sebroeck; +Cc: linux-watchdog, linux-kernel

On Thu, Apr 21, 2016 at 07:38:14AM -0700, Guenter Roeck wrote:
> lockdep reports the following circular locking dependency.
> 
> ======================================================
> INFO: possible circular locking dependency detected ]
> 4.6.0-rc3-00191-gfabf418 #162 Not tainted
> -------------------------------------------------------
> systemd/1 is trying to acquire lock:
> ((&(&wd_data->work)->work)){+.+...}, at: [<80141650>] flush_work+0x0/0x280
> 
> but task is already holding lock:
> 
> (&wd_data->lock){+.+...}, at: [<804acfa8>] watchdog_release+0x18/0x190
> 
> which lock already depends on the new lock.
> the existing dependency chain (in reverse order) is:
> 
> -> #1 (&wd_data->lock){+.+...}:
> 	[<80662310>] mutex_lock_nested+0x64/0x4a8
> 	[<804aca4c>] watchdog_ping_work+0x18/0x4c
> 	[<80143128>] process_one_work+0x1ac/0x500
> 	[<801434b4>] worker_thread+0x38/0x554
> 	[<80149510>] kthread+0xf4/0x108
> 	[<80107c10>] ret_from_fork+0x14/0x24
> 
> -> #0 ((&(&wd_data->work)->work)){+.+...}:
> 	[<8017c4e8>] lock_acquire+0x70/0x90
> 	[<8014169c>] flush_work+0x4c/0x280
> 	[<801440f8>] __cancel_work_timer+0x9c/0x1e0
> 	[<804acfcc>] watchdog_release+0x3c/0x190
> 	[<8022c5e8>] __fput+0x80/0x1c8
> 	[<80147b28>] task_work_run+0x94/0xc8
> 	[<8010b998>] do_work_pending+0x8c/0xb4
> 	[<80107ba8>] slow_work_pending+0xc/0x20
> 
> other info that might help us debug this:
> Possible unsafe locking scenario:
> 
> CPU0                    CPU1
> ----                    ----
> lock(&wd_data->lock);
>                         lock((&(&wd_data->work)->work));
>                         lock(&wd_data->lock);
> lock((&(&wd_data->work)->work));
> 
> *** DEADLOCK ***
> 
> 1 lock held by systemd/1:
> 
> stack backtrace:
> CPU: 2 PID: 1 Comm: systemd Not tainted 4.6.0-rc3-00191-gfabf418 #162
> Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
> [<8010f5e4>] (unwind_backtrace) from [<8010c038>] (show_stack+0x10/0x14)
> [<8010c038>] (show_stack) from [<8039d7fc>] (dump_stack+0xa8/0xd4)
> [<8039d7fc>] (dump_stack) from [<80177ee0>] (print_circular_bug+0x214/0x334)
> [<80177ee0>] (print_circular_bug) from [<80179230>] (check_prevs_add+0x4dc/0x8e8)
> [<80179230>] (check_prevs_add) from [<8017b3d8>] (__lock_acquire+0xc6c/0x14ec)
> [<8017b3d8>] (__lock_acquire) from [<8017c4e8>] (lock_acquire+0x70/0x90)
> [<8017c4e8>] (lock_acquire) from [<8014169c>] (flush_work+0x4c/0x280)
> [<8014169c>] (flush_work) from [<801440f8>] (__cancel_work_timer+0x9c/0x1e0)
> [<801440f8>] (__cancel_work_timer) from [<804acfcc>] (watchdog_release+0x3c/0x190)
> [<804acfcc>] (watchdog_release) from [<8022c5e8>] (__fput+0x80/0x1c8)
> [<8022c5e8>] (__fput) from [<80147b28>] (task_work_run+0x94/0xc8)
> [<80147b28>] (task_work_run) from [<8010b998>] (do_work_pending+0x8c/0xb4)
> [<8010b998>] (do_work_pending) from [<80107ba8>] (slow_work_pending+0xc/0x20)
> 
> Turns out the call to cancel_delayed_work_sync() in watchdog_release()
> is not necessary and can be dropped. If the worker is no longer necessary,
> the subsequent call to watchdog_update_worker() will cancel it. If it is
> already running, it won't do anything, since the worker function checks
> if it needs to ping the watchdog or not.
> 
> Reported-by: Clemens Gruber <clemens.gruber@pqgruber.com>
> Tested-by: Clemens Gruber <clemens.gruber@pqgruber.com>
> Fixes: 11d7aba9ceb7 ("watchdog: imx2: Convert to use infrastructure triggered keepalives")
> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
> ---
>  drivers/watchdog/watchdog_dev.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
> index e2c5abbb45ff..3595cffa24ea 100644
> --- a/drivers/watchdog/watchdog_dev.c
> +++ b/drivers/watchdog/watchdog_dev.c
> @@ -736,7 +736,6 @@ static int watchdog_release(struct inode *inode, struct file *file)
>  		watchdog_ping(wdd);
>  	}
>  
> -	cancel_delayed_work_sync(&wd_data->work);
>  	watchdog_update_worker(wdd);
>  
>  	/* make sure that /dev/watchdog can be re-opened */
> -- 
> 2.5.0
>

Hi,

I don't see this patch in the torvalds/linux tree.

Will this get in before the 4.6 release?

Thanks,
Clemens

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-05-09 13:53 ` Clemens Gruber
@ 2016-05-09 14:18   ` Guenter Roeck
  0 siblings, 0 replies; 9+ messages in thread
From: Guenter Roeck @ 2016-05-09 14:18 UTC (permalink / raw)
  To: Clemens Gruber, Wim Van Sebroeck; +Cc: linux-watchdog, linux-kernel

On 05/09/2016 06:53 AM, Clemens Gruber wrote:
> On Thu, Apr 21, 2016 at 07:38:14AM -0700, Guenter Roeck wrote:
>> lockdep reports the following circular locking dependency.
>>
>> ======================================================
>> INFO: possible circular locking dependency detected ]
>> 4.6.0-rc3-00191-gfabf418 #162 Not tainted
>> -------------------------------------------------------
>> systemd/1 is trying to acquire lock:
>> ((&(&wd_data->work)->work)){+.+...}, at: [<80141650>] flush_work+0x0/0x280
>>
>> but task is already holding lock:
>>
>> (&wd_data->lock){+.+...}, at: [<804acfa8>] watchdog_release+0x18/0x190
>>
>> which lock already depends on the new lock.
>> the existing dependency chain (in reverse order) is:
>>
>> -> #1 (&wd_data->lock){+.+...}:
>> 	[<80662310>] mutex_lock_nested+0x64/0x4a8
>> 	[<804aca4c>] watchdog_ping_work+0x18/0x4c
>> 	[<80143128>] process_one_work+0x1ac/0x500
>> 	[<801434b4>] worker_thread+0x38/0x554
>> 	[<80149510>] kthread+0xf4/0x108
>> 	[<80107c10>] ret_from_fork+0x14/0x24
>>
>> -> #0 ((&(&wd_data->work)->work)){+.+...}:
>> 	[<8017c4e8>] lock_acquire+0x70/0x90
>> 	[<8014169c>] flush_work+0x4c/0x280
>> 	[<801440f8>] __cancel_work_timer+0x9c/0x1e0
>> 	[<804acfcc>] watchdog_release+0x3c/0x190
>> 	[<8022c5e8>] __fput+0x80/0x1c8
>> 	[<80147b28>] task_work_run+0x94/0xc8
>> 	[<8010b998>] do_work_pending+0x8c/0xb4
>> 	[<80107ba8>] slow_work_pending+0xc/0x20
>>
>> other info that might help us debug this:
>> Possible unsafe locking scenario:
>>
>> CPU0                    CPU1
>> ----                    ----
>> lock(&wd_data->lock);
>>                          lock((&(&wd_data->work)->work));
>>                          lock(&wd_data->lock);
>> lock((&(&wd_data->work)->work));
>>
>> *** DEADLOCK ***
>>
>> 1 lock held by systemd/1:
>>
>> stack backtrace:
>> CPU: 2 PID: 1 Comm: systemd Not tainted 4.6.0-rc3-00191-gfabf418 #162
>> Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
>> [<8010f5e4>] (unwind_backtrace) from [<8010c038>] (show_stack+0x10/0x14)
>> [<8010c038>] (show_stack) from [<8039d7fc>] (dump_stack+0xa8/0xd4)
>> [<8039d7fc>] (dump_stack) from [<80177ee0>] (print_circular_bug+0x214/0x334)
>> [<80177ee0>] (print_circular_bug) from [<80179230>] (check_prevs_add+0x4dc/0x8e8)
>> [<80179230>] (check_prevs_add) from [<8017b3d8>] (__lock_acquire+0xc6c/0x14ec)
>> [<8017b3d8>] (__lock_acquire) from [<8017c4e8>] (lock_acquire+0x70/0x90)
>> [<8017c4e8>] (lock_acquire) from [<8014169c>] (flush_work+0x4c/0x280)
>> [<8014169c>] (flush_work) from [<801440f8>] (__cancel_work_timer+0x9c/0x1e0)
>> [<801440f8>] (__cancel_work_timer) from [<804acfcc>] (watchdog_release+0x3c/0x190)
>> [<804acfcc>] (watchdog_release) from [<8022c5e8>] (__fput+0x80/0x1c8)
>> [<8022c5e8>] (__fput) from [<80147b28>] (task_work_run+0x94/0xc8)
>> [<80147b28>] (task_work_run) from [<8010b998>] (do_work_pending+0x8c/0xb4)
>> [<8010b998>] (do_work_pending) from [<80107ba8>] (slow_work_pending+0xc/0x20)
>>
>> Turns out the call to cancel_delayed_work_sync() in watchdog_release()
>> is not necessary and can be dropped. If the worker is no longer necessary,
>> the subsequent call to watchdog_update_worker() will cancel it. If it is
>> already running, it won't do anything, since the worker function checks
>> if it needs to ping the watchdog or not.
>>
>> Reported-by: Clemens Gruber <clemens.gruber@pqgruber.com>
>> Tested-by: Clemens Gruber <clemens.gruber@pqgruber.com>
>> Fixes: 11d7aba9ceb7 ("watchdog: imx2: Convert to use infrastructure triggered keepalives")
>> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
>> ---
>>   drivers/watchdog/watchdog_dev.c | 1 -
>>   1 file changed, 1 deletion(-)
>>
>> diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
>> index e2c5abbb45ff..3595cffa24ea 100644
>> --- a/drivers/watchdog/watchdog_dev.c
>> +++ b/drivers/watchdog/watchdog_dev.c
>> @@ -736,7 +736,6 @@ static int watchdog_release(struct inode *inode, struct file *file)
>>   		watchdog_ping(wdd);
>>   	}
>>
>> -	cancel_delayed_work_sync(&wd_data->work);
>>   	watchdog_update_worker(wdd);
>>
>>   	/* make sure that /dev/watchdog can be re-opened */
>> --
>> 2.5.0
>>
>
> Hi,
>
> I don't see this patch in the torvalds/linux tree.
>
> Will this get in before the 4.6 release?
>
Hopefully. If not, I assume Wim will pick it up in the next merge window and
it will be applied to -stable.

Guenter

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-04-21 14:38 [PATCH] watchdog: core: Fix circular locking dependency Guenter Roeck
  2016-04-21 14:50 ` One Thousand Gnomes
  2016-05-09 13:53 ` Clemens Gruber
@ 2016-05-14 16:41 ` Wim Van Sebroeck
  2016-05-14 17:07   ` Guenter Roeck
  2 siblings, 1 reply; 9+ messages in thread
From: Wim Van Sebroeck @ 2016-05-14 16:41 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: Clemens Gruber, linux-watchdog, linux-kernel

Hi Guenter,

> lockdep reports the following circular locking dependency.
> 
> ======================================================
> INFO: possible circular locking dependency detected ]
> 4.6.0-rc3-00191-gfabf418 #162 Not tainted
> -------------------------------------------------------
> systemd/1 is trying to acquire lock:
> ((&(&wd_data->work)->work)){+.+...}, at: [<80141650>] flush_work+0x0/0x280
> 
> but task is already holding lock:
> 
> (&wd_data->lock){+.+...}, at: [<804acfa8>] watchdog_release+0x18/0x190
> 
> which lock already depends on the new lock.
> the existing dependency chain (in reverse order) is:
> 
> -> #1 (&wd_data->lock){+.+...}:
> 	[<80662310>] mutex_lock_nested+0x64/0x4a8
> 	[<804aca4c>] watchdog_ping_work+0x18/0x4c
> 	[<80143128>] process_one_work+0x1ac/0x500
> 	[<801434b4>] worker_thread+0x38/0x554
> 	[<80149510>] kthread+0xf4/0x108
> 	[<80107c10>] ret_from_fork+0x14/0x24
> 
> -> #0 ((&(&wd_data->work)->work)){+.+...}:
> 	[<8017c4e8>] lock_acquire+0x70/0x90
> 	[<8014169c>] flush_work+0x4c/0x280
> 	[<801440f8>] __cancel_work_timer+0x9c/0x1e0
> 	[<804acfcc>] watchdog_release+0x3c/0x190
> 	[<8022c5e8>] __fput+0x80/0x1c8
> 	[<80147b28>] task_work_run+0x94/0xc8
> 	[<8010b998>] do_work_pending+0x8c/0xb4
> 	[<80107ba8>] slow_work_pending+0xc/0x20
> 
> other info that might help us debug this:
> Possible unsafe locking scenario:
> 
> CPU0                    CPU1
> ----                    ----
> lock(&wd_data->lock);
>                         lock((&(&wd_data->work)->work));
>                         lock(&wd_data->lock);
> lock((&(&wd_data->work)->work));
> 
> *** DEADLOCK ***
> 
> 1 lock held by systemd/1:
> 
> stack backtrace:
> CPU: 2 PID: 1 Comm: systemd Not tainted 4.6.0-rc3-00191-gfabf418 #162
> Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
> [<8010f5e4>] (unwind_backtrace) from [<8010c038>] (show_stack+0x10/0x14)
> [<8010c038>] (show_stack) from [<8039d7fc>] (dump_stack+0xa8/0xd4)
> [<8039d7fc>] (dump_stack) from [<80177ee0>] (print_circular_bug+0x214/0x334)
> [<80177ee0>] (print_circular_bug) from [<80179230>] (check_prevs_add+0x4dc/0x8e8)
> [<80179230>] (check_prevs_add) from [<8017b3d8>] (__lock_acquire+0xc6c/0x14ec)
> [<8017b3d8>] (__lock_acquire) from [<8017c4e8>] (lock_acquire+0x70/0x90)
> [<8017c4e8>] (lock_acquire) from [<8014169c>] (flush_work+0x4c/0x280)
> [<8014169c>] (flush_work) from [<801440f8>] (__cancel_work_timer+0x9c/0x1e0)
> [<801440f8>] (__cancel_work_timer) from [<804acfcc>] (watchdog_release+0x3c/0x190)
> [<804acfcc>] (watchdog_release) from [<8022c5e8>] (__fput+0x80/0x1c8)
> [<8022c5e8>] (__fput) from [<80147b28>] (task_work_run+0x94/0xc8)
> [<80147b28>] (task_work_run) from [<8010b998>] (do_work_pending+0x8c/0xb4)
> [<8010b998>] (do_work_pending) from [<80107ba8>] (slow_work_pending+0xc/0x20)
> 
> Turns out the call to cancel_delayed_work_sync() in watchdog_release()
> is not necessary and can be dropped. If the worker is no longer necessary,
> the subsequent call to watchdog_update_worker() will cancel it. If it is
> already running, it won't do anything, since the worker function checks
> if it needs to ping the watchdog or not.
> 
> Reported-by: Clemens Gruber <clemens.gruber@pqgruber.com>
> Tested-by: Clemens Gruber <clemens.gruber@pqgruber.com>
> Fixes: 11d7aba9ceb7 ("watchdog: imx2: Convert to use infrastructure triggered keepalives")
> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
> ---
>  drivers/watchdog/watchdog_dev.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
> index e2c5abbb45ff..3595cffa24ea 100644
> --- a/drivers/watchdog/watchdog_dev.c
> +++ b/drivers/watchdog/watchdog_dev.c
> @@ -736,7 +736,6 @@ static int watchdog_release(struct inode *inode, struct file *file)
>  		watchdog_ping(wdd);
>  	}
>  
> -	cancel_delayed_work_sync(&wd_data->work);
>  	watchdog_update_worker(wdd);
>  
>  	/* make sure that /dev/watchdog can be re-opened */
> -- 
> 2.5.0
> 

This patch has been added to linux-watchdog-next.

Kind regards,
Wim.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-05-14 16:41 ` Wim Van Sebroeck
@ 2016-05-14 17:07   ` Guenter Roeck
  2016-05-14 17:24     ` Guenter Roeck
  0 siblings, 1 reply; 9+ messages in thread
From: Guenter Roeck @ 2016-05-14 17:07 UTC (permalink / raw)
  To: Wim Van Sebroeck; +Cc: Clemens Gruber, linux-watchdog, linux-kernel

Hi Wim,

On 05/14/2016 09:41 AM, Wim Van Sebroeck wrote:
> Hi Guenter,
>
>> lockdep reports the following circular locking dependency.
>>

You are faster than me this time. I was just about to send you a pull request.
Sorry for being late. The watchdog-next branch in my repository on kernel.org
has all patches in my queue, in case you want to have a look.

I'll rebase my branch to yours and see if anything is missing, then send
you a pull request on top of it if needed.

>
> This patch has been added to linux-watchdog-next.
>

This patch should probably be tagged for v4.6.

Thanks,
Guenter

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-05-14 17:07   ` Guenter Roeck
@ 2016-05-14 17:24     ` Guenter Roeck
  2016-05-14 18:29       ` Wim Van Sebroeck
  0 siblings, 1 reply; 9+ messages in thread
From: Guenter Roeck @ 2016-05-14 17:24 UTC (permalink / raw)
  To: Wim Van Sebroeck; +Cc: Clemens Gruber, linux-watchdog, linux-kernel

On 05/14/2016 10:07 AM, Guenter Roeck wrote:
> Hi Wim,
>
> On 05/14/2016 09:41 AM, Wim Van Sebroeck wrote:
>> Hi Guenter,
>>
>>> lockdep reports the following circular locking dependency.
>>>
>
> You are faster than me this time. I was just about to send you a pull request.
> Sorry for being late. The watchdog-next branch in my repository on kernel.org
> has all patches in my queue, in case you want to have a look.
>
> I'll rebase my branch to yours and see if anything is missing, then send
> you a pull request on top of it if needed.
>

All there. Looks like you picked it up right after I updated the branch
this morning.

>>
>> This patch has been added to linux-watchdog-next.
>>
>
> This patch should probably be tagged for v4.6.
>
Turns out it has a Fixes: tag, so an additional tag should not be necessary.

Thanks,
Guenter

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] watchdog: core: Fix circular locking dependency
  2016-05-14 17:24     ` Guenter Roeck
@ 2016-05-14 18:29       ` Wim Van Sebroeck
  0 siblings, 0 replies; 9+ messages in thread
From: Wim Van Sebroeck @ 2016-05-14 18:29 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: Clemens Gruber, linux-watchdog, linux-kernel

Hi Guenter,

> On 05/14/2016 10:07 AM, Guenter Roeck wrote:
> >Hi Wim,
> >
> >On 05/14/2016 09:41 AM, Wim Van Sebroeck wrote:
> >>Hi Guenter,
> >>
> >>>lockdep reports the following circular locking dependency.
> >>>
> >
> >You are faster than me this time. I was just about to send you a pull 
> >request.
> >Sorry for being late. The watchdog-next branch in my repository on 
> >kernel.org
> >has all patches in my queue, in case you want to have a look.
> >
> >I'll rebase my branch to yours and see if anything is missing, then send
> >you a pull request on top of it if needed.
> >
> 
> All there. Looks like you picked it up right after I updated the branch
> this morning.

Correct ;-)

Kind regards,
Wim.

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2016-05-14 18:29 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-21 14:38 [PATCH] watchdog: core: Fix circular locking dependency Guenter Roeck
2016-04-21 14:50 ` One Thousand Gnomes
2016-04-21 23:30   ` Guenter Roeck
2016-05-09 13:53 ` Clemens Gruber
2016-05-09 14:18   ` Guenter Roeck
2016-05-14 16:41 ` Wim Van Sebroeck
2016-05-14 17:07   ` Guenter Roeck
2016-05-14 17:24     ` Guenter Roeck
2016-05-14 18:29       ` Wim Van Sebroeck
