* [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
@ 2018-09-18 13:21 Steffen Trumtrar
2018-09-18 13:46 ` Guenter Roeck
2018-09-18 18:14 ` Julia Cartwright
0 siblings, 2 replies; 27+ messages in thread
From: Steffen Trumtrar @ 2018-09-18 13:21 UTC (permalink / raw)
To: linux-watchdog
Cc: Wim Van Sebroeck, Guenter Roeck, Christophe Leroy, linux-rt-users
Hi all!
I'm seeing an issue with the dw_wdt watchdog on the SoCFPGA ARM
platform with the latest linux-rt v4.18.5-rt4. I seem to have the
same problem that these patches try to fix:
38a1222ae4f364d5bd5221fe305dbb0889f45d15
Author: Christophe Leroy <christophe.leroy@c-s.fr>
AuthorDate: Fri Dec 8 11:18:35 2017 +0100
Commit: Wim Van Sebroeck <wim@iguana.be>
CommitDate: Thu Dec 28 20:45:57 2017 +0100
Follows: v4.15-rc3 (345)
Precedes: v4.16-rc1 (13997)
watchdog: core: make sure the watchdog worker always works
When running a command like 'chrt -f 50 dd if=/dev/zero of=/dev/null',
the watchdog_worker fails to service the HW watchdog and the
HW watchdog fires long before the watchdog soft timeout.
At the moment, the watchdog_worker is invoked as a delayed work.
Delayed works are handled by non realtime kernel threads. The
WQ_HIGHPRI flag only increases the niceness of that threads.
This patch replaces the delayed work logic by kthread delayed work,
and sets the associated kernel task to SCHED_FIFO with the highest
priority, in order to ensure that the watchdog worker will run as
soon as possible.
1ff688209e2ed23f699269b9733993e2ce123fd2
Author: Christophe Leroy <christophe.leroy@c-s.fr>
AuthorDate: Thu Jan 18 12:11:21 2018 +0100
Commit: Wim Van Sebroeck <wim@iguana.be>
CommitDate: Sun Jan 21 12:44:59 2018 +0100
Follows: v4.15-rc3 (349)
Precedes: v4.16-rc1 (13993)
watchdog: core: make sure the watchdog_worker is not deferred
commit 4cd13c21b207e ("softirq: Let ksoftirqd do its job") has the
effect of deferring timer handling in case of high CPU load, hence
delaying the delayed work allthought the worker is running which
high realtime priority.
As hrtimers are not managed by softirqs, this patch replaces the
delayed work by a plain work and uses an hrtimer to schedule that work.
If I run the same test, or 'chrt 50 hackbench 20 -l 150', or any
task whose priority I change with chrt and that runs long enough,
I get a system reset because the watchdog times out. This
only happens if the watchdog is already enabled on boot and
CONFIG_PREEMPT_RT_FULL is set.
Any idea if I'm missing something essential? If I understand it
correctly, the two commits fix the framework and therefore the
dw_wdt driver doesn't need any updates.
Best regards,
Steffen
--
Pengutronix e.K.                           | Steffen Trumtrar            |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-18 13:21 [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering Steffen Trumtrar
@ 2018-09-18 13:46 ` Guenter Roeck
2018-09-19 6:46 ` Steffen Trumtrar
2018-09-20 8:18 ` [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering Tim Sander
2018-09-18 18:14 ` Julia Cartwright
1 sibling, 2 replies; 27+ messages in thread
From: Guenter Roeck @ 2018-09-18 13:46 UTC (permalink / raw)
To: Steffen Trumtrar, linux-watchdog
Cc: Wim Van Sebroeck, Christophe Leroy, linux-rt-users
On 09/18/2018 06:21 AM, Steffen Trumtrar wrote:
>
> Hi all!
>
> I'm seeing an issue with the dw_wdt watchdog on the SoCFPGA ARM platform with the latest linux-rt v4.18.5-rt4. Actually I seem to have the same problem, that these patches try to fix:
>
> 38a1222ae4f364d5bd5221fe305dbb0889f45d15
> Author: Christophe Leroy <christophe.leroy@c-s.fr>
> AuthorDate: Fri Dec 8 11:18:35 2017 +0100
> Commit: Wim Van Sebroeck <wim@iguana.be>
> CommitDate: Thu Dec 28 20:45:57 2017 +0100
>
> Follows: v4.15-rc3 (345)
> Precedes: v4.16-rc1 (13997)
>
> watchdog: core: make sure the watchdog worker always works
>
> When running a command like 'chrt -f 50 dd if=/dev/zero of=/dev/null',
> the watchdog_worker fails to service the HW watchdog and the
> HW watchdog fires long before the watchdog soft timeout.
>
> At the moment, the watchdog_worker is invoked as a delayed work.
> Delayed works are handled by non realtime kernel threads. The
> WQ_HIGHPRI flag only increases the niceness of that threads.
>
> This patch replaces the delayed work logic by kthread delayed work,
> and sets the associated kernel task to SCHED_FIFO with the highest
> priority, in order to ensure that the watchdog worker will run as
> soon as possible.
>
>
> 1ff688209e2ed23f699269b9733993e2ce123fd2
> Author: Christophe Leroy <christophe.leroy@c-s.fr>
> AuthorDate: Thu Jan 18 12:11:21 2018 +0100
> Commit: Wim Van Sebroeck <wim@iguana.be>
> CommitDate: Sun Jan 21 12:44:59 2018 +0100
>
> Follows: v4.15-rc3 (349)
> Precedes: v4.16-rc1 (13993)
>
> watchdog: core: make sure the watchdog_worker is not deferred
>
> commit 4cd13c21b207e ("softirq: Let ksoftirqd do its job") has the
> effect of deferring timer handling in case of high CPU load, hence
> delaying the delayed work allthought the worker is running which
> high realtime priority.
>
> As hrtimers are not managed by softirqs, this patch replaces the
> delayed work by a plain work and uses an hrtimer to schedule that work.
>
>
> If I run the same test or 'chrt 50 hackbench 20 -l 150' or any task where I change the prio with chrt and that runs long enough, I get a system reset from the watchdog because it times out. This only happens if the watchdog is already enabled on boot and CONFIG_PREEMPT_RT_FULL is set.
>
> Any idea if I'm missing something essential? If I understand it correctly, the two commits fix the framework and therefore the dw_wdt driver doesn't need any updates.
>
I find your e-mail confusing, sorry. The subject says that the watchdog is not
triggering, the description says that it is triggering when it should not.
You also provide no information on whether the watchdog is active (open from user space)
or not. There is some indication in "This only happens if the watchdog is already
enabled on boot", but that isn't really precise - it may be enabled on boot but still
open. On top of that, your e-mail suggests that the problem may be a regression,
since you refer to a specific kernel release, yet you provide no information on whether
the very same test worked with a different kernel version, or what that kernel
version would be.
Please not only describe what you are doing, but also provide the complete context.
Specifically,
- Did this ever work ? If yes, what are working kernel versions ?
- Is the watchdog device open ?
- Does it make a difference if it is ?
- What is the configured watchdog timeout (both from BIOS/ROMMON and in Linux) ?
Thanks,
Guenter
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-18 13:21 [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering Steffen Trumtrar
2018-09-18 13:46 ` Guenter Roeck
@ 2018-09-18 18:14 ` Julia Cartwright
1 sibling, 0 replies; 27+ messages in thread
From: Julia Cartwright @ 2018-09-18 18:14 UTC (permalink / raw)
To: Steffen Trumtrar
Cc: linux-watchdog, Wim Van Sebroeck, Guenter Roeck,
Christophe Leroy, linux-rt-users
On Tue, Sep 18, 2018 at 03:21:08PM +0200, Steffen Trumtrar wrote:
>
> Hi all!
>
> I'm seeing an issue with the dw_wdt watchdog on the SoCFPGA ARM platform
> with the latest linux-rt v4.18.5-rt4. Actually I seem to have the same
> problem, that these patches try to fix:
>
> 38a1222ae4f364d5bd5221fe305dbb0889f45d15
> Author: Christophe Leroy <christophe.leroy@c-s.fr>
> AuthorDate: Fri Dec 8 11:18:35 2017 +0100
> Commit: Wim Van Sebroeck <wim@iguana.be>
> CommitDate: Thu Dec 28 20:45:57 2017 +0100
>
> Follows: v4.15-rc3 (345)
> Precedes: v4.16-rc1 (13997)
>
> watchdog: core: make sure the watchdog worker always works
>
> When running a command like 'chrt -f 50 dd if=/dev/zero of=/dev/null',
> the watchdog_worker fails to service the HW watchdog and the
> HW watchdog fires long before the watchdog soft timeout.
>
> At the moment, the watchdog_worker is invoked as a delayed work.
> Delayed works are handled by non realtime kernel threads. The
> WQ_HIGHPRI flag only increases the niceness of that threads.
>
> This patch replaces the delayed work logic by kthread delayed work,
> and sets the associated kernel task to SCHED_FIFO with the highest
> priority, in order to ensure that the watchdog worker will run as
> soon as possible.
>
>
> 1ff688209e2ed23f699269b9733993e2ce123fd2
> Author: Christophe Leroy <christophe.leroy@c-s.fr>
> AuthorDate: Thu Jan 18 12:11:21 2018 +0100
> Commit: Wim Van Sebroeck <wim@iguana.be>
> CommitDate: Sun Jan 21 12:44:59 2018 +0100
>
> Follows: v4.15-rc3 (349)
> Precedes: v4.16-rc1 (13993)
>
> watchdog: core: make sure the watchdog_worker is not deferred
>
> commit 4cd13c21b207e ("softirq: Let ksoftirqd do its job") has the
> effect of deferring timer handling in case of high CPU load, hence
> delaying the delayed work allthought the worker is running which
> high realtime priority.
>
> As hrtimers are not managed by softirqs, this patch replaces the
> delayed work by a plain work and uses an hrtimer to schedule that work.
The above two commits try very hard to ensure timely wakeup and
execution of the watchdogd thread: first by moving to kthread
delayed work, and secondly to vanilla kthread work + hrtimer.
This is sufficient on mainline, because hrtimer expiry fns are
unconditionally executed in hardirq context. With PREEMPT_RT_FULL,
however, the hrtimer expiry functions are executed in softirq context
unless explicitly opted out.
...meaning that w/ PREEMPT_RT_FULL the expiry (and therefore the
watchdogd wakeup) may be indefinitely starved if there are runnable RT
tasks of higher priority than the softirq callback thread (ktimersoftd @
SCHED_FIFO 1 by default).
This is an inversion.
One possible solution is to opt out of the hrtimer softirq deferral by
making use of HRTIMER_MODE_HARD; however, the expiry function will then
need to be vetted for use in hardirq context w/ PREEMPT_RT_FULL. From a
cursory glance at the kthread_worker locking, it is not hardirq safe. :-\
Julia
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-18 13:46 ` Guenter Roeck
@ 2018-09-19 6:46 ` Steffen Trumtrar
2018-09-19 19:43 ` Guenter Roeck
2018-09-20 8:18 ` [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering Tim Sander
1 sibling, 1 reply; 27+ messages in thread
From: Steffen Trumtrar @ 2018-09-19 6:46 UTC (permalink / raw)
To: Guenter Roeck
Cc: linux-watchdog, Wim Van Sebroeck, Christophe Leroy, linux-rt-users
On Tue, Sep 18, 2018 at 06:46:15AM -0700, Guenter Roeck wrote:
> On 09/18/2018 06:21 AM, Steffen Trumtrar wrote:
> >
> > Hi all!
> >
> > I'm seeing an issue with the dw_wdt watchdog on the SoCFPGA ARM platform with the latest linux-rt v4.18.5-rt4. Actually I seem to have the same problem, that these patches try to fix:
> >
> > 38a1222ae4f364d5bd5221fe305dbb0889f45d15
> > Author: Christophe Leroy <christophe.leroy@c-s.fr>
> > AuthorDate: Fri Dec 8 11:18:35 2017 +0100
> > Commit: Wim Van Sebroeck <wim@iguana.be>
> > CommitDate: Thu Dec 28 20:45:57 2017 +0100
> >
> > Follows: v4.15-rc3 (345)
> > Precedes: v4.16-rc1 (13997)
> >
> > watchdog: core: make sure the watchdog worker always works
> >
> > When running a command like 'chrt -f 50 dd if=/dev/zero of=/dev/null',
> > the watchdog_worker fails to service the HW watchdog and the
> > HW watchdog fires long before the watchdog soft timeout.
> >
> > At the moment, the watchdog_worker is invoked as a delayed work.
> > Delayed works are handled by non realtime kernel threads. The
> > WQ_HIGHPRI flag only increases the niceness of that threads.
> >
> > This patch replaces the delayed work logic by kthread delayed work,
> > and sets the associated kernel task to SCHED_FIFO with the highest
> > priority, in order to ensure that the watchdog worker will run as
> > soon as possible.
> >
> >
> > 1ff688209e2ed23f699269b9733993e2ce123fd2
> > Author: Christophe Leroy <christophe.leroy@c-s.fr>
> > AuthorDate: Thu Jan 18 12:11:21 2018 +0100
> > Commit: Wim Van Sebroeck <wim@iguana.be>
> > CommitDate: Sun Jan 21 12:44:59 2018 +0100
> >
> > Follows: v4.15-rc3 (349)
> > Precedes: v4.16-rc1 (13993)
> >
> > watchdog: core: make sure the watchdog_worker is not deferred
> >
> > commit 4cd13c21b207e ("softirq: Let ksoftirqd do its job") has the
> > effect of deferring timer handling in case of high CPU load, hence
> > delaying the delayed work allthought the worker is running which
> > high realtime priority.
> >
> > As hrtimers are not managed by softirqs, this patch replaces the
> > delayed work by a plain work and uses an hrtimer to schedule that work.
> >
> >
> > If I run the same test or 'chrt 50 hackbench 20 -l 150' or any task where I change the prio with chrt and that runs long enough, I get a system reset from the watchdog because it times out. This only happens if the watchdog is already enabled on boot and CONFIG_PREEMPT_RT_FULL is set.
> >
> > Any idea if I'm missing something essential? If I understand it correctly, the two commits fix the framework and therefore the dw_wdt driver doesn't need any updates.
> >
>
> I find your e-mail confusing, sorry. The subject says that the watchdog is not
> triggering, the description says that it is triggering when it should not.
>
Sorry. Let me try again.
The problem I observe is that the watchdog fires because it doesn't get pinged.
ksoftirqd seems to be blocked although it runs at a much higher priority than the
blocking userspace task.
> You also provide no information if the watchdog is active (open from user space)
> or not. There is some indication in "This only happens if the watchdog is already
> enabled on boot" but that isn't really precise - it may be enabled on boot but still
> open. On top of that, your e-mail suggests that the problem may be a regression,
> since you refer to a specific kernel release, yet you provide no information if
> the very same test worked with a different kernel version, or what that kernel
> version would be.
>
> Please not only describe what you are doing, but also provide the complete context.
> Specifically,
> - Did this ever work ? If yes, what are working kernel versions ?
I don't know if it ever worked; this is the first kernel version I tried.
Given the two commits mentioned, I assume that it won't work in older versions.
> - Is the watchdog device open ?
> - Does it make a difference if it is ?
In my test case, the device is not open. It gets started by the bootloader and then keeps
running. I tried opening the device after it was already running, but that does not make
a difference. If the watchdog is put into the running state by opening it from userspace,
the bug does not occur. If the bootloader starts it and the kernel just continues pinging
the watchdog, it does occur, open or not.
> - What is the configured watchdog timeout (both from BIOS/ROMMON and in Linux) ?
The watchdog is configured with a timeout of 5 seconds, both in the bootloader and in the kernel.
The dw_wdt driver rounds it to the nearest value it supports. Higher values do not make
the bug go away.
Thanks,
Steffen
--
Pengutronix e.K. | |
Industrial Linux Solutions | http://www.pengutronix.de/ |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-19 6:46 ` Steffen Trumtrar
@ 2018-09-19 19:43 ` Guenter Roeck
2018-09-20 20:48 ` Julia Cartwright
0 siblings, 1 reply; 27+ messages in thread
From: Guenter Roeck @ 2018-09-19 19:43 UTC (permalink / raw)
To: Steffen Trumtrar
Cc: linux-watchdog, Wim Van Sebroeck, Christophe Leroy, linux-rt-users
On Wed, Sep 19, 2018 at 08:46:19AM +0200, Steffen Trumtrar wrote:
> On Tue, Sep 18, 2018 at 06:46:15AM -0700, Guenter Roeck wrote:
> > On 09/18/2018 06:21 AM, Steffen Trumtrar wrote:
> > >
> > > Hi all!
> > >
> > > I'm seeing an issue with the dw_wdt watchdog on the SoCFPGA ARM platform with the latest linux-rt v4.18.5-rt4. Actually I seem to have the same problem, that these patches try to fix:
> > >
> > > 38a1222ae4f364d5bd5221fe305dbb0889f45d15
> > > Author: Christophe Leroy <christophe.leroy@c-s.fr>
> > > AuthorDate: Fri Dec 8 11:18:35 2017 +0100
> > > Commit: Wim Van Sebroeck <wim@iguana.be>
> > > CommitDate: Thu Dec 28 20:45:57 2017 +0100
> > >
> > > Follows: v4.15-rc3 (345)
> > > Precedes: v4.16-rc1 (13997)
> > >
> > > watchdog: core: make sure the watchdog worker always works
> > >
> > > When running a command like 'chrt -f 50 dd if=/dev/zero of=/dev/null',
> > > the watchdog_worker fails to service the HW watchdog and the
> > > HW watchdog fires long before the watchdog soft timeout.
> > >
> > > At the moment, the watchdog_worker is invoked as a delayed work.
> > > Delayed works are handled by non realtime kernel threads. The
> > > WQ_HIGHPRI flag only increases the niceness of that threads.
> > >
> > > This patch replaces the delayed work logic by kthread delayed work,
> > > and sets the associated kernel task to SCHED_FIFO with the highest
> > > priority, in order to ensure that the watchdog worker will run as
> > > soon as possible.
> > >
> > >
> > > 1ff688209e2ed23f699269b9733993e2ce123fd2
> > > Author: Christophe Leroy <christophe.leroy@c-s.fr>
> > > AuthorDate: Thu Jan 18 12:11:21 2018 +0100
> > > Commit: Wim Van Sebroeck <wim@iguana.be>
> > > CommitDate: Sun Jan 21 12:44:59 2018 +0100
> > >
> > > Follows: v4.15-rc3 (349)
> > > Precedes: v4.16-rc1 (13993)
> > >
> > > watchdog: core: make sure the watchdog_worker is not deferred
> > >
> > > commit 4cd13c21b207e ("softirq: Let ksoftirqd do its job") has the
> > > effect of deferring timer handling in case of high CPU load, hence
> > > delaying the delayed work allthought the worker is running which
> > > high realtime priority.
> > >
> > > As hrtimers are not managed by softirqs, this patch replaces the
> > > delayed work by a plain work and uses an hrtimer to schedule that work.
> > >
> > >
> > > If I run the same test or 'chrt 50 hackbench 20 -l 150' or any task where I change the prio with chrt and that runs long enough, I get a system reset from the watchdog because it times out. This only happens if the watchdog is already enabled on boot and CONFIG_PREEMPT_RT_FULL is set.
> > >
> > > Any idea if I'm missing something essential? If I understand it correctly, the two commits fix the framework and therefore the dw_wdt driver doesn't need any updates.
> > >
> >
> > I find your e-mail confusing, sorry. The subject says that the watchdog is not
> > triggering, the description says that it is triggering when it should not.
> >
>
> Sorry. Let me try again.
> The problem I observe, is that the watchdog is trigged, because it doesn't get pinged.
> The ksoftirqd seems to be blocked although it runs at a much higher priority than the
> blocking userspace task.
>
Are you sure about that ? The other email seemed to suggest that the userspace
task is running at higher priority.
> > You also provide no information if the watchdog is active (open from user space)
> > or not. There is some indication in "This only happens if the watchdog is already
> > enabled on boot" but that isn't really precise - it may be enabled on boot but still
> > open. On top of that, your e-mail suggests that the problem may be a regression,
> > since you refer to a specific kernel release, yet you provide no information if
> > the very same test worked with a different kernel version, or what that kernel
> > version would be.
> >
> > Please not only describe what you are doing, but also provide the complete context.
> > Specifically,
> > - Did this ever work ? If yes, what are working kernel versions ?
>
> I don't know, if it ever worked or not. This is the first kernel version I tried.
> According to the two commits mentioned, I assume that it won't work in older versions.
>
> > - Is the watchdog device open ?
> > - Does it make a difference if it is ?
>
> In my test case, the device is not open. It gets started by the bootloader and than is
> running. I tried opening the device after it was already running, but it does not make
> a difference. If the watchdog is put into running state by opening it from userspace,
> the bug does not occur. If the bootloader starts it and the kernel just continues pinging
> the watchdog, it does occur, open or not.
>
The big question here is whether the watchdog daemon that keeps the watchdog open is
running at a higher priority than the userspace task taking all available CPU
time.
If the watchdog daemon runs at lower priority, the observed behavior would be
as expected.
Overall, we have a number of possibilities to consider:
- The kernel watchdog timer thread is not triggered at all under some
circumstances, meaning it is not set properly. So far we have no real
indication that this is the case (since the code works fine unless some
userspace task takes all available CPU time).
- The watchdog device is closed. The kernel watchdog timer thread is
starved and does not get to run. The question is what to do in this
situation. In a real time system, this is almost always a fatal
condition. Should the system really be kept alive in this situation ?
- The watchdog device is open.
- The watchdog daemon runs at higher priority than the process taking
all CPU time. Everything should work as expected.
- The watchdog daemon runs at the same or at a lower priority than
the process taking all CPU time. I would argue that, in this case,
everything is also working as expected: The watchdog daemon is starved,
does not get to run, and the system resets because, per its configuration,
it is in bad shape.
Overall, the only real possible problem would be if the watchdog thread
in the kernel does not run because of some bug, rather than because it is
actually starved. We would probably have to instrument the kernel to find
out whether this is the case, unless someone has a better idea.
Am I missing something ?
Thanks,
Guenter
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-18 13:46 ` Guenter Roeck
2018-09-19 6:46 ` Steffen Trumtrar
@ 2018-09-20 8:18 ` Tim Sander
1 sibling, 0 replies; 27+ messages in thread
From: Tim Sander @ 2018-09-20 8:18 UTC (permalink / raw)
To: Guenter Roeck
Cc: Steffen Trumtrar, linux-watchdog, Wim Van Sebroeck,
Christophe Leroy, linux-rt-users
Hi
I am also seeing this problem.
Am Dienstag, 18. September 2018, 15:46:15 CEST schrieb Guenter Roeck:
> Please not only describe what you are doing, but also provide the complete
> context. Specifically,
> - Did this ever work ?
I have not found a working kernel version.
> If yes, what are working kernel versions ?
I have tried different kernel versions which were *not* working:
patch-4.11.12-rt15 (with some Altera/Intel driver patches)
4.16.8-rt3
4.18-rc8-rt1
4.18.7-rt5 (with adxl372 iio patches and the denali nand race fix reported on mainline)
> - Is the watchdog device open ?
Good point: the bootloader arms the watchdog, and
WATCHDOG_HANDLE_BOOT_ENABLED is enabled in the kernel.
But I forgot to let usermode take over.
> - Does it make a difference if it is ?
If I set RuntimeWatchdogSec=30 in /etc/systemd/system.conf (i.e. enable the
usermode watchdog), the problem seems to go away (at least I haven't seen a
reboot in my short test runs so far).
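For reference, the user-mode takeover described here is a single setting in /etc/systemd/system.conf (illustrative fragment; the 30 s value matches the hardware timeout used in this setup):

```ini
[Manager]
# Have systemd open /dev/watchdog and ping it periodically
RuntimeWatchdogSec=30
```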
> - What is the configured watchdog timeout (both from BIOS/ROMMON and in
> Linux) ?
30s
So to wrap up: I suspect that the option
WATCHDOG_HANDLE_BOOT_ENABLED, without usermode taking over in Linux, leads to
some strange behaviour with preempt rt?
Best regards
Tim
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-19 19:43 ` Guenter Roeck
@ 2018-09-20 20:48 ` Julia Cartwright
2018-09-21 13:34 ` Guenter Roeck
2018-09-24 7:24 ` Steffen Trumtrar
0 siblings, 2 replies; 27+ messages in thread
From: Julia Cartwright @ 2018-09-20 20:48 UTC (permalink / raw)
To: Guenter Roeck
Cc: Steffen Trumtrar, linux-watchdog, Wim Van Sebroeck,
Christophe Leroy, linux-rt-users
Hello all-
On Wed, Sep 19, 2018 at 12:43:03PM -0700, Guenter Roeck wrote:
> On Wed, Sep 19, 2018 at 08:46:19AM +0200, Steffen Trumtrar wrote:
> > On Tue, Sep 18, 2018 at 06:46:15AM -0700, Guenter Roeck wrote:
[..]
> > The problem I observe, is that the watchdog is trigged, because it doesn't get pinged.
> > The ksoftirqd seems to be blocked although it runs at a much higher priority than the
> > blocking userspace task.
> >
> Are you sure about that ? The other email seemed to suggest that the userspace
> task is running at higher priority.
Also: ksoftirqd is irrelevant on RT for the kernel watchdog thread. The
relevant thread is ktimersoftd, which is the thread responsible for
invoking hrtimer expiry functions, like what's being used for watchdogd.
[..]
> Overall, we have a number possibilities to consider:
>
> - The kernel watchdog timer thread is not triggered at all under some
> circumstances, meaning it is not set properly. So far we have no real
> indication that this is the case (since the code works fine unless some
> userspace task takes all available CPU time).
What do you mean by "not triggered"? Do you mean woken up/activated
from a scheduling perspective? In the case I identified in my other
email, the watchdogd thread wakeup doesn't even occur, even when the
periodic ping timer expires, because ktimersoftd has been starved.
I suspect that's what's going on for Steffen, but am not yet sure.
> - The watchdog device is closed. The kernel watchdog timer thread is
> starved and does not get to run. The question is what to do in this
> situation. In a real time system, this is almost always a fatal
> condition. Should the system really be kept alive in this situation ?
Sometimes it's the right decision, sometimes it's not. The only sensible
thing to do is to let the user make the decision that's right for
their application needs, by allowing the relative prioritization of
watchdogd and their application threads.
...which they can do now, but it's not effective on RT because of the
timer deferral through ktimersoftd.
The solution, in my mind, and as I mentioned in my other email, is to
opt out of the ktimersoftd deferral mechanism. This requires some
tweaking of the kthread_worker bits to ensure safety in hardirq
context, but that seems straightforward. See below.
Julia
-- 8< --
diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
index ffbdc4642ea5..9c2b3e5cebdc 100644
--- a/drivers/watchdog/watchdog_dev.c
+++ b/drivers/watchdog/watchdog_dev.c
@@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
return -ENODEV;
kthread_init_work(&wd_data->work, watchdog_ping_work);
- hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
wd_data->timer.function = watchdog_timer_expired;
if (wdd->id == 0) {
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index c1961761311d..ad292898f7f2 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -85,7 +85,7 @@ enum {
struct kthread_worker {
unsigned int flags;
- spinlock_t lock;
+ raw_spinlock_t lock;
struct list_head work_list;
struct list_head delayed_work_list;
struct task_struct *task;
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 486dedbd9af5..c1d9ee6671c6 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -597,7 +597,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
struct lock_class_key *key)
{
memset(worker, 0, sizeof(struct kthread_worker));
- spin_lock_init(&worker->lock);
+ raw_spin_lock_init(&worker->lock);
lockdep_set_class_and_name(&worker->lock, key, name);
INIT_LIST_HEAD(&worker->work_list);
INIT_LIST_HEAD(&worker->delayed_work_list);
@@ -639,21 +639,21 @@ int kthread_worker_fn(void *worker_ptr)
if (kthread_should_stop()) {
__set_current_state(TASK_RUNNING);
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
worker->task = NULL;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
return 0;
}
work = NULL;
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
if (!list_empty(&worker->work_list)) {
work = list_first_entry(&worker->work_list,
struct kthread_work, node);
list_del_init(&work->node);
}
worker->current_work = work;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
if (work) {
__set_current_state(TASK_RUNNING);
@@ -810,12 +810,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
bool ret = false;
unsigned long flags;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
if (!queuing_blocked(worker, work)) {
kthread_insert_work(worker, work, &worker->work_list);
ret = true;
}
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_queue_work);
@@ -841,7 +841,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
if (WARN_ON_ONCE(!worker))
return;
- spin_lock(&worker->lock);
+ raw_spin_lock(&worker->lock);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -850,7 +850,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
list_del_init(&work->node);
kthread_insert_work(worker, work, &worker->work_list);
- spin_unlock(&worker->lock);
+ raw_spin_unlock(&worker->lock);
}
EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
@@ -906,14 +906,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
unsigned long flags;
bool ret = false;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
if (!queuing_blocked(worker, work)) {
__kthread_queue_delayed_work(worker, dwork, delay);
ret = true;
}
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
@@ -949,7 +949,7 @@ void kthread_flush_work(struct kthread_work *work)
if (!worker)
return;
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -961,7 +961,7 @@ void kthread_flush_work(struct kthread_work *work)
else
noop = true;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
if (!noop)
wait_for_completion(&fwork.done);
@@ -994,9 +994,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
* any queuing is blocked by setting the canceling counter.
*/
work->canceling++;
- spin_unlock_irqrestore(&worker->lock, *flags);
+ raw_spin_unlock_irqrestore(&worker->lock, *flags);
del_timer_sync(&dwork->timer);
- spin_lock_irqsave(&worker->lock, *flags);
+ raw_spin_lock_irqsave(&worker->lock, *flags);
work->canceling--;
}
@@ -1043,7 +1043,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
unsigned long flags;
int ret = false;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
/* Do not bother with canceling when never queued. */
if (!work->worker)
@@ -1060,7 +1060,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
fast_queue:
__kthread_queue_delayed_work(worker, dwork, delay);
out:
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
@@ -1074,7 +1074,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
if (!worker)
goto out;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -1088,13 +1088,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
* In the meantime, block any queuing by setting the canceling counter.
*/
work->canceling++;
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
kthread_flush_work(work);
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
work->canceling--;
out_fast:
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
out:
return ret;
}
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-20 20:48 ` Julia Cartwright
@ 2018-09-21 13:34 ` Guenter Roeck
2018-09-21 16:42 ` Julia Cartwright
2018-09-24 7:24 ` Steffen Trumtrar
1 sibling, 1 reply; 27+ messages in thread
From: Guenter Roeck @ 2018-09-21 13:34 UTC (permalink / raw)
To: Julia Cartwright
Cc: Steffen Trumtrar, linux-watchdog, Wim Van Sebroeck,
Christophe Leroy, linux-rt-users
On 09/20/2018 01:48 PM, Julia Cartwright wrote:
> Hello all-
>
> On Wed, Sep 19, 2018 at 12:43:03PM -0700, Guenter Roeck wrote:
>> On Wed, Sep 19, 2018 at 08:46:19AM +0200, Steffen Trumtrar wrote:
>>> On Tue, Sep 18, 2018 at 06:46:15AM -0700, Guenter Roeck wrote:
> [..]
>>> The problem I observe is that the watchdog is triggered, because it doesn't get pinged.
>>> The ksoftirqd seems to be blocked although it runs at a much higher priority than the
>>> blocking userspace task.
>>>
>> Are you sure about that ? The other email seemed to suggest that the userspace
>> task is running at higher priority.
>
> Also: ksoftirqd is irrelevant on RT for the kernel watchdog thread. The
> relevant thread is ktimersoftd, which is the thread responsible for
> invoking hrtimer expiry functions, like what's being used for watchdogd.
>
> [..]
>> Overall, we have a number of possibilities to consider:
>>
>> - The kernel watchdog timer thread is not triggered at all under some
>> circumstances, meaning it is not set properly. So far we have no real
>> indication that this is the case (since the code works fine unless some
>> userspace task takes all available CPU time).
>
> What do you mean by "not triggered". Do you mean woken-up/activated
> from a scheduling perspective? In the case I identified in my other
> email, the watchdogd thread wakeup doesn't even occur, even when the
> periodic ping timer expires, because ktimersoftd has been starved.
>
Sorry for not using the correct term. Sometimes I am a bit sloppy.
Yes, I meant "woken-up/activated from a scheduling perspective".
> I suspect that's what's going on for Steffen, but am not yet sure.
>
>> - The watchdog device is closed. The kernel watchdog timer thread is
>> starved and does not get to run. The question is what to do in this
>> situation. In a real time system, this is almost always a fatal
>> condition. Should the system really be kept alive in this situation ?
>
> Sometimes it's the right decision, sometimes it's not. The only sensible
> thing to do is to allow the user to make the decision that's right for
> their application needs by allowing the relative prioritization of
> watchdogd and their application threads.
>
Agreed, but that doesn't help if the watchdog device is not open or if the
hardware watchdog interval is too small and the kernel mechanism is needed
to ping the watchdog.
> ...which they can do now, but it's not effective on RT because of the
> timer deferral through ktimersoftd.
>
> The solution, in my mind, and like I mentioned in my other email, is to
> opt-out of the ktimersoftd-deferral mechanism. This requires some
> tweaking with the kthread_worker bits to ensure safety in hardirq
> context, but that seems straightforward. See the below.
>
Makes sense to me, though I have no idea what it would take to push
the necessary changes into the core kernel.
However, I must be missing something: Looking into the kernel code,
it seems to me that the spin_lock functions call the respective
raw_spinlock functions right away. With that in mind, why would the
kernel code change be necessary ? Also, I don't see HRTIMER_MODE_REL_HARD
defined anywhere. Is this RT specific ?
Thanks,
Guenter
> Julia
>
> -- 8< --
> diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
> index ffbdc4642ea5..9c2b3e5cebdc 100644
> --- a/drivers/watchdog/watchdog_dev.c
> +++ b/drivers/watchdog/watchdog_dev.c
> @@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
> return -ENODEV;
>
> kthread_init_work(&wd_data->work, watchdog_ping_work);
> - hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> + hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
> wd_data->timer.function = watchdog_timer_expired;
>
> if (wdd->id == 0) {
> diff --git a/include/linux/kthread.h b/include/linux/kthread.h
> index c1961761311d..ad292898f7f2 100644
> --- a/include/linux/kthread.h
> +++ b/include/linux/kthread.h
> @@ -85,7 +85,7 @@ enum {
>
> struct kthread_worker {
> unsigned int flags;
> - spinlock_t lock;
> + raw_spinlock_t lock;
> struct list_head work_list;
> struct list_head delayed_work_list;
> struct task_struct *task;
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 486dedbd9af5..c1d9ee6671c6 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -597,7 +597,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
> struct lock_class_key *key)
> {
> memset(worker, 0, sizeof(struct kthread_worker));
> - spin_lock_init(&worker->lock);
> + raw_spin_lock_init(&worker->lock);
> lockdep_set_class_and_name(&worker->lock, key, name);
> INIT_LIST_HEAD(&worker->work_list);
> INIT_LIST_HEAD(&worker->delayed_work_list);
> @@ -639,21 +639,21 @@ int kthread_worker_fn(void *worker_ptr)
>
> if (kthread_should_stop()) {
> __set_current_state(TASK_RUNNING);
> - spin_lock_irq(&worker->lock);
> + raw_spin_lock_irq(&worker->lock);
> worker->task = NULL;
> - spin_unlock_irq(&worker->lock);
> + raw_spin_unlock_irq(&worker->lock);
> return 0;
> }
>
> work = NULL;
> - spin_lock_irq(&worker->lock);
> + raw_spin_lock_irq(&worker->lock);
> if (!list_empty(&worker->work_list)) {
> work = list_first_entry(&worker->work_list,
> struct kthread_work, node);
> list_del_init(&work->node);
> }
> worker->current_work = work;
> - spin_unlock_irq(&worker->lock);
> + raw_spin_unlock_irq(&worker->lock);
>
> if (work) {
> __set_current_state(TASK_RUNNING);
> @@ -810,12 +810,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
> bool ret = false;
> unsigned long flags;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
> if (!queuing_blocked(worker, work)) {
> kthread_insert_work(worker, work, &worker->work_list);
> ret = true;
> }
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> return ret;
> }
> EXPORT_SYMBOL_GPL(kthread_queue_work);
> @@ -841,7 +841,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
> if (WARN_ON_ONCE(!worker))
> return;
>
> - spin_lock(&worker->lock);
> + raw_spin_lock(&worker->lock);
> /* Work must not be used with >1 worker, see kthread_queue_work(). */
> WARN_ON_ONCE(work->worker != worker);
>
> @@ -850,7 +850,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
> list_del_init(&work->node);
> kthread_insert_work(worker, work, &worker->work_list);
>
> - spin_unlock(&worker->lock);
> + raw_spin_unlock(&worker->lock);
> }
> EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
>
> @@ -906,14 +906,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
> unsigned long flags;
> bool ret = false;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
>
> if (!queuing_blocked(worker, work)) {
> __kthread_queue_delayed_work(worker, dwork, delay);
> ret = true;
> }
>
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> return ret;
> }
> EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
> @@ -949,7 +949,7 @@ void kthread_flush_work(struct kthread_work *work)
> if (!worker)
> return;
>
> - spin_lock_irq(&worker->lock);
> + raw_spin_lock_irq(&worker->lock);
> /* Work must not be used with >1 worker, see kthread_queue_work(). */
> WARN_ON_ONCE(work->worker != worker);
>
> @@ -961,7 +961,7 @@ void kthread_flush_work(struct kthread_work *work)
> else
> noop = true;
>
> - spin_unlock_irq(&worker->lock);
> + raw_spin_unlock_irq(&worker->lock);
>
> if (!noop)
> wait_for_completion(&fwork.done);
> @@ -994,9 +994,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
> * any queuing is blocked by setting the canceling counter.
> */
> work->canceling++;
> - spin_unlock_irqrestore(&worker->lock, *flags);
> + raw_spin_unlock_irqrestore(&worker->lock, *flags);
> del_timer_sync(&dwork->timer);
> - spin_lock_irqsave(&worker->lock, *flags);
> + raw_spin_lock_irqsave(&worker->lock, *flags);
> work->canceling--;
> }
>
> @@ -1043,7 +1043,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
> unsigned long flags;
> int ret = false;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
>
> /* Do not bother with canceling when never queued. */
> if (!work->worker)
> @@ -1060,7 +1060,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
> fast_queue:
> __kthread_queue_delayed_work(worker, dwork, delay);
> out:
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> return ret;
> }
> EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
> @@ -1074,7 +1074,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
> if (!worker)
> goto out;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
> /* Work must not be used with >1 worker, see kthread_queue_work(). */
> WARN_ON_ONCE(work->worker != worker);
>
> @@ -1088,13 +1088,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
> * In the meantime, block any queuing by setting the canceling counter.
> */
> work->canceling++;
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> kthread_flush_work(work);
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
> work->canceling--;
>
> out_fast:
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> out:
> return ret;
> }
>
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-21 13:34 ` Guenter Roeck
@ 2018-09-21 16:42 ` Julia Cartwright
2018-09-21 20:21 ` Guenter Roeck
0 siblings, 1 reply; 27+ messages in thread
From: Julia Cartwright @ 2018-09-21 16:42 UTC (permalink / raw)
To: Guenter Roeck, Tim Sander, Steffen Trumtrar
Cc: linux-watchdog, Wim Van Sebroeck, Christophe Leroy, linux-rt-users
On Fri, Sep 21, 2018 at 06:34:24AM -0700, Guenter Roeck wrote:
> On 09/20/2018 01:48 PM, Julia Cartwright wrote:
> > On Wed, Sep 19, 2018 at 12:43:03PM -0700, Guenter Roeck wrote:
[..]
> > > Overall, we have a number of possibilities to consider:
> > >
> > > - The kernel watchdog timer thread is not triggered at all under some
> > > circumstances, meaning it is not set properly. So far we have no real
> > > indication that this is the case (since the code works fine unless some
> > > userspace task takes all available CPU time).
> >
> > What do you mean by "not triggered". Do you mean woken-up/activated
> > from a scheduling perspective? In the case I identified in my other
> > email, the watchdogd thread wakeup doesn't even occur, even when the
> > periodic ping timer expires, because ktimersoftd has been starved.
> >
>
> Sorry for not using the correct term. Sometimes I am a bit sloppy.
> Yes, I meant "woken-up/activated from a scheduling perspective".
Thanks for the clarification. I think we're on the same page. :)
> > I suspect that's what's going on for Steffen, but am not yet sure.
> >
> > > - The watchdog device is closed. The kernel watchdog timer thread is
> > > starved and does not get to run. The question is what to do in this
> > > situation. In a real time system, this is almost always a fatal
> > > condition. Should the system really be kept alive in this situation ?
> >
> > Sometimes it's the right decision, sometimes it's not. The only sensible
> > thing to do is to allow the user to make the decision that's right for
> > their application needs by allowing the relative prioritization of
> > watchdogd and their application threads.
>
> Agreed, but that doesn't help if the watchdog device is not open or if the
> hardware watchdog interval is too small and the kernel mechanism is needed
> to ping the watchdog.
Makes sense.
> > ...which they can do now, but it's not effective on RT because of the
> > timer deferral through ktimersoftd.
> >
> > The solution, in my mind, and like I mentioned in my other email, is to
> > opt-out of the ktimersoftd-deferral mechanism. This requires some
> > tweaking with the kthread_worker bits to ensure safety in hardirq
> > context, but that seems straightforward. See the below.
>
> Makes sense to me, though I have no idea what it would take to push
> the necessary changes into the core kernel.
As of now, this bug doesn't exist in mainline because the hrtimer
deferral bits haven't landed yet, as you note below.
> However, I must be missing something: Looking into the kernel code,
> it seems to me that the spin_lock functions call the respective
> raw_spinlock functions right away. With that in mind, why would the
> kernel code change be necessary ? Also, I don't see HRTIMER_MODE_REL_HARD
> defined anywhere. Is this RT specific ?
Yes, there is currently no functional difference in mainline between a
spinlock_t and a raw_spinlock_t. There is also no HRTIMER_MODE_REL_HARD,
as mentioned before. These are features/concepts currently only in the
RT tree, but should be making their way into mainline soon.
As far as the path forward goes, I'd like to get some confirmation from Steffen
and/or Tim that the proposed patch fixes their issue, then I'll cook
some proper patches; the kthread_worker bits could go mainline now
because there is no dependency, but the watchdog change will need to be
RT-only for now.
Julia
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-21 16:42 ` Julia Cartwright
@ 2018-09-21 20:21 ` Guenter Roeck
0 siblings, 0 replies; 27+ messages in thread
From: Guenter Roeck @ 2018-09-21 20:21 UTC (permalink / raw)
To: Julia Cartwright
Cc: Tim Sander, Steffen Trumtrar, linux-watchdog, Wim Van Sebroeck,
Christophe Leroy, linux-rt-users
On Fri, Sep 21, 2018 at 04:42:04PM +0000, Julia Cartwright wrote:
> On Fri, Sep 21, 2018 at 06:34:24AM -0700, Guenter Roeck wrote:
> > On 09/20/2018 01:48 PM, Julia Cartwright wrote:
> > > On Wed, Sep 19, 2018 at 12:43:03PM -0700, Guenter Roeck wrote:
> [..]
> > > > Overall, we have a number of possibilities to consider:
> > > >
> > > > - The kernel watchdog timer thread is not triggered at all under some
> > > > circumstances, meaning it is not set properly. So far we have no real
> > > > indication that this is the case (since the code works fine unless some
> > > > userspace task takes all available CPU time).
> > >
> > > What do you mean by "not triggered". Do you mean woken-up/activated
> > > from a scheduling perspective? In the case I identified in my other
> > > email, the watchdogd thread wakeup doesn't even occur, even when the
> > > periodic ping timer expires, because ktimersoftd has been starved.
> > >
> >
> > Sorry for not using the correct term. Sometimes I am a bit sloppy.
> > Yes, I meant "woken-up/activated from a scheduling perspective".
>
> Thanks for the clarification. I think we're on the same page. :)
>
> > > I suspect that's what's going on for Steffen, but am not yet sure.
> > >
> > > > - The watchdog device is closed. The kernel watchdog timer thread is
> > > > starved and does not get to run. The question is what to do in this
> > > > situation. In a real time system, this is almost always a fatal
> > > > condition. Should the system really be kept alive in this situation ?
> > >
> > > Sometimes it's the right decision, sometimes it's not. The only sensible
> > > thing to do is to allow the user to make the decision that's right for
> > > their application needs by allowing the relative prioritization of
> > > watchdogd and their application threads.
> >
> > Agreed, but that doesn't help if the watchdog device is not open or if the
> > hardware watchdog interval is too small and the kernel mechanism is needed
> > to ping the watchdog.
>
> Makes sense.
>
> > > ...which they can do now, but it's not effective on RT because of the
> > > timer deferral through ktimersoftd.
> > >
> > > The solution, in my mind, and like I mentioned in my other email, is to
> > > opt-out of the ktimersoftd-deferral mechanism. This requires some
> > > tweaking with the kthread_worker bits to ensure safety in hardirq
> > > context, but that seems straightforward. See the below.
> >
> > Makes sense to me, though I have no idea what it would take to push
> > the necessary changes into the core kernel.
>
> As of now, this bug doesn't exist in mainline because the hrtimer
> deferral bits haven't landed yet, as you note below.
>
> > However, I must be missing something: Looking into the kernel code,
> > it seems to me that the spin_lock functions call the respective
> > raw_spinlock functions right away. With that in mind, why would the
> > kernel code change be necessary ? Also, I don't see HRTIMER_MODE_REL_HARD
> > defined anywhere. Is this RT specific ?
>
> Yes, there is currently no functional difference in mainline between a
> spinlock_t and a raw_spinlock_t. There is also no HRTIMER_MODE_REL_HARD,
> as mentioned before. These are features/concepts currently only in the
> RT tree, but should be making their way into mainline soon.
>
> As far as the path forward goes, I'd like to get some confirmation from Steffen
> and/or Tim that the proposed patch fixes their issue, then I'll cook
> some proper patches; the kthread_worker bits could go mainline now
> because there is no dependency, but the watchdog change will need to be
> RT-only for now.
>
SGTM.
Thanks,
Guenter
* Re: [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering
2018-09-20 20:48 ` Julia Cartwright
@ 2018-09-24 7:24 ` Steffen Trumtrar
2018-09-24 7:24 ` Steffen Trumtrar
1 sibling, 0 replies; 27+ messages in thread
From: Steffen Trumtrar @ 2018-09-24 7:24 UTC (permalink / raw)
To: Julia Cartwright
Cc: Guenter Roeck, linux-watchdog, Wim Van Sebroeck,
Christophe Leroy, linux-rt-users
Hi!
Julia Cartwright <julia@ni.com> writes:
> Hello all-
>
> On Wed, Sep 19, 2018 at 12:43:03PM -0700, Guenter Roeck wrote:
>> On Wed, Sep 19, 2018 at 08:46:19AM +0200, Steffen Trumtrar wrote:
>> > On Tue, Sep 18, 2018 at 06:46:15AM -0700, Guenter Roeck wrote:
> [..]
>> > The problem I observe is that the watchdog is triggered, because it
>> > doesn't get pinged. The ksoftirqd seems to be blocked although it
>> > runs at a much higher priority than the blocking userspace task.
>> >
>> Are you sure about that ? The other email seemed to suggest that the
>> userspace task is running at higher priority.
>
> Also: ksoftirqd is irrelevant on RT for the kernel watchdog thread. The
> relevant thread is ktimersoftd, which is the thread responsible for
> invoking hrtimer expiry functions, like what's being used for watchdogd.
>
> [..]
>> Overall, we have a number of possibilities to consider:
>>
>> - The kernel watchdog timer thread is not triggered at all under some
>>   circumstances, meaning it is not set properly. So far we have no real
>>   indication that this is the case (since the code works fine unless
>>   some userspace task takes all available CPU time).
>
> What do you mean by "not triggered". Do you mean woken-up/activated
> from a scheduling perspective? In the case I identified in my other
> email, the watchdogd thread wakeup doesn't even occur, even when the
> periodic ping timer expires, because ktimersoftd has been starved.
>
> I suspect that's what's going on for Steffen, but am not yet sure.
>
>> - The watchdog device is closed. The kernel watchdog timer thread is
>>   starved and does not get to run. The question is what to do in this
>>   situation. In a real time system, this is almost always a fatal
>>   condition. Should the system really be kept alive in this situation ?
>
> Sometimes it's the right decision, sometimes it's not. The only sensible
> thing to do is to allow the user to make the decision that's right for
> their application needs by allowing the relative prioritization of
> watchdogd and their application threads.
>
> ...which they can do now, but it's not effective on RT because of the
> timer deferral through ktimersoftd.
>
> The solution, in my mind, and like I mentioned in my other email, is to
> opt-out of the ktimersoftd-deferral mechanism. This requires some
> tweaking with the kthread_worker bits to ensure safety in hardirq
> context, but that seems straightforward. See the below.
>
I just tested your patch and it works for me \o/
Thanks,
Steffen
--
Pengutronix e.K.                          | Steffen Trumtrar            |
Industrial Linux Solutions                | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany| Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686          | Fax:   +49-5121-206917-5555 |
* [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-28 21:03 ` [PATCH 0/2] Fix watchdogd wakeup deferral on RT Julia Cartwright
2018-09-28 21:03 ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
@ 2018-09-28 21:03 ` Julia Cartwright
2018-09-28 22:38 ` kbuild test robot
` (3 more replies)
1 sibling, 4 replies; 27+ messages in thread
From: Julia Cartwright @ 2018-09-28 21:03 UTC (permalink / raw)
To: Sebastian Andrzej Siewior, Thomas Gleixner
Cc: linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
Guenter Roeck
When PREEMPT_RT_FULL is enabled, all hrtimer expiry functions are
deferred for execution into the context of ktimersoftd unless otherwise
annotated.
Deferring the expiry of the hrtimer used by the watchdog core, however,
is a waste, as the callback does nothing but queue a kthread work item
and wakeup watchdogd.
It's worse than that, too: the deferral through ktimersoftd also means
that for correct behavior a user must adjust the scheduling parameters
of both watchdogd _and_ ktimersoftd, which is unnecessary and has other
side effects (like causing unrelated expiry functions to execute at
potentially elevated priority).
Instead, mark the hrtimer used by the watchdog core as being _HARD to
allow its execution directly from hardirq context. The work done in
this expiry function is well-bounded and minimal.
A user still must adjust the scheduling parameters of the watchdogd
to be correct w.r.t. their application needs.
Cc: Guenter Roeck <linux@roeck-us.net>
Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Reported-by: Tim Sander <tim@krieglstein.org>
Signed-off-by: Julia Cartwright <julia@ni.com>
---
drivers/watchdog/watchdog_dev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
index ffbdc4642ea5..9c2b3e5cebdc 100644
--- a/drivers/watchdog/watchdog_dev.c
+++ b/drivers/watchdog/watchdog_dev.c
@@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
return -ENODEV;
kthread_init_work(&wd_data->work, watchdog_ping_work);
- hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
wd_data->timer.function = watchdog_timer_expired;
if (wdd->id == 0) {
--
2.18.0
* [PATCH 1/2] kthread: convert worker lock to raw spinlock
2018-09-28 21:03 ` [PATCH 0/2] Fix watchdogd wakeup deferral on RT Julia Cartwright
@ 2018-09-28 21:03 ` Julia Cartwright
2018-10-05 16:46 ` Sebastian Andrzej Siewior
2018-10-05 18:10 ` Andrea Parri
2018-09-28 21:03 ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
1 sibling, 2 replies; 27+ messages in thread
From: Julia Cartwright @ 2018-09-28 21:03 UTC (permalink / raw)
To: Ingo Molnar, Thomas Gleixner, Peter Zijlstra
Cc: linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
Sebastian Andrzej Siewior, Guenter Roeck
In order to enable the queuing of kthread work items from hardirq
context even when PREEMPT_RT_FULL is enabled, convert the worker
spin_lock to a raw_spin_lock.
This is only acceptable to do because the work performed under the lock
is well-bounded and minimal.
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Guenter Roeck <linux@roeck-us.net>
Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Reported-by: Tim Sander <tim@krieglstein.org>
Signed-off-by: Julia Cartwright <julia@ni.com>
---
include/linux/kthread.h | 2 +-
kernel/kthread.c | 42 ++++++++++++++++++++---------------------
2 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index c1961761311d..ad292898f7f2 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -85,7 +85,7 @@ enum {
struct kthread_worker {
unsigned int flags;
- spinlock_t lock;
+ raw_spinlock_t lock;
struct list_head work_list;
struct list_head delayed_work_list;
struct task_struct *task;
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 486dedbd9af5..c1d9ee6671c6 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -597,7 +597,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
struct lock_class_key *key)
{
memset(worker, 0, sizeof(struct kthread_worker));
- spin_lock_init(&worker->lock);
+ raw_spin_lock_init(&worker->lock);
lockdep_set_class_and_name(&worker->lock, key, name);
INIT_LIST_HEAD(&worker->work_list);
INIT_LIST_HEAD(&worker->delayed_work_list);
@@ -639,21 +639,21 @@ int kthread_worker_fn(void *worker_ptr)
if (kthread_should_stop()) {
__set_current_state(TASK_RUNNING);
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
worker->task = NULL;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
return 0;
}
work = NULL;
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
if (!list_empty(&worker->work_list)) {
work = list_first_entry(&worker->work_list,
struct kthread_work, node);
list_del_init(&work->node);
}
worker->current_work = work;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
if (work) {
__set_current_state(TASK_RUNNING);
@@ -810,12 +810,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
bool ret = false;
unsigned long flags;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
if (!queuing_blocked(worker, work)) {
kthread_insert_work(worker, work, &worker->work_list);
ret = true;
}
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_queue_work);
@@ -841,7 +841,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
if (WARN_ON_ONCE(!worker))
return;
- spin_lock(&worker->lock);
+ raw_spin_lock(&worker->lock);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -850,7 +850,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
list_del_init(&work->node);
kthread_insert_work(worker, work, &worker->work_list);
- spin_unlock(&worker->lock);
+ raw_spin_unlock(&worker->lock);
}
EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
@@ -906,14 +906,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
unsigned long flags;
bool ret = false;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
if (!queuing_blocked(worker, work)) {
__kthread_queue_delayed_work(worker, dwork, delay);
ret = true;
}
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
@@ -949,7 +949,7 @@ void kthread_flush_work(struct kthread_work *work)
if (!worker)
return;
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -961,7 +961,7 @@ void kthread_flush_work(struct kthread_work *work)
else
noop = true;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
if (!noop)
wait_for_completion(&fwork.done);
@@ -994,9 +994,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
* any queuing is blocked by setting the canceling counter.
*/
work->canceling++;
- spin_unlock_irqrestore(&worker->lock, *flags);
+ raw_spin_unlock_irqrestore(&worker->lock, *flags);
del_timer_sync(&dwork->timer);
- spin_lock_irqsave(&worker->lock, *flags);
+ raw_spin_lock_irqsave(&worker->lock, *flags);
work->canceling--;
}
@@ -1043,7 +1043,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
unsigned long flags;
int ret = false;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
/* Do not bother with canceling when never queued. */
if (!work->worker)
@@ -1060,7 +1060,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
fast_queue:
__kthread_queue_delayed_work(worker, dwork, delay);
out:
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
@@ -1074,7 +1074,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
if (!worker)
goto out;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -1088,13 +1088,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
* In the meantime, block any queuing by setting the canceling counter.
*/
work->canceling++;
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
kthread_flush_work(work);
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
work->canceling--;
out_fast:
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
out:
return ret;
}
--
2.18.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH 0/2] Fix watchdogd wakeup deferral on RT
2018-09-24 7:24 ` Steffen Trumtrar
@ 2018-09-28 21:03 ` Julia Cartwright
2018-09-28 21:03 ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
2018-09-28 21:03 ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
0 siblings, 2 replies; 27+ messages in thread
From: Julia Cartwright @ 2018-09-28 21:03 UTC (permalink / raw)
To: Ingo Molnar, Thomas Gleixner, Peter Zijlstra, Sebastian Andrzej Siewior
Cc: linux-kernel, linux-rt-users, Guenter Roeck, Steffen Trumtrar,
Tim Sander
The following two patches solve an issue reported by Steffen Trumtrar
and Tim Sander on the linux-rt-users mailing list: the wakeup of the
watchdogd thread can be starved by an RT task, because the hrtimer
expiry is deferred through ktimersoftd (on PREEMPT_RT_FULL).
The first patch adjusts the kthread_worker locking to make use of a raw
spinlock, making it suitable for work item queueing from hardirq context
on PREEMPT_RT. This patch can be applied directly to mainline without
any functional change.
The second patch adjusts the hrtimer used by the watchdog core to be a
_HARD timer (and thus not deferred through ktimersoftd w/ PREEMPT_RT).
This patch depends on hrtimer patches carried in the RT patch, and
should therefore land there.
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Cc: Tim Sander <tim@krieglstein.org>
Julia Cartwright (2):
kthread: convert worker lock to raw spinlock
watchdog, rt: prevent deferral of watchdogd wakeup
drivers/watchdog/watchdog_dev.c | 2 +-
include/linux/kthread.h | 2 +-
kernel/kthread.c | 42 ++++++++++++++++-----------------
3 files changed, 23 insertions(+), 23 deletions(-)
--
2.18.0
* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-28 21:03 ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
@ 2018-09-28 22:38 ` kbuild test robot
2018-09-29 6:38 ` Thomas Gleixner
2018-09-28 23:20 ` kbuild test robot
` (2 subsequent siblings)
3 siblings, 1 reply; 27+ messages in thread
From: kbuild test robot @ 2018-09-28 22:38 UTC (permalink / raw)
To: Julia Cartwright
Cc: kbuild-all, Sebastian Andrzej Siewior, Thomas Gleixner,
linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
Guenter Roeck
Hi Julia,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on linus/master]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Julia-Cartwright/kthread-convert-worker-lock-to-raw-spinlock/20180929-052522
config: i386-randconfig-x008-201838 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
# save the attached .config to linux build tree
make ARCH=i386
All errors (new ones prefixed by >>):
drivers//watchdog/watchdog_dev.c: In function 'watchdog_cdev_register':
>> drivers//watchdog/watchdog_dev.c:948:49: error: 'HRTIMER_MODE_REL_HARD' undeclared (first use in this function); did you mean 'HRTIMER_MODE_REL_SOFT'?
hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
^~~~~~~~~~~~~~~~~~~~~
HRTIMER_MODE_REL_SOFT
drivers//watchdog/watchdog_dev.c:948:49: note: each undeclared identifier is reported only once for each function it appears in
vim +948 drivers//watchdog/watchdog_dev.c
919
920 /*
921 * watchdog_cdev_register: register watchdog character device
922 * @wdd: watchdog device
923 * @devno: character device number
924 *
925 * Register a watchdog character device including handling the legacy
926 * /dev/watchdog node. /dev/watchdog is actually a miscdevice and
927 * thus we set it up like that.
928 */
929
930 static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
931 {
932 struct watchdog_core_data *wd_data;
933 int err;
934
935 wd_data = kzalloc(sizeof(struct watchdog_core_data), GFP_KERNEL);
936 if (!wd_data)
937 return -ENOMEM;
938 kref_init(&wd_data->kref);
939 mutex_init(&wd_data->lock);
940
941 wd_data->wdd = wdd;
942 wdd->wd_data = wd_data;
943
944 if (IS_ERR_OR_NULL(watchdog_kworker))
945 return -ENODEV;
946
947 kthread_init_work(&wd_data->work, watchdog_ping_work);
> 948 hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
949 wd_data->timer.function = watchdog_timer_expired;
950
951 if (wdd->id == 0) {
952 old_wd_data = wd_data;
953 watchdog_miscdev.parent = wdd->parent;
954 err = misc_register(&watchdog_miscdev);
955 if (err != 0) {
956 pr_err("%s: cannot register miscdev on minor=%d (err=%d).\n",
957 wdd->info->identity, WATCHDOG_MINOR, err);
958 if (err == -EBUSY)
959 pr_err("%s: a legacy watchdog module is probably present.\n",
960 wdd->info->identity);
961 old_wd_data = NULL;
962 kfree(wd_data);
963 return err;
964 }
965 }
966
967 /* Fill in the data structures */
968 cdev_init(&wd_data->cdev, &watchdog_fops);
969 wd_data->cdev.owner = wdd->ops->owner;
970
971 /* Add the device */
972 err = cdev_add(&wd_data->cdev, devno, 1);
973 if (err) {
974 pr_err("watchdog%d unable to add device %d:%d\n",
975 wdd->id, MAJOR(watchdog_devt), wdd->id);
976 if (wdd->id == 0) {
977 misc_deregister(&watchdog_miscdev);
978 old_wd_data = NULL;
979 kref_put(&wd_data->kref, watchdog_core_data_release);
980 }
981 return err;
982 }
983
984 /* Record time of most recent heartbeat as 'just before now'. */
985 wd_data->last_hw_keepalive = ktime_sub(ktime_get(), 1);
986
987 /*
988 * If the watchdog is running, prevent its driver from being unloaded,
989 * and schedule an immediate ping.
990 */
991 if (watchdog_hw_running(wdd)) {
992 __module_get(wdd->ops->owner);
993 kref_get(&wd_data->kref);
994 if (handle_boot_enabled)
995 hrtimer_start(&wd_data->timer, 0, HRTIMER_MODE_REL);
996 else
997 pr_info("watchdog%d running and kernel based pre-userspace handler disabled\n",
998 wdd->id);
999 }
1000
1001 return 0;
1002 }
1003
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-28 21:03 ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
2018-09-28 22:38 ` kbuild test robot
@ 2018-09-28 23:20 ` kbuild test robot
2018-09-30 14:00 ` Guenter Roeck
2018-10-05 16:52 ` Sebastian Andrzej Siewior
3 siblings, 0 replies; 27+ messages in thread
From: kbuild test robot @ 2018-09-28 23:20 UTC (permalink / raw)
To: Julia Cartwright
Cc: kbuild-all, Sebastian Andrzej Siewior, Thomas Gleixner,
linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
Guenter Roeck
Hi Julia,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on linus/master]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Julia-Cartwright/kthread-convert-worker-lock-to-raw-spinlock/20180929-052522
config: i386-randconfig-s0-201838 (attached as .config)
compiler: gcc-6 (Debian 6.4.0-9) 6.4.0 20171026
reproduce:
# save the attached .config to linux build tree
make ARCH=i386
All errors (new ones prefixed by >>):
drivers//watchdog/watchdog_dev.c: In function 'watchdog_cdev_register':
>> drivers//watchdog/watchdog_dev.c:948:49: error: 'HRTIMER_MODE_REL_HARD' undeclared (first use in this function)
hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
^~~~~~~~~~~~~~~~~~~~~
drivers//watchdog/watchdog_dev.c:948:49: note: each undeclared identifier is reported only once for each function it appears in
vim +/HRTIMER_MODE_REL_HARD +948 drivers//watchdog/watchdog_dev.c
919
920 /*
921 * watchdog_cdev_register: register watchdog character device
922 * @wdd: watchdog device
923 * @devno: character device number
924 *
925 * Register a watchdog character device including handling the legacy
926 * /dev/watchdog node. /dev/watchdog is actually a miscdevice and
927 * thus we set it up like that.
928 */
929
930 static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
931 {
932 struct watchdog_core_data *wd_data;
933 int err;
934
935 wd_data = kzalloc(sizeof(struct watchdog_core_data), GFP_KERNEL);
936 if (!wd_data)
937 return -ENOMEM;
938 kref_init(&wd_data->kref);
939 mutex_init(&wd_data->lock);
940
941 wd_data->wdd = wdd;
942 wdd->wd_data = wd_data;
943
944 if (IS_ERR_OR_NULL(watchdog_kworker))
945 return -ENODEV;
946
947 kthread_init_work(&wd_data->work, watchdog_ping_work);
> 948 hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
949 wd_data->timer.function = watchdog_timer_expired;
950
951 if (wdd->id == 0) {
952 old_wd_data = wd_data;
953 watchdog_miscdev.parent = wdd->parent;
954 err = misc_register(&watchdog_miscdev);
955 if (err != 0) {
956 pr_err("%s: cannot register miscdev on minor=%d (err=%d).\n",
957 wdd->info->identity, WATCHDOG_MINOR, err);
958 if (err == -EBUSY)
959 pr_err("%s: a legacy watchdog module is probably present.\n",
960 wdd->info->identity);
961 old_wd_data = NULL;
962 kfree(wd_data);
963 return err;
964 }
965 }
966
967 /* Fill in the data structures */
968 cdev_init(&wd_data->cdev, &watchdog_fops);
969 wd_data->cdev.owner = wdd->ops->owner;
970
971 /* Add the device */
972 err = cdev_add(&wd_data->cdev, devno, 1);
973 if (err) {
974 pr_err("watchdog%d unable to add device %d:%d\n",
975 wdd->id, MAJOR(watchdog_devt), wdd->id);
976 if (wdd->id == 0) {
977 misc_deregister(&watchdog_miscdev);
978 old_wd_data = NULL;
979 kref_put(&wd_data->kref, watchdog_core_data_release);
980 }
981 return err;
982 }
983
984 /* Record time of most recent heartbeat as 'just before now'. */
985 wd_data->last_hw_keepalive = ktime_sub(ktime_get(), 1);
986
987 /*
988 * If the watchdog is running, prevent its driver from being unloaded,
989 * and schedule an immediate ping.
990 */
991 if (watchdog_hw_running(wdd)) {
992 __module_get(wdd->ops->owner);
993 kref_get(&wd_data->kref);
994 if (handle_boot_enabled)
995 hrtimer_start(&wd_data->timer, 0, HRTIMER_MODE_REL);
996 else
997 pr_info("watchdog%d running and kernel based pre-userspace handler disabled\n",
998 wdd->id);
999 }
1000
1001 return 0;
1002 }
1003
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-28 22:38 ` kbuild test robot
@ 2018-09-29 6:38 ` Thomas Gleixner
2018-09-29 22:13 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 27+ messages in thread
From: Thomas Gleixner @ 2018-09-29 6:38 UTC (permalink / raw)
To: kbuild test robot
Cc: Julia Cartwright, kbuild-all, Sebastian Andrzej Siewior,
linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
Guenter Roeck
On Sat, 29 Sep 2018, kbuild test robot wrote:
> [also build test ERROR on v4.19-rc5 next-20180928]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
It's against the RT tree, so it won't work against next or upstream. I
think it would be good to have a machine parseable tag to describe which
tree/branch/commit patches are applicable to. If that tag is missing, the
bot can assume it's against upstream/next.
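A hypothetical sketch of what such a machine-parseable tag could look like in a patch header (the tag name and target used here are invented for illustration; no such convention exists at this point):

```
Base-tree: https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.18.y-rt
```

A bot could then check out the named tree and branch before applying the patch, and fall back to upstream/next when the tag is absent.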
Thanks,
tglx
* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-29 6:38 ` Thomas Gleixner
@ 2018-09-29 22:13 ` Sebastian Andrzej Siewior
2018-09-30 1:41 ` [kbuild-all] " Li, Philip
0 siblings, 1 reply; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-09-29 22:13 UTC (permalink / raw)
To: Thomas Gleixner
Cc: kbuild test robot, Julia Cartwright, kbuild-all, linux-kernel,
linux-rt-users, Steffen Trumtrar, Tim Sander, Guenter Roeck
On 2018-09-29 08:38:55 [+0200], Thomas Gleixner wrote:
> On Sat, 29 Sep 2018, kbuild test robot wrote:
> > [also build test ERROR on v4.19-rc5 next-20180928]
> > [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> >
>
> It's against the RT tree, so it won't work against next or upstream. I
> think it would be good to have a machine parseable tag to describe which
> tree/branch/commit patches are applicable to. If that tag is missing, the
> bot can assume it's against upstream/next.
I asked the testing team to either ignore patches tagged [PATCH RT] or
to test them against the RT tree. I think it worked, since I haven't
seen those mails.
> Thanks,
>
> tglx
Sebastian
* RE: [kbuild-all] [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-29 22:13 ` Sebastian Andrzej Siewior
@ 2018-09-30 1:41 ` Li, Philip
0 siblings, 0 replies; 27+ messages in thread
From: Li, Philip @ 2018-09-30 1:41 UTC (permalink / raw)
To: Sebastian Andrzej Siewior, Thomas Gleixner
Cc: linux-rt-users, Julia Cartwright, linux-kernel, kbuild-all,
Tim Sander, Steffen Trumtrar, Guenter Roeck, lkp
> Subject: Re: [kbuild-all] [PATCH RT 2/2] watchdog, rt: prevent deferral of
> watchdogd wakeup
>
> On 2018-09-29 08:38:55 [+0200], Thomas Gleixner wrote:
> > On Sat, 29 Sep 2018, kbuild test robot wrote:
> > > [also build test ERROR on v4.19-rc5 next-20180928]
> > > [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system]
> > >
> >
> > It's against the RT tree, so it won't work against next or upstream. I
> > think it would be good to have a machine parseable tag to describe which
> > tree/branch/commit patches are applicable to. If that tag is missing, the
> > bot can assume it's against upstream/next.
>
> I asked the testing team to ignore patches or test against the RT tree
> if the patch is tagged [PATCH RT]. I think it worked since I haven't
> seen those mails.
Thanks Sebastian for the input; we will consider adding this to our TODO.
>
> > Thanks,
> >
> > tglx
>
> Sebastian
> _______________________________________________
> kbuild-all mailing list
> kbuild-all@lists.01.org
> https://lists.01.org/mailman/listinfo/kbuild-all
* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-28 21:03 ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
2018-09-28 22:38 ` kbuild test robot
2018-09-28 23:20 ` kbuild test robot
@ 2018-09-30 14:00 ` Guenter Roeck
2018-10-05 16:52 ` Sebastian Andrzej Siewior
3 siblings, 0 replies; 27+ messages in thread
From: Guenter Roeck @ 2018-09-30 14:00 UTC (permalink / raw)
To: Julia Cartwright, Sebastian Andrzej Siewior, Thomas Gleixner
Cc: linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander
On 09/28/2018 02:03 PM, Julia Cartwright wrote:
> When PREEMPT_RT_FULL is enabled, all hrtimer expiry functions are
> deferred for execution into the context of ktimersoftd unless otherwise
> annotated.
>
> Deferring the expiry of the hrtimer used by the watchdog core, however,
> is a waste, as the callback does nothing but queue a kthread work item
> and wakeup watchdogd.
>
> It's worse than that, too: the deferral through ktimersoftd also means
> that for correct behavior a user must adjust the scheduling parameters
> of both watchdogd _and_ ktimersoftd, which is unnecessary and has other
> side effects (like causing unrelated expiry functions to execute at
> potentially elevated priority).
>
> Instead, mark the hrtimer used by the watchdog core as being _HARD to
> allow its execution directly from hardirq context. The work done in
> this expiry function is well-bounded and minimal.
>
> A user still must adjust the scheduling parameters of the watchdogd
> to be correct w.r.t. their application needs.
>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>
Acked-by: Guenter Roeck <linux@roeck-us.net>
> ---
> drivers/watchdog/watchdog_dev.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
> index ffbdc4642ea5..9c2b3e5cebdc 100644
> --- a/drivers/watchdog/watchdog_dev.c
> +++ b/drivers/watchdog/watchdog_dev.c
> @@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
> return -ENODEV;
>
> kthread_init_work(&wd_data->work, watchdog_ping_work);
> - hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> + hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
> wd_data->timer.function = watchdog_timer_expired;
>
> if (wdd->id == 0) {
>
* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
2018-09-28 21:03 ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
@ 2018-10-05 16:46 ` Sebastian Andrzej Siewior
2018-10-05 18:10 ` Andrea Parri
1 sibling, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-05 16:46 UTC (permalink / raw)
To: Julia Cartwright
Cc: Ingo Molnar, Thomas Gleixner, Peter Zijlstra, linux-kernel,
linux-rt-users, Steffen Trumtrar, Tim Sander, Guenter Roeck
On 2018-09-28 21:03:51 [+0000], Julia Cartwright wrote:
> In order to enable the queuing of kthread work items from hardirq
> context even when PREEMPT_RT_FULL is enabled, convert the worker
> spin_lock to a raw_spin_lock.
>
> This is only acceptable to do because the work performed under the lock
> is well-bounded and minimal.
>
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Sebastian
* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
2018-09-28 21:03 ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
` (2 preceding siblings ...)
2018-09-30 14:00 ` Guenter Roeck
@ 2018-10-05 16:52 ` Sebastian Andrzej Siewior
3 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-05 16:52 UTC (permalink / raw)
To: Julia Cartwright
Cc: Thomas Gleixner, linux-kernel, linux-rt-users, Steffen Trumtrar,
Tim Sander, Guenter Roeck
On 2018-09-28 21:03:51 [+0000], Julia Cartwright wrote:
> When PREEMPT_RT_FULL is enabled, all hrtimer expiry functions are
> deferred for execution into the context of ktimersoftd unless otherwise
> annotated.
…
> Signed-off-by: Julia Cartwright <julia@ni.com>
did s@HRTIMER_MODE_REL@HRTIMER_MODE_REL_HARD@ and then applied.
Thank you Julia.
Sebastian
* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
2018-09-28 21:03 ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
2018-10-05 16:46 ` Sebastian Andrzej Siewior
@ 2018-10-05 18:10 ` Andrea Parri
2018-10-09 10:56 ` Sebastian Andrzej Siewior
1 sibling, 1 reply; 27+ messages in thread
From: Andrea Parri @ 2018-10-05 18:10 UTC (permalink / raw)
To: Julia Cartwright
Cc: Ingo Molnar, Thomas Gleixner, Peter Zijlstra, linux-kernel,
linux-rt-users, Steffen Trumtrar, Tim Sander,
Sebastian Andrzej Siewior, Guenter Roeck
Hi Julia,
On Fri, Sep 28, 2018 at 09:03:51PM +0000, Julia Cartwright wrote:
> In order to enable the queuing of kthread work items from hardirq
> context even when PREEMPT_RT_FULL is enabled, convert the worker
> spin_lock to a raw_spin_lock.
>
> This is only acceptable to do because the work performed under the lock
> is well-bounded and minimal.
Clearly not my topic..., but out of curiosity: what do you mean by
"well-bounded" and "minimal"? Can you maybe point me to some doc?
Andrea
>
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>
> ---
> include/linux/kthread.h | 2 +-
> kernel/kthread.c | 42 ++++++++++++++++++++---------------------
> 2 files changed, 22 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/kthread.h b/include/linux/kthread.h
> index c1961761311d..ad292898f7f2 100644
> --- a/include/linux/kthread.h
> +++ b/include/linux/kthread.h
> @@ -85,7 +85,7 @@ enum {
>
> struct kthread_worker {
> unsigned int flags;
> - spinlock_t lock;
> + raw_spinlock_t lock;
> struct list_head work_list;
> struct list_head delayed_work_list;
> struct task_struct *task;
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 486dedbd9af5..c1d9ee6671c6 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -597,7 +597,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
> struct lock_class_key *key)
> {
> memset(worker, 0, sizeof(struct kthread_worker));
> - spin_lock_init(&worker->lock);
> + raw_spin_lock_init(&worker->lock);
> lockdep_set_class_and_name(&worker->lock, key, name);
> INIT_LIST_HEAD(&worker->work_list);
> INIT_LIST_HEAD(&worker->delayed_work_list);
> @@ -639,21 +639,21 @@ int kthread_worker_fn(void *worker_ptr)
>
> if (kthread_should_stop()) {
> __set_current_state(TASK_RUNNING);
> - spin_lock_irq(&worker->lock);
> + raw_spin_lock_irq(&worker->lock);
> worker->task = NULL;
> - spin_unlock_irq(&worker->lock);
> + raw_spin_unlock_irq(&worker->lock);
> return 0;
> }
>
> work = NULL;
> - spin_lock_irq(&worker->lock);
> + raw_spin_lock_irq(&worker->lock);
> if (!list_empty(&worker->work_list)) {
> work = list_first_entry(&worker->work_list,
> struct kthread_work, node);
> list_del_init(&work->node);
> }
> worker->current_work = work;
> - spin_unlock_irq(&worker->lock);
> + raw_spin_unlock_irq(&worker->lock);
>
> if (work) {
> __set_current_state(TASK_RUNNING);
> @@ -810,12 +810,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
> bool ret = false;
> unsigned long flags;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
> if (!queuing_blocked(worker, work)) {
> kthread_insert_work(worker, work, &worker->work_list);
> ret = true;
> }
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> return ret;
> }
> EXPORT_SYMBOL_GPL(kthread_queue_work);
> @@ -841,7 +841,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
> if (WARN_ON_ONCE(!worker))
> return;
>
> - spin_lock(&worker->lock);
> + raw_spin_lock(&worker->lock);
> /* Work must not be used with >1 worker, see kthread_queue_work(). */
> WARN_ON_ONCE(work->worker != worker);
>
> @@ -850,7 +850,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
> list_del_init(&work->node);
> kthread_insert_work(worker, work, &worker->work_list);
>
> - spin_unlock(&worker->lock);
> + raw_spin_unlock(&worker->lock);
> }
> EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
>
> @@ -906,14 +906,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
> unsigned long flags;
> bool ret = false;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
>
> if (!queuing_blocked(worker, work)) {
> __kthread_queue_delayed_work(worker, dwork, delay);
> ret = true;
> }
>
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> return ret;
> }
> EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
> @@ -949,7 +949,7 @@ void kthread_flush_work(struct kthread_work *work)
> if (!worker)
> return;
>
> - spin_lock_irq(&worker->lock);
> + raw_spin_lock_irq(&worker->lock);
> /* Work must not be used with >1 worker, see kthread_queue_work(). */
> WARN_ON_ONCE(work->worker != worker);
>
> @@ -961,7 +961,7 @@ void kthread_flush_work(struct kthread_work *work)
> else
> noop = true;
>
> - spin_unlock_irq(&worker->lock);
> + raw_spin_unlock_irq(&worker->lock);
>
> if (!noop)
> wait_for_completion(&fwork.done);
> @@ -994,9 +994,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
> * any queuing is blocked by setting the canceling counter.
> */
> work->canceling++;
> - spin_unlock_irqrestore(&worker->lock, *flags);
> + raw_spin_unlock_irqrestore(&worker->lock, *flags);
> del_timer_sync(&dwork->timer);
> - spin_lock_irqsave(&worker->lock, *flags);
> + raw_spin_lock_irqsave(&worker->lock, *flags);
> work->canceling--;
> }
>
> @@ -1043,7 +1043,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
> unsigned long flags;
> int ret = false;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
>
> /* Do not bother with canceling when never queued. */
> if (!work->worker)
> @@ -1060,7 +1060,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
> fast_queue:
> __kthread_queue_delayed_work(worker, dwork, delay);
> out:
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> return ret;
> }
> EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
> @@ -1074,7 +1074,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
> if (!worker)
> goto out;
>
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
> /* Work must not be used with >1 worker, see kthread_queue_work(). */
> WARN_ON_ONCE(work->worker != worker);
>
> @@ -1088,13 +1088,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
> * In the meantime, block any queuing by setting the canceling counter.
> */
> work->canceling++;
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> kthread_flush_work(work);
> - spin_lock_irqsave(&worker->lock, flags);
> + raw_spin_lock_irqsave(&worker->lock, flags);
> work->canceling--;
>
> out_fast:
> - spin_unlock_irqrestore(&worker->lock, flags);
> + raw_spin_unlock_irqrestore(&worker->lock, flags);
> out:
> return ret;
> }
> --
> 2.18.0
>
* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
2018-10-05 18:10 ` Andrea Parri
@ 2018-10-09 10:56 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-09 10:56 UTC (permalink / raw)
To: Andrea Parri
Cc: Julia Cartwright, Ingo Molnar, Thomas Gleixner, Peter Zijlstra,
linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
Guenter Roeck
On 2018-10-05 20:10:35 [+0200], Andrea Parri wrote:
>
> Clearly not my topic..., but out of curiosity: What do you mean by
> "well-bounded" and "minimal"? Can you maybe point me to some doc.?
It means that the lock is not held for an arbitrary amount of time,
for example by processing a list with a thousand items. Well-bounded
means not processing more than one or five (or so) items at a time.
> Andrea
Sebastian
* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
2019-02-12 16:25 [PATCH 1/2] kthread: convert worker lock to raw spinlock Sebastian Andrzej Siewior
@ 2019-02-13 12:13 ` Petr Mladek
0 siblings, 0 replies; 27+ messages in thread
From: Petr Mladek @ 2019-02-13 12:13 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: linux-kernel, Ingo Molnar, tglx, Julia Cartwright, Guenter Roeck,
Steffen Trumtrar, Tim Sander
On Tue 2019-02-12 17:25:53, Sebastian Andrzej Siewior wrote:
> From: Julia Cartwright <julia@ni.com>
>
> In order to enable the queuing of kthread work items from hardirq
> context even when PREEMPT_RT_FULL is enabled, convert the worker
> spin_lock to a raw_spin_lock.
>
> This is only acceptable to do because the work performed under the lock
> is well-bounded and minimal.
I can confirm that it is well-bounded and minimal. The most
expensive function is probably add_timer(), called from
__kthread_queue_delayed_work(). It might spin a bit
to get timer->base->lock.
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Best Regards,
Petr
* [PATCH 1/2] kthread: convert worker lock to raw spinlock
@ 2019-02-12 16:25 Sebastian Andrzej Siewior
2019-02-13 12:13 ` Petr Mladek
0 siblings, 1 reply; 27+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-02-12 16:25 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, tglx, Julia Cartwright, Sebastian Andrzej Siewior,
Guenter Roeck, Steffen Trumtrar, Tim Sander
From: Julia Cartwright <julia@ni.com>
In order to enable the queuing of kthread work items from hardirq
context even when PREEMPT_RT_FULL is enabled, convert the worker
spin_lock to a raw_spin_lock.
This is only acceptable to do because the work performed under the lock
is well-bounded and minimal.
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Guenter Roeck <linux@roeck-us.net>
Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Reported-by: Tim Sander <tim@krieglstein.org>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/kthread.h | 4 ++--
kernel/kthread.c | 42 ++++++++++++++++++++---------------------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index c1961761311db..6b8c064f0cbcd 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -85,7 +85,7 @@ enum {
struct kthread_worker {
unsigned int flags;
- spinlock_t lock;
+ raw_spinlock_t lock;
struct list_head work_list;
struct list_head delayed_work_list;
struct task_struct *task;
@@ -106,7 +106,7 @@ struct kthread_delayed_work {
};
#define KTHREAD_WORKER_INIT(worker) { \
- .lock = __SPIN_LOCK_UNLOCKED((worker).lock), \
+ .lock = __RAW_SPIN_LOCK_UNLOCKED((worker).lock), \
.work_list = LIST_HEAD_INIT((worker).work_list), \
.delayed_work_list = LIST_HEAD_INIT((worker).delayed_work_list),\
}
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 087d18d771b53..5641b55783a6f 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -599,7 +599,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
struct lock_class_key *key)
{
memset(worker, 0, sizeof(struct kthread_worker));
- spin_lock_init(&worker->lock);
+ raw_spin_lock_init(&worker->lock);
lockdep_set_class_and_name(&worker->lock, key, name);
INIT_LIST_HEAD(&worker->work_list);
INIT_LIST_HEAD(&worker->delayed_work_list);
@@ -641,21 +641,21 @@ int kthread_worker_fn(void *worker_ptr)
if (kthread_should_stop()) {
__set_current_state(TASK_RUNNING);
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
worker->task = NULL;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
return 0;
}
work = NULL;
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
if (!list_empty(&worker->work_list)) {
work = list_first_entry(&worker->work_list,
struct kthread_work, node);
list_del_init(&work->node);
}
worker->current_work = work;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
if (work) {
__set_current_state(TASK_RUNNING);
@@ -812,12 +812,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
bool ret = false;
unsigned long flags;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
if (!queuing_blocked(worker, work)) {
kthread_insert_work(worker, work, &worker->work_list);
ret = true;
}
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_queue_work);
@@ -843,7 +843,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
if (WARN_ON_ONCE(!worker))
return;
- spin_lock(&worker->lock);
+ raw_spin_lock(&worker->lock);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -852,7 +852,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
list_del_init(&work->node);
kthread_insert_work(worker, work, &worker->work_list);
- spin_unlock(&worker->lock);
+ raw_spin_unlock(&worker->lock);
}
EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
@@ -908,14 +908,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
unsigned long flags;
bool ret = false;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
if (!queuing_blocked(worker, work)) {
__kthread_queue_delayed_work(worker, dwork, delay);
ret = true;
}
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
@@ -951,7 +951,7 @@ void kthread_flush_work(struct kthread_work *work)
if (!worker)
return;
- spin_lock_irq(&worker->lock);
+ raw_spin_lock_irq(&worker->lock);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -963,7 +963,7 @@ void kthread_flush_work(struct kthread_work *work)
else
noop = true;
- spin_unlock_irq(&worker->lock);
+ raw_spin_unlock_irq(&worker->lock);
if (!noop)
wait_for_completion(&fwork.done);
@@ -996,9 +996,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
* any queuing is blocked by setting the canceling counter.
*/
work->canceling++;
- spin_unlock_irqrestore(&worker->lock, *flags);
+ raw_spin_unlock_irqrestore(&worker->lock, *flags);
del_timer_sync(&dwork->timer);
- spin_lock_irqsave(&worker->lock, *flags);
+ raw_spin_lock_irqsave(&worker->lock, *flags);
work->canceling--;
}
@@ -1045,7 +1045,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
unsigned long flags;
int ret = false;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
/* Do not bother with canceling when never queued. */
if (!work->worker)
@@ -1062,7 +1062,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
fast_queue:
__kthread_queue_delayed_work(worker, dwork, delay);
out:
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
@@ -1076,7 +1076,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
if (!worker)
goto out;
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
/* Work must not be used with >1 worker, see kthread_queue_work(). */
WARN_ON_ONCE(work->worker != worker);
@@ -1090,13 +1090,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
* In the meantime, block any queuing by setting the canceling counter.
*/
work->canceling++;
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
kthread_flush_work(work);
- spin_lock_irqsave(&worker->lock, flags);
+ raw_spin_lock_irqsave(&worker->lock, flags);
work->canceling--;
out_fast:
- spin_unlock_irqrestore(&worker->lock, flags);
+ raw_spin_unlock_irqrestore(&worker->lock, flags);
out:
return ret;
}
--
2.20.1
Thread overview: 27+ messages
2018-09-18 13:21 [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering Steffen Trumtrar
2018-09-18 13:46 ` Guenter Roeck
2018-09-19 6:46 ` Steffen Trumtrar
2018-09-19 19:43 ` Guenter Roeck
2018-09-20 20:48 ` Julia Cartwright
2018-09-21 13:34 ` Guenter Roeck
2018-09-21 16:42 ` Julia Cartwright
2018-09-21 20:21 ` Guenter Roeck
2018-09-24 7:24 ` Steffen Trumtrar
2018-09-24 7:24 ` Steffen Trumtrar
2018-09-28 21:03 ` [PATCH 0/2] Fix watchdogd wakeup deferral on RT Julia Cartwright
2018-09-28 21:03 ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
2018-10-05 16:46 ` Sebastian Andrzej Siewior
2018-10-05 18:10 ` Andrea Parri
2018-10-09 10:56 ` Sebastian Andrzej Siewior
2018-09-28 21:03 ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
2018-09-28 22:38 ` kbuild test robot
2018-09-29 6:38 ` Thomas Gleixner
2018-09-29 22:13 ` Sebastian Andrzej Siewior
2018-09-30 1:41 ` [kbuild-all] " Li, Philip
2018-09-28 23:20 ` kbuild test robot
2018-09-30 14:00 ` Guenter Roeck
2018-10-05 16:52 ` Sebastian Andrzej Siewior
2018-09-20 8:18 ` [BUG] dw_wdt watchdog on linux-rt 4.18.5-rt4 not triggering Tim Sander
2018-09-18 18:14 ` Julia Cartwright
2019-02-12 16:25 [PATCH 1/2] kthread: convert worker lock to raw spinlock Sebastian Andrzej Siewior
2019-02-13 12:13 ` Petr Mladek