linux-kernel.vger.kernel.org archive mirror
* [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-28 21:03 ` [PATCH 0/2] Fix watchdogd wakeup deferral on RT Julia Cartwright
  2018-09-28 21:03   ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
@ 2018-09-28 21:03   ` Julia Cartwright
  2018-09-28 22:38     ` kbuild test robot
                       ` (3 more replies)
  1 sibling, 4 replies; 15+ messages in thread
From: Julia Cartwright @ 2018-09-28 21:03 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, Thomas Gleixner
  Cc: linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
	Guenter Roeck

When PREEMPT_RT_FULL is enabled, all hrtimer expiry functions are
deferred for execution into the context of ktimersoftd unless otherwise
annotated.

Deferring the expiry of the hrtimer used by the watchdog core, however,
is a waste, as the callback does nothing but queue a kthread work item
and wake up watchdogd.

It's worse than that, too: the deferral through ktimersoftd also means
that for correct behavior a user must adjust the scheduling parameters
of both watchdogd _and_ ktimersoftd, which is unnecessary and has other
side effects (like causing unrelated expiry functions to execute at
potentially elevated priority).

Instead, mark the hrtimer used by the watchdog core as being _HARD to
allow its execution directly from hardirq context.  The work done in
this expiry function is well-bounded and minimal.
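
For reference, the expiry function in question is roughly the following
(a paraphrased sketch of drivers/watchdog/watchdog_dev.c, not a
verbatim copy):

	static enum hrtimer_restart watchdog_timer_expired(struct hrtimer *timer)
	{
		struct watchdog_core_data *wd_data =
			container_of(timer, struct watchdog_core_data, timer);

		/* O(1) work: queue the ping work item and wake watchdogd. */
		kthread_queue_work(watchdog_kworker, &wd_data->work);
		return HRTIMER_NORESTART;
	}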

A user still must adjust the scheduling parameters of the watchdogd
to be correct w.r.t. their application needs.

Cc: Guenter Roeck <linux@roeck-us.net>
Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Reported-by: Tim Sander <tim@krieglstein.org>
Signed-off-by: Julia Cartwright <julia@ni.com>
---
 drivers/watchdog/watchdog_dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
index ffbdc4642ea5..9c2b3e5cebdc 100644
--- a/drivers/watchdog/watchdog_dev.c
+++ b/drivers/watchdog/watchdog_dev.c
@@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
 		return -ENODEV;
 
 	kthread_init_work(&wd_data->work, watchdog_ping_work);
-	hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	wd_data->timer.function = watchdog_timer_expired;
 
 	if (wdd->id == 0) {
-- 
2.18.0



* [PATCH 1/2] kthread: convert worker lock to raw spinlock
  2018-09-28 21:03 ` [PATCH 0/2] Fix watchdogd wakeup deferral on RT Julia Cartwright
@ 2018-09-28 21:03   ` Julia Cartwright
  2018-10-05 16:46     ` Sebastian Andrzej Siewior
  2018-10-05 18:10     ` Andrea Parri
  2018-09-28 21:03   ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
  1 sibling, 2 replies; 15+ messages in thread
From: Julia Cartwright @ 2018-09-28 21:03 UTC (permalink / raw)
  To: Ingo Molnar, Thomas Gleixner, Peter Zijlstra
  Cc: linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
	Sebastian Andrzej Siewior, Guenter Roeck

In order to enable the queuing of kthread work items from hardirq
context even when PREEMPT_RT_FULL is enabled, convert the worker
spin_lock to a raw_spin_lock.

This is only acceptable to do because the work performed under the lock
is well-bounded and minimal.
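
As a minimal usage sketch (hypothetical names: my_irq_handler,
my_worker, my_work, all assumed to be set up elsewhere), this is the
kind of hardirq-context caller the conversion enables; on
PREEMPT_RT_FULL a plain spinlock_t is a sleeping lock and could not be
taken here:

	static irqreturn_t my_irq_handler(int irq, void *data)
	{
		/* Takes worker->lock, now a raw_spin_lock, for O(1) work. */
		kthread_queue_work(&my_worker, &my_work);
		return IRQ_HANDLED;
	}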

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Guenter Roeck <linux@roeck-us.net>
Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Reported-by: Tim Sander <tim@krieglstein.org>
Signed-off-by: Julia Cartwright <julia@ni.com>
---
 include/linux/kthread.h |  2 +-
 kernel/kthread.c        | 42 ++++++++++++++++++++---------------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index c1961761311d..ad292898f7f2 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -85,7 +85,7 @@ enum {
 
 struct kthread_worker {
 	unsigned int		flags;
-	spinlock_t		lock;
+	raw_spinlock_t		lock;
 	struct list_head	work_list;
 	struct list_head	delayed_work_list;
 	struct task_struct	*task;
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 486dedbd9af5..c1d9ee6671c6 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -597,7 +597,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
 				struct lock_class_key *key)
 {
 	memset(worker, 0, sizeof(struct kthread_worker));
-	spin_lock_init(&worker->lock);
+	raw_spin_lock_init(&worker->lock);
 	lockdep_set_class_and_name(&worker->lock, key, name);
 	INIT_LIST_HEAD(&worker->work_list);
 	INIT_LIST_HEAD(&worker->delayed_work_list);
@@ -639,21 +639,21 @@ int kthread_worker_fn(void *worker_ptr)
 
 	if (kthread_should_stop()) {
 		__set_current_state(TASK_RUNNING);
-		spin_lock_irq(&worker->lock);
+		raw_spin_lock_irq(&worker->lock);
 		worker->task = NULL;
-		spin_unlock_irq(&worker->lock);
+		raw_spin_unlock_irq(&worker->lock);
 		return 0;
 	}
 
 	work = NULL;
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	if (!list_empty(&worker->work_list)) {
 		work = list_first_entry(&worker->work_list,
 					struct kthread_work, node);
 		list_del_init(&work->node);
 	}
 	worker->current_work = work;
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (work) {
 		__set_current_state(TASK_RUNNING);
@@ -810,12 +810,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
 	bool ret = false;
 	unsigned long flags;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	if (!queuing_blocked(worker, work)) {
 		kthread_insert_work(worker, work, &worker->work_list);
 		ret = true;
 	}
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_work);
@@ -841,7 +841,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
 	if (WARN_ON_ONCE(!worker))
 		return;
 
-	spin_lock(&worker->lock);
+	raw_spin_lock(&worker->lock);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -850,7 +850,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
 	list_del_init(&work->node);
 	kthread_insert_work(worker, work, &worker->work_list);
 
-	spin_unlock(&worker->lock);
+	raw_spin_unlock(&worker->lock);
 }
 EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
 
@@ -906,14 +906,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
 	unsigned long flags;
 	bool ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	if (!queuing_blocked(worker, work)) {
 		__kthread_queue_delayed_work(worker, dwork, delay);
 		ret = true;
 	}
 
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
@@ -949,7 +949,7 @@ void kthread_flush_work(struct kthread_work *work)
 	if (!worker)
 		return;
 
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -961,7 +961,7 @@ void kthread_flush_work(struct kthread_work *work)
 	else
 		noop = true;
 
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (!noop)
 		wait_for_completion(&fwork.done);
@@ -994,9 +994,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
 		 * any queuing is blocked by setting the canceling counter.
 		 */
 		work->canceling++;
-		spin_unlock_irqrestore(&worker->lock, *flags);
+		raw_spin_unlock_irqrestore(&worker->lock, *flags);
 		del_timer_sync(&dwork->timer);
-		spin_lock_irqsave(&worker->lock, *flags);
+		raw_spin_lock_irqsave(&worker->lock, *flags);
 		work->canceling--;
 	}
 
@@ -1043,7 +1043,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 	unsigned long flags;
 	int ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	/* Do not bother with canceling when never queued. */
 	if (!work->worker)
@@ -1060,7 +1060,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 fast_queue:
 	__kthread_queue_delayed_work(worker, dwork, delay);
 out:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
@@ -1074,7 +1074,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
 	if (!worker)
 		goto out;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -1088,13 +1088,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
 	 * In the meantime, block any queuing by setting the canceling counter.
 	 */
 	work->canceling++;
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	kthread_flush_work(work);
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	work->canceling--;
 
 out_fast:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 out:
 	return ret;
 }
-- 
2.18.0



* [PATCH 0/2] Fix watchdogd wakeup deferral on RT
       [not found] <73in2vl5mj.fsf@pengutronix.de>
@ 2018-09-28 21:03 ` Julia Cartwright
  2018-09-28 21:03   ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
  2018-09-28 21:03   ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
  0 siblings, 2 replies; 15+ messages in thread
From: Julia Cartwright @ 2018-09-28 21:03 UTC (permalink / raw)
  To: Ingo Molnar, Thomas Gleixner, Peter Zijlstra, Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-rt-users, Guenter Roeck, Steffen Trumtrar,
	Tim Sander

The following two patches solve an issue reported by Steffen Trumtrar
and Tim Sander to the linux-rt-users mailing list: the wakeup of the
watchdogd thread can be starved by an RT task, because the hrtimer
expiry is deferred through ktimersoftd (on PREEMPT_RT_FULL).

The first patch adjusts the kthread_worker locking to make use of a raw
spinlock, making it suitable for work item queueing from hardirq context
on PREEMPT_RT.  This patch can be applied directly to mainline without
any functional change.

The second patch adjusts the hrtimer used by the watchdog core to be a
_HARD timer (and thus not deferred through ktimersoftd w/ PREEMPT_RT).
This patch depends on hrtimer patches carried in the RT patch, and
should therefore land there.

Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Cc: Tim Sander <tim@krieglstein.org>

Julia Cartwright (2):
  kthread: convert worker lock to raw spinlock
  watchdog, rt: prevent deferral of watchdogd wakeup

 drivers/watchdog/watchdog_dev.c |  2 +-
 include/linux/kthread.h         |  2 +-
 kernel/kthread.c                | 42 ++++++++++++++++-----------------
 3 files changed, 23 insertions(+), 23 deletions(-)

-- 
2.18.0



* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-28 21:03   ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
@ 2018-09-28 22:38     ` kbuild test robot
  2018-09-29  6:38       ` Thomas Gleixner
  2018-09-28 23:20     ` kbuild test robot
                       ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: kbuild test robot @ 2018-09-28 22:38 UTC (permalink / raw)
  To: Julia Cartwright
  Cc: kbuild-all, Sebastian Andrzej Siewior, Thomas Gleixner,
	linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
	Guenter Roeck

[-- Attachment #1: Type: text/plain, Size: 4299 bytes --]

Hi Julia,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Julia-Cartwright/kthread-convert-worker-lock-to-raw-spinlock/20180929-052522
config: i386-randconfig-x008-201838 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   drivers//watchdog/watchdog_dev.c: In function 'watchdog_cdev_register':
>> drivers//watchdog/watchdog_dev.c:948:49: error: 'HRTIMER_MODE_REL_HARD' undeclared (first use in this function); did you mean 'HRTIMER_MODE_REL_SOFT'?
     hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
                                                    ^~~~~~~~~~~~~~~~~~~~~
                                                    HRTIMER_MODE_REL_SOFT
   drivers//watchdog/watchdog_dev.c:948:49: note: each undeclared identifier is reported only once for each function it appears in

vim +948 drivers//watchdog/watchdog_dev.c

   919	
   920	/*
   921	 *	watchdog_cdev_register: register watchdog character device
   922	 *	@wdd: watchdog device
   923	 *	@devno: character device number
   924	 *
   925	 *	Register a watchdog character device including handling the legacy
   926	 *	/dev/watchdog node. /dev/watchdog is actually a miscdevice and
   927	 *	thus we set it up like that.
   928	 */
   929	
   930	static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
   931	{
   932		struct watchdog_core_data *wd_data;
   933		int err;
   934	
   935		wd_data = kzalloc(sizeof(struct watchdog_core_data), GFP_KERNEL);
   936		if (!wd_data)
   937			return -ENOMEM;
   938		kref_init(&wd_data->kref);
   939		mutex_init(&wd_data->lock);
   940	
   941		wd_data->wdd = wdd;
   942		wdd->wd_data = wd_data;
   943	
   944		if (IS_ERR_OR_NULL(watchdog_kworker))
   945			return -ENODEV;
   946	
   947		kthread_init_work(&wd_data->work, watchdog_ping_work);
 > 948		hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
   949		wd_data->timer.function = watchdog_timer_expired;
   950	
   951		if (wdd->id == 0) {
   952			old_wd_data = wd_data;
   953			watchdog_miscdev.parent = wdd->parent;
   954			err = misc_register(&watchdog_miscdev);
   955			if (err != 0) {
   956				pr_err("%s: cannot register miscdev on minor=%d (err=%d).\n",
   957					wdd->info->identity, WATCHDOG_MINOR, err);
   958				if (err == -EBUSY)
   959					pr_err("%s: a legacy watchdog module is probably present.\n",
   960						wdd->info->identity);
   961				old_wd_data = NULL;
   962				kfree(wd_data);
   963				return err;
   964			}
   965		}
   966	
   967		/* Fill in the data structures */
   968		cdev_init(&wd_data->cdev, &watchdog_fops);
   969		wd_data->cdev.owner = wdd->ops->owner;
   970	
   971		/* Add the device */
   972		err = cdev_add(&wd_data->cdev, devno, 1);
   973		if (err) {
   974			pr_err("watchdog%d unable to add device %d:%d\n",
   975				wdd->id,  MAJOR(watchdog_devt), wdd->id);
   976			if (wdd->id == 0) {
   977				misc_deregister(&watchdog_miscdev);
   978				old_wd_data = NULL;
   979				kref_put(&wd_data->kref, watchdog_core_data_release);
   980			}
   981			return err;
   982		}
   983	
   984		/* Record time of most recent heartbeat as 'just before now'. */
   985		wd_data->last_hw_keepalive = ktime_sub(ktime_get(), 1);
   986	
   987		/*
   988		 * If the watchdog is running, prevent its driver from being unloaded,
   989		 * and schedule an immediate ping.
   990		 */
   991		if (watchdog_hw_running(wdd)) {
   992			__module_get(wdd->ops->owner);
   993			kref_get(&wd_data->kref);
   994			if (handle_boot_enabled)
   995				hrtimer_start(&wd_data->timer, 0, HRTIMER_MODE_REL);
   996			else
   997				pr_info("watchdog%d running and kernel based pre-userspace handler disabled\n",
   998					wdd->id);
   999		}
  1000	
  1001		return 0;
  1002	}
  1003	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 29470 bytes --]


* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-28 21:03   ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
  2018-09-28 22:38     ` kbuild test robot
@ 2018-09-28 23:20     ` kbuild test robot
  2018-09-30 14:00     ` Guenter Roeck
  2018-10-05 16:52     ` Sebastian Andrzej Siewior
  3 siblings, 0 replies; 15+ messages in thread
From: kbuild test robot @ 2018-09-28 23:20 UTC (permalink / raw)
  To: Julia Cartwright
  Cc: kbuild-all, Sebastian Andrzej Siewior, Thomas Gleixner,
	linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
	Guenter Roeck

[-- Attachment #1: Type: text/plain, Size: 4217 bytes --]

Hi Julia,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Julia-Cartwright/kthread-convert-worker-lock-to-raw-spinlock/20180929-052522
config: i386-randconfig-s0-201838 (attached as .config)
compiler: gcc-6 (Debian 6.4.0-9) 6.4.0 20171026
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   drivers//watchdog/watchdog_dev.c: In function 'watchdog_cdev_register':
>> drivers//watchdog/watchdog_dev.c:948:49: error: 'HRTIMER_MODE_REL_HARD' undeclared (first use in this function)
     hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
                                                    ^~~~~~~~~~~~~~~~~~~~~
   drivers//watchdog/watchdog_dev.c:948:49: note: each undeclared identifier is reported only once for each function it appears in

vim +/HRTIMER_MODE_REL_HARD +948 drivers//watchdog/watchdog_dev.c

   919	
   920	/*
   921	 *	watchdog_cdev_register: register watchdog character device
   922	 *	@wdd: watchdog device
   923	 *	@devno: character device number
   924	 *
   925	 *	Register a watchdog character device including handling the legacy
   926	 *	/dev/watchdog node. /dev/watchdog is actually a miscdevice and
   927	 *	thus we set it up like that.
   928	 */
   929	
   930	static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
   931	{
   932		struct watchdog_core_data *wd_data;
   933		int err;
   934	
   935		wd_data = kzalloc(sizeof(struct watchdog_core_data), GFP_KERNEL);
   936		if (!wd_data)
   937			return -ENOMEM;
   938		kref_init(&wd_data->kref);
   939		mutex_init(&wd_data->lock);
   940	
   941		wd_data->wdd = wdd;
   942		wdd->wd_data = wd_data;
   943	
   944		if (IS_ERR_OR_NULL(watchdog_kworker))
   945			return -ENODEV;
   946	
   947		kthread_init_work(&wd_data->work, watchdog_ping_work);
 > 948		hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
   949		wd_data->timer.function = watchdog_timer_expired;
   950	
   951		if (wdd->id == 0) {
   952			old_wd_data = wd_data;
   953			watchdog_miscdev.parent = wdd->parent;
   954			err = misc_register(&watchdog_miscdev);
   955			if (err != 0) {
   956				pr_err("%s: cannot register miscdev on minor=%d (err=%d).\n",
   957					wdd->info->identity, WATCHDOG_MINOR, err);
   958				if (err == -EBUSY)
   959					pr_err("%s: a legacy watchdog module is probably present.\n",
   960						wdd->info->identity);
   961				old_wd_data = NULL;
   962				kfree(wd_data);
   963				return err;
   964			}
   965		}
   966	
   967		/* Fill in the data structures */
   968		cdev_init(&wd_data->cdev, &watchdog_fops);
   969		wd_data->cdev.owner = wdd->ops->owner;
   970	
   971		/* Add the device */
   972		err = cdev_add(&wd_data->cdev, devno, 1);
   973		if (err) {
   974			pr_err("watchdog%d unable to add device %d:%d\n",
   975				wdd->id,  MAJOR(watchdog_devt), wdd->id);
   976			if (wdd->id == 0) {
   977				misc_deregister(&watchdog_miscdev);
   978				old_wd_data = NULL;
   979				kref_put(&wd_data->kref, watchdog_core_data_release);
   980			}
   981			return err;
   982		}
   983	
   984		/* Record time of most recent heartbeat as 'just before now'. */
   985		wd_data->last_hw_keepalive = ktime_sub(ktime_get(), 1);
   986	
   987		/*
   988		 * If the watchdog is running, prevent its driver from being unloaded,
   989		 * and schedule an immediate ping.
   990		 */
   991		if (watchdog_hw_running(wdd)) {
   992			__module_get(wdd->ops->owner);
   993			kref_get(&wd_data->kref);
   994			if (handle_boot_enabled)
   995				hrtimer_start(&wd_data->timer, 0, HRTIMER_MODE_REL);
   996			else
   997				pr_info("watchdog%d running and kernel based pre-userspace handler disabled\n",
   998					wdd->id);
   999		}
  1000	
  1001		return 0;
  1002	}
  1003	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 31829 bytes --]


* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-28 22:38     ` kbuild test robot
@ 2018-09-29  6:38       ` Thomas Gleixner
  2018-09-29 22:13         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Gleixner @ 2018-09-29  6:38 UTC (permalink / raw)
  To: kbuild test robot
  Cc: Julia Cartwright, kbuild-all, Sebastian Andrzej Siewior,
	linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
	Guenter Roeck

On Sat, 29 Sep 2018, kbuild test robot wrote:
> [also build test ERROR on v4.19-rc5 next-20180928]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 

It's against the RT tree, so it won't work against next or upstream. I
think it would be good to have a machine-parseable tag to describe which
tree/branch/commit patches are applicable to. If that tag is missing, the
bot can assume it's against upstream/next.

Thanks,

	tglx


* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-29  6:38       ` Thomas Gleixner
@ 2018-09-29 22:13         ` Sebastian Andrzej Siewior
  2018-09-30  1:41           ` [kbuild-all] " Li, Philip
  0 siblings, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-09-29 22:13 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: kbuild test robot, Julia Cartwright, kbuild-all, linux-kernel,
	linux-rt-users, Steffen Trumtrar, Tim Sander, Guenter Roeck

On 2018-09-29 08:38:55 [+0200], Thomas Gleixner wrote:
> On Sat, 29 Sep 2018, kbuild test robot wrote:
> > [also build test ERROR on v4.19-rc5 next-20180928]
> > [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> > 
> 
> It's against the RT tree, so it won't work against next or upstream. I
> think it would be good to have a machine parseable tag to describe which
> tree/branch/commit patches are applicable to. If that tag is missing, the
> bot can assume it's against upstream/next.

I asked the testing team to either ignore patches tagged [PATCH RT] or
test them against the RT tree. I think it worked, since I haven't seen
those mails.

> Thanks,
> 
> 	tglx

Sebastian


* RE: [kbuild-all] [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-29 22:13         ` Sebastian Andrzej Siewior
@ 2018-09-30  1:41           ` Li, Philip
  0 siblings, 0 replies; 15+ messages in thread
From: Li, Philip @ 2018-09-30  1:41 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, Thomas Gleixner
  Cc: linux-rt-users, Julia Cartwright, linux-kernel, kbuild-all,
	Tim Sander, Steffen Trumtrar, Guenter Roeck, lkp

> Subject: Re: [kbuild-all] [PATCH RT 2/2] watchdog, rt: prevent deferral of
> watchdogd wakeup
> 
> On 2018-09-29 08:38:55 [+0200], Thomas Gleixner wrote:
> > On Sat, 29 Sep 2018, kbuild test robot wrote:
> > > [also build test ERROR on v4.19-rc5 next-20180928]
> > > [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system]
> > >
> >
> > It's against the RT tree, so it won't work against next or upstream. I
> > think it would be good to have a machine parseable tag to describe which
> > tree/branch/commit patches are applicable to. If that tag is missing, the
> > bot can assume it's against upstream/next.
> 
> I asked the testing team to ignore patches or test against the RT tree
> if the patch is tagged [PATCH RT]. I think it worked since I haven't
> seen those mails.
Thanks, Sebastian, for the input; we will consider adding this to our TODO.

> 
> > Thanks,
> >
> > 	tglx
> 
> Sebastian
> _______________________________________________
> kbuild-all mailing list
> kbuild-all@lists.01.org
> https://lists.01.org/mailman/listinfo/kbuild-all


* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-28 21:03   ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
  2018-09-28 22:38     ` kbuild test robot
  2018-09-28 23:20     ` kbuild test robot
@ 2018-09-30 14:00     ` Guenter Roeck
  2018-10-05 16:52     ` Sebastian Andrzej Siewior
  3 siblings, 0 replies; 15+ messages in thread
From: Guenter Roeck @ 2018-09-30 14:00 UTC (permalink / raw)
  To: Julia Cartwright, Sebastian Andrzej Siewior, Thomas Gleixner
  Cc: linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander

On 09/28/2018 02:03 PM, Julia Cartwright wrote:
> When PREEMPT_RT_FULL is enabled, all hrtimer expiry functions are
> deferred for execution into the context of ktimersoftd unless otherwise
> annotated.
> 
> Deferring the expiry of the hrtimer used by the watchdog core, however,
> is a waste, as the callback does nothing but queue a kthread work item
> and wake up watchdogd.
> 
> It's worse than that, too: the deferral through ktimersoftd also means
> that for correct behavior a user must adjust the scheduling parameters
> of both watchdogd _and_ ktimersoftd, which is unnecessary and has other
> side effects (like causing unrelated expiry functions to execute at
> potentially elevated priority).
> 
> Instead, mark the hrtimer used by the watchdog core as being _HARD to
> allow its execution directly from hardirq context.  The work done in
> this expiry function is well-bounded and minimal.
> 
> A user still must adjust the scheduling parameters of the watchdogd
> to be correct w.r.t. their application needs.
> 
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>

Acked-by: Guenter Roeck <linux@roeck-us.net>

> ---
>   drivers/watchdog/watchdog_dev.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
> index ffbdc4642ea5..9c2b3e5cebdc 100644
> --- a/drivers/watchdog/watchdog_dev.c
> +++ b/drivers/watchdog/watchdog_dev.c
> @@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
>   		return -ENODEV;
>   
>   	kthread_init_work(&wd_data->work, watchdog_ping_work);
> -	hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> +	hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
>   	wd_data->timer.function = watchdog_timer_expired;
>   
>   	if (wdd->id == 0) {
> 



* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
  2018-09-28 21:03   ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
@ 2018-10-05 16:46     ` Sebastian Andrzej Siewior
  2018-10-05 18:10     ` Andrea Parri
  1 sibling, 0 replies; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-05 16:46 UTC (permalink / raw)
  To: Julia Cartwright
  Cc: Ingo Molnar, Thomas Gleixner, Peter Zijlstra, linux-kernel,
	linux-rt-users, Steffen Trumtrar, Tim Sander, Guenter Roeck

On 2018-09-28 21:03:51 [+0000], Julia Cartwright wrote:
> In order to enable the queuing of kthread work items from hardirq
> context even when PREEMPT_RT_FULL is enabled, convert the worker
> spin_lock to a raw_spin_lock.
> 
> This is only acceptable to do because the work performed under the lock
> is well-bounded and minimal.
> 
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>

Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Sebastian


* Re: [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup
  2018-09-28 21:03   ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
                       ` (2 preceding siblings ...)
  2018-09-30 14:00     ` Guenter Roeck
@ 2018-10-05 16:52     ` Sebastian Andrzej Siewior
  3 siblings, 0 replies; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-05 16:52 UTC (permalink / raw)
  To: Julia Cartwright
  Cc: Thomas Gleixner, linux-kernel, linux-rt-users, Steffen Trumtrar,
	Tim Sander, Guenter Roeck

On 2018-09-28 21:03:51 [+0000], Julia Cartwright wrote:
> When PREEMPT_RT_FULL is enabled, all hrtimer expiry functions are
> deferred for execution into the context of ktimersoftd unless otherwise
> annotated.
> Signed-off-by: Julia Cartwright <julia@ni.com>

did s@HRTIMER_MODE_REL@HRTIMER_MODE_REL_HARD@ and then applied.

Thank you Julia.

Sebastian


* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
  2018-09-28 21:03   ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
  2018-10-05 16:46     ` Sebastian Andrzej Siewior
@ 2018-10-05 18:10     ` Andrea Parri
  2018-10-09 10:56       ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 15+ messages in thread
From: Andrea Parri @ 2018-10-05 18:10 UTC (permalink / raw)
  To: Julia Cartwright
  Cc: Ingo Molnar, Thomas Gleixner, Peter Zijlstra, linux-kernel,
	linux-rt-users, Steffen Trumtrar, Tim Sander,
	Sebastian Andrzej Siewior, Guenter Roeck

Hi Julia,

On Fri, Sep 28, 2018 at 09:03:51PM +0000, Julia Cartwright wrote:
> In order to enable the queuing of kthread work items from hardirq
> context even when PREEMPT_RT_FULL is enabled, convert the worker
> spin_lock to a raw_spin_lock.
> 
> This is only acceptable to do because the work performed under the lock
> is well-bounded and minimal.

Clearly not my topic..., but out of curiosity:  What do you mean by
"well-bounded" and "minimal"?  Can you maybe point me to some doc.?

  Andrea


> 
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>
> ---
>  include/linux/kthread.h |  2 +-
>  kernel/kthread.c        | 42 ++++++++++++++++++++---------------------
>  2 files changed, 22 insertions(+), 22 deletions(-)
> 
> diff --git a/include/linux/kthread.h b/include/linux/kthread.h
> index c1961761311d..ad292898f7f2 100644
> --- a/include/linux/kthread.h
> +++ b/include/linux/kthread.h
> @@ -85,7 +85,7 @@ enum {
>  
>  struct kthread_worker {
>  	unsigned int		flags;
> -	spinlock_t		lock;
> +	raw_spinlock_t		lock;
>  	struct list_head	work_list;
>  	struct list_head	delayed_work_list;
>  	struct task_struct	*task;
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 486dedbd9af5..c1d9ee6671c6 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -597,7 +597,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
>  				struct lock_class_key *key)
>  {
>  	memset(worker, 0, sizeof(struct kthread_worker));
> -	spin_lock_init(&worker->lock);
> +	raw_spin_lock_init(&worker->lock);
>  	lockdep_set_class_and_name(&worker->lock, key, name);
>  	INIT_LIST_HEAD(&worker->work_list);
>  	INIT_LIST_HEAD(&worker->delayed_work_list);
> @@ -639,21 +639,21 @@ int kthread_worker_fn(void *worker_ptr)
>  
>  	if (kthread_should_stop()) {
>  		__set_current_state(TASK_RUNNING);
> -		spin_lock_irq(&worker->lock);
> +		raw_spin_lock_irq(&worker->lock);
>  		worker->task = NULL;
> -		spin_unlock_irq(&worker->lock);
> +		raw_spin_unlock_irq(&worker->lock);
>  		return 0;
>  	}
>  
>  	work = NULL;
> -	spin_lock_irq(&worker->lock);
> +	raw_spin_lock_irq(&worker->lock);
>  	if (!list_empty(&worker->work_list)) {
>  		work = list_first_entry(&worker->work_list,
>  					struct kthread_work, node);
>  		list_del_init(&work->node);
>  	}
>  	worker->current_work = work;
> -	spin_unlock_irq(&worker->lock);
> +	raw_spin_unlock_irq(&worker->lock);
>  
>  	if (work) {
>  		__set_current_state(TASK_RUNNING);
> @@ -810,12 +810,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
>  	bool ret = false;
>  	unsigned long flags;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  	if (!queuing_blocked(worker, work)) {
>  		kthread_insert_work(worker, work, &worker->work_list);
>  		ret = true;
>  	}
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(kthread_queue_work);
> @@ -841,7 +841,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
>  	if (WARN_ON_ONCE(!worker))
>  		return;
>  
> -	spin_lock(&worker->lock);
> +	raw_spin_lock(&worker->lock);
>  	/* Work must not be used with >1 worker, see kthread_queue_work(). */
>  	WARN_ON_ONCE(work->worker != worker);
>  
> @@ -850,7 +850,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
>  	list_del_init(&work->node);
>  	kthread_insert_work(worker, work, &worker->work_list);
>  
> -	spin_unlock(&worker->lock);
> +	raw_spin_unlock(&worker->lock);
>  }
>  EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
>  
> @@ -906,14 +906,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
>  	unsigned long flags;
>  	bool ret = false;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  
>  	if (!queuing_blocked(worker, work)) {
>  		__kthread_queue_delayed_work(worker, dwork, delay);
>  		ret = true;
>  	}
>  
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
> @@ -949,7 +949,7 @@ void kthread_flush_work(struct kthread_work *work)
>  	if (!worker)
>  		return;
>  
> -	spin_lock_irq(&worker->lock);
> +	raw_spin_lock_irq(&worker->lock);
>  	/* Work must not be used with >1 worker, see kthread_queue_work(). */
>  	WARN_ON_ONCE(work->worker != worker);
>  
> @@ -961,7 +961,7 @@ void kthread_flush_work(struct kthread_work *work)
>  	else
>  		noop = true;
>  
> -	spin_unlock_irq(&worker->lock);
> +	raw_spin_unlock_irq(&worker->lock);
>  
>  	if (!noop)
>  		wait_for_completion(&fwork.done);
> @@ -994,9 +994,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
>  		 * any queuing is blocked by setting the canceling counter.
>  		 */
>  		work->canceling++;
> -		spin_unlock_irqrestore(&worker->lock, *flags);
> +		raw_spin_unlock_irqrestore(&worker->lock, *flags);
>  		del_timer_sync(&dwork->timer);
> -		spin_lock_irqsave(&worker->lock, *flags);
> +		raw_spin_lock_irqsave(&worker->lock, *flags);
>  		work->canceling--;
>  	}
>  
> @@ -1043,7 +1043,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
>  	unsigned long flags;
>  	int ret = false;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  
>  	/* Do not bother with canceling when never queued. */
>  	if (!work->worker)
> @@ -1060,7 +1060,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
>  fast_queue:
>  	__kthread_queue_delayed_work(worker, dwork, delay);
>  out:
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
> @@ -1074,7 +1074,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
>  	if (!worker)
>  		goto out;
>  
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  	/* Work must not be used with >1 worker, see kthread_queue_work(). */
>  	WARN_ON_ONCE(work->worker != worker);
>  
> @@ -1088,13 +1088,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
>  	 * In the meantime, block any queuing by setting the canceling counter.
>  	 */
>  	work->canceling++;
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  	kthread_flush_work(work);
> -	spin_lock_irqsave(&worker->lock, flags);
> +	raw_spin_lock_irqsave(&worker->lock, flags);
>  	work->canceling--;
>  
>  out_fast:
> -	spin_unlock_irqrestore(&worker->lock, flags);
> +	raw_spin_unlock_irqrestore(&worker->lock, flags);
>  out:
>  	return ret;
>  }
> -- 
> 2.18.0
> 


* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
  2018-10-05 18:10     ` Andrea Parri
@ 2018-10-09 10:56       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-09 10:56 UTC (permalink / raw)
  To: Andrea Parri
  Cc: Julia Cartwright, Ingo Molnar, Thomas Gleixner, Peter Zijlstra,
	linux-kernel, linux-rt-users, Steffen Trumtrar, Tim Sander,
	Guenter Roeck

On 2018-10-05 20:10:35 [+0200], Andrea Parri wrote:
> 
> Clearly not my topic..., but out of curiosity:  What do you mean by
> "well-bounded" and "minimal"?  Can you maybe point me to some doc.?

It means that the lock is not held for an arbitrary amount of time,
such as by processing a list with a thousand items. Well-bounded means
processing no more than one or five (or so) items at a time.
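
A hypothetical sketch of the contrast (process() is a made-up helper):

	/* well-bounded: pop at most one item under the lock */
	raw_spin_lock_irq(&worker->lock);
	if (!list_empty(&worker->work_list)) {
		work = list_first_entry(&worker->work_list,
					struct kthread_work, node);
		list_del_init(&work->node);
	}
	raw_spin_unlock_irq(&worker->lock);

	/* unbounded: time under the lock grows with the list length */
	raw_spin_lock_irq(&worker->lock);
	list_for_each_entry(work, &worker->work_list, node)
		process(work);
	raw_spin_unlock_irq(&worker->lock);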

>   Andrea

Sebastian


* Re: [PATCH 1/2] kthread: convert worker lock to raw spinlock
  2019-02-12 16:25 [PATCH 1/2] kthread: convert worker lock to raw spinlock Sebastian Andrzej Siewior
@ 2019-02-13 12:13 ` Petr Mladek
  0 siblings, 0 replies; 15+ messages in thread
From: Petr Mladek @ 2019-02-13 12:13 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Ingo Molnar, tglx, Julia Cartwright, Guenter Roeck,
	Steffen Trumtrar, Tim Sander

On Tue 2019-02-12 17:25:53, Sebastian Andrzej Siewior wrote:
> From: Julia Cartwright <julia@ni.com>
> 
> In order to enable the queuing of kthread work items from hardirq
> context even when PREEMPT_RT_FULL is enabled, convert the worker
> spin_lock to a raw_spin_lock.
> 
> This is only acceptable to do because the work performed under the lock
> is well-bounded and minimal.

I can confirm that it is well-bounded and minimal. The most expensive
function is probably add_timer(), called from
__kthread_queue_delayed_work(); it might spin a bit to get
timer->base->lock.
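
A condensed sketch of that path (paraphrased from kernel/kthread.c, not
verbatim):

	/* tail of __kthread_queue_delayed_work(), called under worker->lock */
	list_add(&work->node, &worker->delayed_work_list);
	work->worker = worker;
	timer->expires = jiffies + delay;
	add_timer(timer);	/* may briefly spin on timer->base->lock */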

> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Guenter Roeck <linux@roeck-us.net>
> Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
> Reported-by: Tim Sander <tim@krieglstein.org>
> Signed-off-by: Julia Cartwright <julia@ni.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Reviewed-by: Petr Mladek <pmladek@suse.com>

Best Regards,
Petr


* [PATCH 1/2] kthread: convert worker lock to raw spinlock
@ 2019-02-12 16:25 Sebastian Andrzej Siewior
  2019-02-13 12:13 ` Petr Mladek
  0 siblings, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-02-12 16:25 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, tglx, Julia Cartwright, Sebastian Andrzej Siewior,
	Guenter Roeck, Steffen Trumtrar, Tim Sander

From: Julia Cartwright <julia@ni.com>

In order to enable the queuing of kthread work items from hardirq
context even when PREEMPT_RT_FULL is enabled, convert the worker
spin_lock to a raw_spin_lock.

This is only acceptable to do because the work performed under the lock
is well-bounded and minimal.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Guenter Roeck <linux@roeck-us.net>
Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Reported-by: Tim Sander <tim@krieglstein.org>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/kthread.h |  4 ++--
 kernel/kthread.c        | 42 ++++++++++++++++++++---------------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index c1961761311db..6b8c064f0cbcd 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -85,7 +85,7 @@ enum {
 
 struct kthread_worker {
 	unsigned int		flags;
-	spinlock_t		lock;
+	raw_spinlock_t		lock;
 	struct list_head	work_list;
 	struct list_head	delayed_work_list;
 	struct task_struct	*task;
@@ -106,7 +106,7 @@ struct kthread_delayed_work {
 };
 
 #define KTHREAD_WORKER_INIT(worker)	{				\
-	.lock = __SPIN_LOCK_UNLOCKED((worker).lock),			\
+	.lock = __RAW_SPIN_LOCK_UNLOCKED((worker).lock),		\
 	.work_list = LIST_HEAD_INIT((worker).work_list),		\
 	.delayed_work_list = LIST_HEAD_INIT((worker).delayed_work_list),\
 	}
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 087d18d771b53..5641b55783a6f 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -599,7 +599,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
 				struct lock_class_key *key)
 {
 	memset(worker, 0, sizeof(struct kthread_worker));
-	spin_lock_init(&worker->lock);
+	raw_spin_lock_init(&worker->lock);
 	lockdep_set_class_and_name(&worker->lock, key, name);
 	INIT_LIST_HEAD(&worker->work_list);
 	INIT_LIST_HEAD(&worker->delayed_work_list);
@@ -641,21 +641,21 @@ int kthread_worker_fn(void *worker_ptr)
 
 	if (kthread_should_stop()) {
 		__set_current_state(TASK_RUNNING);
-		spin_lock_irq(&worker->lock);
+		raw_spin_lock_irq(&worker->lock);
 		worker->task = NULL;
-		spin_unlock_irq(&worker->lock);
+		raw_spin_unlock_irq(&worker->lock);
 		return 0;
 	}
 
 	work = NULL;
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	if (!list_empty(&worker->work_list)) {
 		work = list_first_entry(&worker->work_list,
 					struct kthread_work, node);
 		list_del_init(&work->node);
 	}
 	worker->current_work = work;
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (work) {
 		__set_current_state(TASK_RUNNING);
@@ -812,12 +812,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
 	bool ret = false;
 	unsigned long flags;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	if (!queuing_blocked(worker, work)) {
 		kthread_insert_work(worker, work, &worker->work_list);
 		ret = true;
 	}
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_work);
@@ -843,7 +843,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
 	if (WARN_ON_ONCE(!worker))
 		return;
 
-	spin_lock(&worker->lock);
+	raw_spin_lock(&worker->lock);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -852,7 +852,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t)
 	list_del_init(&work->node);
 	kthread_insert_work(worker, work, &worker->work_list);
 
-	spin_unlock(&worker->lock);
+	raw_spin_unlock(&worker->lock);
 }
 EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
 
@@ -908,14 +908,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
 	unsigned long flags;
 	bool ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	if (!queuing_blocked(worker, work)) {
 		__kthread_queue_delayed_work(worker, dwork, delay);
 		ret = true;
 	}
 
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
@@ -951,7 +951,7 @@ void kthread_flush_work(struct kthread_work *work)
 	if (!worker)
 		return;
 
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -963,7 +963,7 @@ void kthread_flush_work(struct kthread_work *work)
 	else
 		noop = true;
 
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (!noop)
 		wait_for_completion(&fwork.done);
@@ -996,9 +996,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
 		 * any queuing is blocked by setting the canceling counter.
 		 */
 		work->canceling++;
-		spin_unlock_irqrestore(&worker->lock, *flags);
+		raw_spin_unlock_irqrestore(&worker->lock, *flags);
 		del_timer_sync(&dwork->timer);
-		spin_lock_irqsave(&worker->lock, *flags);
+		raw_spin_lock_irqsave(&worker->lock, *flags);
 		work->canceling--;
 	}
 
@@ -1045,7 +1045,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 	unsigned long flags;
 	int ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	/* Do not bother with canceling when never queued. */
 	if (!work->worker)
@@ -1062,7 +1062,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 fast_queue:
 	__kthread_queue_delayed_work(worker, dwork, delay);
 out:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
@@ -1076,7 +1076,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
 	if (!worker)
 		goto out;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -1090,13 +1090,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
 	 * In the meantime, block any queuing by setting the canceling counter.
 	 */
 	work->canceling++;
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	kthread_flush_work(work);
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	work->canceling--;
 
 out_fast:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 out:
 	return ret;
 }
-- 
2.20.1



Thread overview: 15+ messages
     [not found] <73in2vl5mj.fsf@pengutronix.de>
2018-09-28 21:03 ` [PATCH 0/2] Fix watchdogd wakeup deferral on RT Julia Cartwright
2018-09-28 21:03   ` [PATCH 1/2] kthread: convert worker lock to raw spinlock Julia Cartwright
2018-10-05 16:46     ` Sebastian Andrzej Siewior
2018-10-05 18:10     ` Andrea Parri
2018-10-09 10:56       ` Sebastian Andrzej Siewior
2018-09-28 21:03   ` [PATCH RT 2/2] watchdog, rt: prevent deferral of watchdogd wakeup Julia Cartwright
2018-09-28 22:38     ` kbuild test robot
2018-09-29  6:38       ` Thomas Gleixner
2018-09-29 22:13         ` Sebastian Andrzej Siewior
2018-09-30  1:41           ` [kbuild-all] " Li, Philip
2018-09-28 23:20     ` kbuild test robot
2018-09-30 14:00     ` Guenter Roeck
2018-10-05 16:52     ` Sebastian Andrzej Siewior
2019-02-12 16:25 [PATCH 1/2] kthread: convert worker lock to raw spinlock Sebastian Andrzej Siewior
2019-02-13 12:13 ` Petr Mladek
