linux-kernel.vger.kernel.org archive mirror
* [PATCHSET] workqueue: use irqsafe timer in delayed_work
@ 2012-08-08 21:37 Tejun Heo
  2012-08-08 21:37 ` [PATCH 1/7] workqueue: cosmetic whitespace updates for macro definitions Tejun Heo
                   ` (8 more replies)
  0 siblings, 9 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:37 UTC (permalink / raw)
  To: linux-kernel; +Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen

Hello,

Because an IRQ can arrive between delayed_work->timer being dispatched
and delayed_work_timer_fn() actually queueing delayed_work->work,
try_to_grab_pending() couldn't be used from IRQ handlers: if a handler
hits that window, it returns -EAGAIN perpetually.  This makes it
impossible to steal PENDING from IRQ handlers using
try_to_grab_pending(), leading to the following issues.

* mod_delayed_work() can't be used from IRQ handlers.

* __cancel_delayed_work() can't use the usual try_to_grab_pending(),
  which handles all three states; instead it deals only with the first
  state using a separate implementation.  There's no way to make a
  delayed_work not pending from an IRQ handler.

* The context / behavior differences among cancel_delayed_work(),
  __cancel_delayed_work() and cancel_delayed_work_sync() are subtle and
  confusing (the first two are mostly historical, though).

This patchset makes delayed_work use the irqsafe timer added by the
pending "timer: clean up initializers and implement irqsafe timers"
patchset[1].  This enables try_to_grab_pending() to be used from any
context which in turn makes mod_delayed_work() usable from IRQ
handlers.  cancel_delayed_work() is reimplemented using
try_to_grab_pending() so that it also can be used from IRQ handlers
and its behavior is consistent with other canceling operations.
__cancel_delayed_work() is no longer necessary and deprecated.
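
As an illustration of what this enables, here is a minimal sketch of
rescheduling a delayed_work directly from a hard IRQ handler (foo_dev
and foo_irq() are made-up names, not part of these patches):

#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

/* hypothetical driver fragment - illustration only */
struct foo_dev {
	struct delayed_work poll_work;
};

static irqreturn_t foo_irq(int irq, void *dev_id)
{
	struct foo_dev *foo = dev_id;

	/*
	 * delayed_work->timer is irqsafe now, so this is legal from hard
	 * IRQ context: if ->poll_work was idle it gets queued with the
	 * new delay, otherwise its pending timer is simply adjusted.
	 */
	mod_delayed_work(system_wq, &foo->poll_work, msecs_to_jiffies(10));

	return IRQ_HANDLED;
}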

 0001-workqueue-cosmetic-whitespace-updates-for-macro-defi.patch
 0002-workqueue-make-deferrable-delayed_work-initializer-n.patch
 0003-workqueue-clean-up-delayed_work-initializers-and-add.patch
 0004-workqueue-use-irqsafe-timer-for-delayed_work.patch
 0005-workqueue-use-mod_delayed_work-instead-of-__cancel-q.patch
 0006-workqueue-reimplement-cancel_delayed_work-using-try_.patch
 0007-workqueue-deprecate-__cancel_delayed_work.patch

0001-0003 are prep patches.

0004 makes delayed_work use irqsafe timers.  This makes
try_to_grab_pending() and mod_delayed_work() safe to use from any
context.

0005 converts all __cancel_delayed_work() + queue_delayed_work()
sequences to mod_delayed_work().  The link_watch conversion needs
David's ack.

0006 reimplements cancel_delayed_work() using try_to_grab_pending().

0007 replaces __cancel_delayed_work() calls with cancel_delayed_work()
and deprecates the underscored one.

This patchset is on top of

  [2] wq/for-3.7 (8fcd63664f "workqueue: fix CPU binding of flush_delayed...")
+ [1] timer: clean up initializers and implement irqsafe timers

and available in the following git branch.

 git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git review-delayed_work-irqsafe

diffstat follows.

 arch/powerpc/platforms/cell/cpufreq_spudemand.c |    2 
 block/blk-core.c                                |    8 -
 block/blk-throttle.c                            |    7 -
 drivers/block/floppy.c                          |    5 
 drivers/cpufreq/cpufreq_conservative.c          |    2 
 drivers/cpufreq/cpufreq_ondemand.c              |    2 
 drivers/devfreq/devfreq.c                       |    2 
 drivers/infiniband/core/mad.c                   |   16 --
 drivers/input/keyboard/qt2160.c                 |    3 
 drivers/input/mouse/synaptics_i2c.c             |    7 -
 drivers/net/ethernet/mellanox/mlx4/sense.c      |    2 
 drivers/power/ab8500_btemp.c                    |    2 
 drivers/power/ab8500_charger.c                  |    8 -
 drivers/power/ab8500_fg.c                       |    8 -
 drivers/power/abx500_chargalg.c                 |    4 
 drivers/power/max17040_battery.c                |    2 
 drivers/video/omap2/displays/panel-taal.c       |    6 
 drivers/video/omap2/dss/dsi.c                   |    6 
 include/linux/workqueue.h                       |  155 ++++++++++--------------
 kernel/workqueue.c                              |   50 ++++++-
 mm/slab.c                                       |    2 
 mm/vmstat.c                                     |    2 
 net/core/link_watch.c                           |   21 ---
 net/core/neighbour.c                            |    2 
 net/ipv4/inetpeer.c                             |    2 
 net/sunrpc/cache.c                              |    2 
 26 files changed, 159 insertions(+), 169 deletions(-)

--
tejun

[1] http://thread.gmane.org/gmane.linux.kernel/1340224
[2] git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-3.7


* [PATCH 1/7] workqueue: cosmetic whitespace updates for macro definitions
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
@ 2012-08-08 21:37 ` Tejun Heo
  2012-08-08 21:37 ` [PATCH 2/7] workqueue: make deferrable delayed_work initializer names consistent Tejun Heo
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen, Tejun Heo

Consistently use the last tab position for '\' line continuation in
complex macro definitions.  This is to help the following patches.

This patch is cosmetic.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/linux/workqueue.h |  126 ++++++++++++++++++++++----------------------
 1 files changed, 63 insertions(+), 63 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index b14d5d5..0b94714 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -126,43 +126,43 @@ struct execute_work {
 #define __WORK_INIT_LOCKDEP_MAP(n, k)
 #endif
 
-#define __WORK_INITIALIZER(n, f) {				\
-	.data = WORK_DATA_STATIC_INIT(),			\
-	.entry	= { &(n).entry, &(n).entry },			\
-	.func = (f),						\
-	__WORK_INIT_LOCKDEP_MAP(#n, &(n))			\
+#define __WORK_INITIALIZER(n, f) {					\
+	.data = WORK_DATA_STATIC_INIT(),				\
+	.entry	= { &(n).entry, &(n).entry },				\
+	.func = (f),							\
+	__WORK_INIT_LOCKDEP_MAP(#n, &(n))				\
 	}
 
-#define __DELAYED_WORK_INITIALIZER(n, f) {			\
-	.work = __WORK_INITIALIZER((n).work, (f)),		\
-	.timer = TIMER_INITIALIZER(delayed_work_timer_fn,	\
-				0, (unsigned long)&(n)),	\
+#define __DELAYED_WORK_INITIALIZER(n, f) {				\
+	.work = __WORK_INITIALIZER((n).work, (f)),			\
+	.timer = TIMER_INITIALIZER(delayed_work_timer_fn,		\
+				0, (unsigned long)&(n)),		\
 	}
 
-#define __DEFERRED_WORK_INITIALIZER(n, f) {			\
-	.work = __WORK_INITIALIZER((n).work, (f)),		\
-	.timer = TIMER_DEFERRED_INITIALIZER(delayed_work_timer_fn, \
-				0, (unsigned long)&(n)),	\
+#define __DEFERRED_WORK_INITIALIZER(n, f) {				\
+	.work = __WORK_INITIALIZER((n).work, (f)),			\
+	.timer = TIMER_DEFERRED_INITIALIZER(delayed_work_timer_fn,	\
+				0, (unsigned long)&(n)),		\
 	}
 
-#define DECLARE_WORK(n, f)					\
+#define DECLARE_WORK(n, f)						\
 	struct work_struct n = __WORK_INITIALIZER(n, f)
 
-#define DECLARE_DELAYED_WORK(n, f)				\
+#define DECLARE_DELAYED_WORK(n, f)					\
 	struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f)
 
-#define DECLARE_DEFERRED_WORK(n, f)				\
+#define DECLARE_DEFERRED_WORK(n, f)					\
 	struct delayed_work n = __DEFERRED_WORK_INITIALIZER(n, f)
 
 /*
  * initialize a work item's function pointer
  */
-#define PREPARE_WORK(_work, _func)				\
-	do {							\
-		(_work)->func = (_func);			\
+#define PREPARE_WORK(_work, _func)					\
+	do {								\
+		(_work)->func = (_func);				\
 	} while (0)
 
-#define PREPARE_DELAYED_WORK(_work, _func)			\
+#define PREPARE_DELAYED_WORK(_work, _func)				\
 	PREPARE_WORK(&(_work)->work, (_func))
 
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
@@ -192,7 +192,7 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
 									\
 		__init_work((_work), _onstack);				\
 		(_work)->data = (atomic_long_t) WORK_DATA_INIT();	\
-		lockdep_init_map(&(_work)->lockdep_map, #_work, &__key, 0);\
+		lockdep_init_map(&(_work)->lockdep_map, #_work, &__key, 0); \
 		INIT_LIST_HEAD(&(_work)->entry);			\
 		PREPARE_WORK((_work), (_func));				\
 	} while (0)
@@ -206,38 +206,38 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
 	} while (0)
 #endif
 
-#define INIT_WORK(_work, _func)					\
-	do {							\
-		__INIT_WORK((_work), (_func), 0);		\
+#define INIT_WORK(_work, _func)						\
+	do {								\
+		__INIT_WORK((_work), (_func), 0);			\
 	} while (0)
 
-#define INIT_WORK_ONSTACK(_work, _func)				\
-	do {							\
-		__INIT_WORK((_work), (_func), 1);		\
+#define INIT_WORK_ONSTACK(_work, _func)					\
+	do {								\
+		__INIT_WORK((_work), (_func), 1);			\
 	} while (0)
 
-#define INIT_DELAYED_WORK(_work, _func)				\
-	do {							\
-		INIT_WORK(&(_work)->work, (_func));		\
-		init_timer(&(_work)->timer);			\
-		(_work)->timer.function = delayed_work_timer_fn;\
-		(_work)->timer.data = (unsigned long)(_work);	\
+#define INIT_DELAYED_WORK(_work, _func)					\
+	do {								\
+		INIT_WORK(&(_work)->work, (_func));			\
+		init_timer(&(_work)->timer);				\
+		(_work)->timer.function = delayed_work_timer_fn;	\
+		(_work)->timer.data = (unsigned long)(_work);		\
 	} while (0)
 
-#define INIT_DELAYED_WORK_ONSTACK(_work, _func)			\
-	do {							\
-		INIT_WORK_ONSTACK(&(_work)->work, (_func));	\
-		init_timer_on_stack(&(_work)->timer);		\
-		(_work)->timer.function = delayed_work_timer_fn;\
-		(_work)->timer.data = (unsigned long)(_work);	\
+#define INIT_DELAYED_WORK_ONSTACK(_work, _func)				\
+	do {								\
+		INIT_WORK_ONSTACK(&(_work)->work, (_func));		\
+		init_timer_on_stack(&(_work)->timer);			\
+		(_work)->timer.function = delayed_work_timer_fn;	\
+		(_work)->timer.data = (unsigned long)(_work);		\
 	} while (0)
 
-#define INIT_DELAYED_WORK_DEFERRABLE(_work, _func)		\
-	do {							\
-		INIT_WORK(&(_work)->work, (_func));		\
-		init_timer_deferrable(&(_work)->timer);		\
-		(_work)->timer.function = delayed_work_timer_fn;\
-		(_work)->timer.data = (unsigned long)(_work);	\
+#define INIT_DELAYED_WORK_DEFERRABLE(_work, _func)			\
+	do {								\
+		INIT_WORK(&(_work)->work, (_func));			\
+		init_timer_deferrable(&(_work)->timer);			\
+		(_work)->timer.function = delayed_work_timer_fn;	\
+		(_work)->timer.data = (unsigned long)(_work);		\
 	} while (0)
 
 /**
@@ -340,22 +340,22 @@ __alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
  * Pointer to the allocated workqueue on success, %NULL on failure.
  */
 #ifdef CONFIG_LOCKDEP
-#define alloc_workqueue(fmt, flags, max_active, args...)	\
-({								\
-	static struct lock_class_key __key;			\
-	const char *__lock_name;				\
-								\
-	if (__builtin_constant_p(fmt))				\
-		__lock_name = (fmt);				\
-	else							\
-		__lock_name = #fmt;				\
-								\
-	__alloc_workqueue_key((fmt), (flags), (max_active),	\
-			      &__key, __lock_name, ##args);	\
+#define alloc_workqueue(fmt, flags, max_active, args...)		\
+({									\
+	static struct lock_class_key __key;				\
+	const char *__lock_name;					\
+									\
+	if (__builtin_constant_p(fmt))					\
+		__lock_name = (fmt);					\
+	else								\
+		__lock_name = #fmt;					\
+									\
+	__alloc_workqueue_key((fmt), (flags), (max_active),		\
+			      &__key, __lock_name, ##args);		\
 })
 #else
-#define alloc_workqueue(fmt, flags, max_active, args...)	\
-	__alloc_workqueue_key((fmt), (flags), (max_active),	\
+#define alloc_workqueue(fmt, flags, max_active, args...)		\
+	__alloc_workqueue_key((fmt), (flags), (max_active),		\
 			      NULL, NULL, ##args)
 #endif
 
@@ -372,14 +372,14 @@ __alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
  * RETURNS:
  * Pointer to the allocated workqueue on success, %NULL on failure.
  */
-#define alloc_ordered_workqueue(fmt, flags, args...)		\
+#define alloc_ordered_workqueue(fmt, flags, args...)			\
 	alloc_workqueue(fmt, WQ_UNBOUND | (flags), 1, ##args)
 
-#define create_workqueue(name)					\
+#define create_workqueue(name)						\
 	alloc_workqueue((name), WQ_MEM_RECLAIM, 1)
-#define create_freezable_workqueue(name)			\
+#define create_freezable_workqueue(name)				\
 	alloc_workqueue((name), WQ_FREEZABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
-#define create_singlethread_workqueue(name)			\
+#define create_singlethread_workqueue(name)				\
 	alloc_workqueue((name), WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
 
 extern void destroy_workqueue(struct workqueue_struct *wq);
-- 
1.7.7.3



* [PATCH 2/7] workqueue: make deferrable delayed_work initializer names consistent
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
  2012-08-08 21:37 ` [PATCH 1/7] workqueue: cosmetic whitespace updates for macro definitions Tejun Heo
@ 2012-08-08 21:37 ` Tejun Heo
  2012-08-08 21:37 ` [PATCH 3/7] workqueue: clean up delayed_work initializers and add missing one Tejun Heo
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen, Tejun Heo

Initializers for deferrable delayed_work are confusingly named.

* __DEFERRED_WORK_INITIALIZER()
* DECLARE_DEFERRED_WORK()
* INIT_DELAYED_WORK_DEFERRABLE()

Rename them to

* __DEFERRABLE_WORK_INITIALIZER()
* DECLARE_DEFERRABLE_WORK()
* INIT_DEFERRABLE_WORK()

This patch doesn't cause any functional changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 arch/powerpc/platforms/cell/cpufreq_spudemand.c |    2 +-
 drivers/cpufreq/cpufreq_conservative.c          |    2 +-
 drivers/cpufreq/cpufreq_ondemand.c              |    2 +-
 drivers/devfreq/devfreq.c                       |    2 +-
 drivers/net/ethernet/mellanox/mlx4/sense.c      |    2 +-
 drivers/power/ab8500_btemp.c                    |    2 +-
 drivers/power/ab8500_charger.c                  |    8 ++++----
 drivers/power/ab8500_fg.c                       |    8 ++++----
 drivers/power/abx500_chargalg.c                 |    4 ++--
 drivers/power/max17040_battery.c                |    2 +-
 drivers/video/omap2/displays/panel-taal.c       |    6 +++---
 drivers/video/omap2/dss/dsi.c                   |    4 ++--
 include/linux/workqueue.h                       |    8 ++++----
 mm/slab.c                                       |    2 +-
 mm/vmstat.c                                     |    2 +-
 net/core/neighbour.c                            |    2 +-
 net/ipv4/inetpeer.c                             |    2 +-
 net/sunrpc/cache.c                              |    2 +-
 18 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/platforms/cell/cpufreq_spudemand.c b/arch/powerpc/platforms/cell/cpufreq_spudemand.c
index 23bc9db..82607d6 100644
--- a/arch/powerpc/platforms/cell/cpufreq_spudemand.c
+++ b/arch/powerpc/platforms/cell/cpufreq_spudemand.c
@@ -76,7 +76,7 @@ static void spu_gov_work(struct work_struct *work)
 static void spu_gov_init_work(struct spu_gov_info_struct *info)
 {
 	int delay = usecs_to_jiffies(info->poll_int);
-	INIT_DELAYED_WORK_DEFERRABLE(&info->work, spu_gov_work);
+	INIT_DEFERRABLE_WORK(&info->work, spu_gov_work);
 	schedule_delayed_work_on(info->policy->cpu, &info->work, delay);
 }
 
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 235a340..55f0354 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -466,7 +466,7 @@ static inline void dbs_timer_init(struct cpu_dbs_info_s *dbs_info)
 	delay -= jiffies % delay;
 
 	dbs_info->enable = 1;
-	INIT_DELAYED_WORK_DEFERRABLE(&dbs_info->work, do_dbs_timer);
+	INIT_DEFERRABLE_WORK(&dbs_info->work, do_dbs_timer);
 	schedule_delayed_work_on(dbs_info->cpu, &dbs_info->work, delay);
 }
 
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 836e9b0..14c1af5 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -644,7 +644,7 @@ static inline void dbs_timer_init(struct cpu_dbs_info_s *dbs_info)
 		delay -= jiffies % delay;
 
 	dbs_info->sample_type = DBS_NORMAL_SAMPLE;
-	INIT_DELAYED_WORK_DEFERRABLE(&dbs_info->work, do_dbs_timer);
+	INIT_DEFERRABLE_WORK(&dbs_info->work, do_dbs_timer);
 	schedule_delayed_work_on(dbs_info->cpu, &dbs_info->work, delay);
 }
 
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index 70c31d4..b146d76 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -607,7 +607,7 @@ static int __init devfreq_start_polling(void)
 	mutex_lock(&devfreq_list_lock);
 	polling = false;
 	devfreq_wq = create_freezable_workqueue("devfreq_wq");
-	INIT_DELAYED_WORK_DEFERRABLE(&devfreq_work, devfreq_monitor);
+	INIT_DEFERRABLE_WORK(&devfreq_work, devfreq_monitor);
 	mutex_unlock(&devfreq_list_lock);
 
 	devfreq_monitor(&devfreq_work.work);
diff --git a/drivers/net/ethernet/mellanox/mlx4/sense.c b/drivers/net/ethernet/mellanox/mlx4/sense.c
index 8024982..37b2378 100644
--- a/drivers/net/ethernet/mellanox/mlx4/sense.c
+++ b/drivers/net/ethernet/mellanox/mlx4/sense.c
@@ -153,5 +153,5 @@ void  mlx4_sense_init(struct mlx4_dev *dev)
 	for (port = 1; port <= dev->caps.num_ports; port++)
 		sense->do_sense_port[port] = 1;
 
-	INIT_DELAYED_WORK_DEFERRABLE(&sense->sense_poll, mlx4_sense_port);
+	INIT_DEFERRABLE_WORK(&sense->sense_poll, mlx4_sense_port);
 }
diff --git a/drivers/power/ab8500_btemp.c b/drivers/power/ab8500_btemp.c
index bba3cca..3041514 100644
--- a/drivers/power/ab8500_btemp.c
+++ b/drivers/power/ab8500_btemp.c
@@ -1018,7 +1018,7 @@ static int __devinit ab8500_btemp_probe(struct platform_device *pdev)
 	}
 
 	/* Init work for measuring temperature periodically */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->btemp_periodic_work,
+	INIT_DEFERRABLE_WORK(&di->btemp_periodic_work,
 		ab8500_btemp_periodic_work);
 
 	/* Identify the battery */
diff --git a/drivers/power/ab8500_charger.c b/drivers/power/ab8500_charger.c
index d4f0c98..0701dbc 100644
--- a/drivers/power/ab8500_charger.c
+++ b/drivers/power/ab8500_charger.c
@@ -2618,9 +2618,9 @@ static int __devinit ab8500_charger_probe(struct platform_device *pdev)
 	}
 
 	/* Init work for HW failure check */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->check_hw_failure_work,
+	INIT_DEFERRABLE_WORK(&di->check_hw_failure_work,
 		ab8500_charger_check_hw_failure_work);
-	INIT_DELAYED_WORK_DEFERRABLE(&di->check_usbchgnotok_work,
+	INIT_DEFERRABLE_WORK(&di->check_usbchgnotok_work,
 		ab8500_charger_check_usbchargernotok_work);
 
 	/*
@@ -2632,10 +2632,10 @@ static int __devinit ab8500_charger_probe(struct platform_device *pdev)
 	 * watchdog have to be kicked by the charger driver
 	 * when the AC charger is disabled
 	 */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->kick_wd_work,
+	INIT_DEFERRABLE_WORK(&di->kick_wd_work,
 		ab8500_charger_kick_watchdog_work);
 
-	INIT_DELAYED_WORK_DEFERRABLE(&di->check_vbat_work,
+	INIT_DEFERRABLE_WORK(&di->check_vbat_work,
 		ab8500_charger_check_vbat_work);
 
 	/* Init work for charger detection */
diff --git a/drivers/power/ab8500_fg.c b/drivers/power/ab8500_fg.c
index bf02225..5c9e7c2 100644
--- a/drivers/power/ab8500_fg.c
+++ b/drivers/power/ab8500_fg.c
@@ -2516,19 +2516,19 @@ static int __devinit ab8500_fg_probe(struct platform_device *pdev)
 	INIT_WORK(&di->fg_acc_cur_work, ab8500_fg_acc_cur_work);
 
 	/* Init work for reinitialising the fg algorithm */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->fg_reinit_work,
+	INIT_DEFERRABLE_WORK(&di->fg_reinit_work,
 		ab8500_fg_reinit_work);
 
 	/* Work delayed Queue to run the state machine */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->fg_periodic_work,
+	INIT_DEFERRABLE_WORK(&di->fg_periodic_work,
 		ab8500_fg_periodic_work);
 
 	/* Work to check low battery condition */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->fg_low_bat_work,
+	INIT_DEFERRABLE_WORK(&di->fg_low_bat_work,
 		ab8500_fg_low_bat_work);
 
 	/* Init work for HW failure check */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->fg_check_hw_failure_work,
+	INIT_DEFERRABLE_WORK(&di->fg_check_hw_failure_work,
 		ab8500_fg_check_hw_failure_work);
 
 	/* Initialize OVV, and other registers */
diff --git a/drivers/power/abx500_chargalg.c b/drivers/power/abx500_chargalg.c
index 804b88c..4d30280 100644
--- a/drivers/power/abx500_chargalg.c
+++ b/drivers/power/abx500_chargalg.c
@@ -1848,9 +1848,9 @@ static int __devinit abx500_chargalg_probe(struct platform_device *pdev)
 	}
 
 	/* Init work for chargalg */
-	INIT_DELAYED_WORK_DEFERRABLE(&di->chargalg_periodic_work,
+	INIT_DEFERRABLE_WORK(&di->chargalg_periodic_work,
 		abx500_chargalg_periodic_work);
-	INIT_DELAYED_WORK_DEFERRABLE(&di->chargalg_wd_work,
+	INIT_DEFERRABLE_WORK(&di->chargalg_wd_work,
 		abx500_chargalg_wd_work);
 
 	/* Init work for chargalg */
diff --git a/drivers/power/max17040_battery.c b/drivers/power/max17040_battery.c
index c284143..58e6783 100644
--- a/drivers/power/max17040_battery.c
+++ b/drivers/power/max17040_battery.c
@@ -232,7 +232,7 @@ static int __devinit max17040_probe(struct i2c_client *client,
 	max17040_reset(client);
 	max17040_get_version(client);
 
-	INIT_DELAYED_WORK_DEFERRABLE(&chip->work, max17040_work);
+	INIT_DEFERRABLE_WORK(&chip->work, max17040_work);
 	schedule_delayed_work(&chip->work, MAX17040_DELAY);
 
 	return 0;
diff --git a/drivers/video/omap2/displays/panel-taal.c b/drivers/video/omap2/displays/panel-taal.c
index 3f5acc7..6b5e6e0 100644
--- a/drivers/video/omap2/displays/panel-taal.c
+++ b/drivers/video/omap2/displays/panel-taal.c
@@ -906,7 +906,7 @@ static int taal_probe(struct omap_dss_device *dssdev)
 		r = -ENOMEM;
 		goto err_wq;
 	}
-	INIT_DELAYED_WORK_DEFERRABLE(&td->esd_work, taal_esd_work);
+	INIT_DEFERRABLE_WORK(&td->esd_work, taal_esd_work);
 	INIT_DELAYED_WORK(&td->ulps_work, taal_ulps_work);
 
 	dev_set_drvdata(&dssdev->dev, td);
@@ -962,8 +962,8 @@ static int taal_probe(struct omap_dss_device *dssdev)
 			goto err_irq;
 		}
 
-		INIT_DELAYED_WORK_DEFERRABLE(&td->te_timeout_work,
-					taal_te_timeout_work_callback);
+		INIT_DEFERRABLE_WORK(&td->te_timeout_work,
+				     taal_te_timeout_work_callback);
 
 		dev_dbg(&dssdev->dev, "Using GPIO TE\n");
 	}
diff --git a/drivers/video/omap2/dss/dsi.c b/drivers/video/omap2/dss/dsi.c
index b07e886..fd40f26 100644
--- a/drivers/video/omap2/dss/dsi.c
+++ b/drivers/video/omap2/dss/dsi.c
@@ -4863,8 +4863,8 @@ static int __init omap_dsihw_probe(struct platform_device *dsidev)
 	mutex_init(&dsi->lock);
 	sema_init(&dsi->bus_lock, 1);
 
-	INIT_DELAYED_WORK_DEFERRABLE(&dsi->framedone_timeout_work,
-			dsi_framedone_timeout_work_callback);
+	INIT_DEFERRABLE_WORK(&dsi->framedone_timeout_work,
+			     dsi_framedone_timeout_work_callback);
 
 #ifdef DSI_CATCH_MISSING_TE
 	init_timer(&dsi->te_timer);
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 0b94714..1c1a65b 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -139,7 +139,7 @@ struct execute_work {
 				0, (unsigned long)&(n)),		\
 	}
 
-#define __DEFERRED_WORK_INITIALIZER(n, f) {				\
+#define __DEFERRABLE_WORK_INITIALIZER(n, f) {				\
 	.work = __WORK_INITIALIZER((n).work, (f)),			\
 	.timer = TIMER_DEFERRED_INITIALIZER(delayed_work_timer_fn,	\
 				0, (unsigned long)&(n)),		\
@@ -151,8 +151,8 @@ struct execute_work {
 #define DECLARE_DELAYED_WORK(n, f)					\
 	struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f)
 
-#define DECLARE_DEFERRED_WORK(n, f)					\
-	struct delayed_work n = __DEFERRED_WORK_INITIALIZER(n, f)
+#define DECLARE_DEFERRABLE_WORK(n, f)					\
+	struct delayed_work n = __DEFERRABLE_WORK_INITIALIZER(n, f)
 
 /*
  * initialize a work item's function pointer
@@ -232,7 +232,7 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
 		(_work)->timer.data = (unsigned long)(_work);		\
 	} while (0)
 
-#define INIT_DELAYED_WORK_DEFERRABLE(_work, _func)			\
+#define INIT_DEFERRABLE_WORK(_work, _func)				\
 	do {								\
 		INIT_WORK(&(_work)->work, (_func));			\
 		init_timer_deferrable(&(_work)->timer);			\
diff --git a/mm/slab.c b/mm/slab.c
index f8b0d53..35b5cb0 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -900,7 +900,7 @@ static void __cpuinit start_cpu_timer(int cpu)
 	 */
 	if (keventd_up() && reap_work->work.func == NULL) {
 		init_reap_node(cpu);
-		INIT_DELAYED_WORK_DEFERRABLE(reap_work, cache_reap);
+		INIT_DEFERRABLE_WORK(reap_work, cache_reap);
 		schedule_delayed_work_on(cpu, reap_work,
 					__round_jiffies_relative(HZ, cpu));
 	}
diff --git a/mm/vmstat.c b/mm/vmstat.c
index df7a674..b3e3b9d 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1157,7 +1157,7 @@ static void __cpuinit start_cpu_timer(int cpu)
 {
 	struct delayed_work *work = &per_cpu(vmstat_work, cpu);
 
-	INIT_DELAYED_WORK_DEFERRABLE(work, vmstat_update);
+	INIT_DEFERRABLE_WORK(work, vmstat_update);
 	schedule_delayed_work_on(cpu, work, __round_jiffies_relative(HZ, cpu));
 }
 
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 117afaf..112c6e2 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -1545,7 +1545,7 @@ static void neigh_table_init_no_netlink(struct neigh_table *tbl)
 		panic("cannot allocate neighbour cache hashes");
 
 	rwlock_init(&tbl->lock);
-	INIT_DELAYED_WORK_DEFERRABLE(&tbl->gc_work, neigh_periodic_work);
+	INIT_DEFERRABLE_WORK(&tbl->gc_work, neigh_periodic_work);
 	schedule_delayed_work(&tbl->gc_work, tbl->parms.reachable_time);
 	setup_timer(&tbl->proxy_timer, neigh_proxy_process, (unsigned long)tbl);
 	skb_queue_head_init_class(&tbl->proxy_queue,
diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index e1e0a4e..7b55c86 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -194,7 +194,7 @@ void __init inet_initpeers(void)
 			0, SLAB_HWCACHE_ALIGN | SLAB_PANIC,
 			NULL);
 
-	INIT_DELAYED_WORK_DEFERRABLE(&gc_work, inetpeer_gc_worker);
+	INIT_DEFERRABLE_WORK(&gc_work, inetpeer_gc_worker);
 }
 
 static int addr_compare(const struct inetpeer_addr *a,
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 2afd2a8..2a68bb3 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -1635,7 +1635,7 @@ static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
 
 void __init cache_initialize(void)
 {
-	INIT_DELAYED_WORK_DEFERRABLE(&cache_cleaner, do_cache_clean);
+	INIT_DEFERRABLE_WORK(&cache_cleaner, do_cache_clean);
 }
 
 int cache_register_net(struct cache_detail *cd, struct net *net)
-- 
1.7.7.3



* [PATCH 3/7] workqueue: clean up delayed_work initializers and add missing one
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
  2012-08-08 21:37 ` [PATCH 1/7] workqueue: cosmetic whitespace updates for macro definitions Tejun Heo
  2012-08-08 21:37 ` [PATCH 2/7] workqueue: make deferrable delayed_work initializer names consistent Tejun Heo
@ 2012-08-08 21:37 ` Tejun Heo
  2012-08-08 21:37 ` [PATCH 4/7] workqueue: use irqsafe timer for delayed_work Tejun Heo
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen, Tejun Heo

Reimplement delayed_work initializers using new timer initializers
which take timer flags.  This reduces code duplication and will ease
further initializer changes.  This patch also adds a missing
initializer - INIT_DEFERRABLE_WORK_ONSTACK().
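
Call sites keep their current shape; for example, a self-rearming
deferrable work item would still look like this (the foo_gc* names are
made up for illustration):

#include <linux/jiffies.h>
#include <linux/workqueue.h>

static void foo_gc(struct work_struct *work);
static DECLARE_DEFERRABLE_WORK(foo_gc_work, foo_gc);

static void foo_gc(struct work_struct *work)
{
	/* periodic, power-friendly housekeeping; the deferrable timer
	 * won't wake an otherwise idle CPU */
	schedule_delayed_work(&foo_gc_work, round_jiffies_relative(HZ));
}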

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/linux/workqueue.h |   48 +++++++++++++++++++++-----------------------
 1 files changed, 23 insertions(+), 25 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 1c1a65b..83755f4 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -133,26 +133,20 @@ struct execute_work {
 	__WORK_INIT_LOCKDEP_MAP(#n, &(n))				\
 	}
 
-#define __DELAYED_WORK_INITIALIZER(n, f) {				\
+#define __DELAYED_WORK_INITIALIZER(n, f, tflags) {			\
 	.work = __WORK_INITIALIZER((n).work, (f)),			\
-	.timer = TIMER_INITIALIZER(delayed_work_timer_fn,		\
-				0, (unsigned long)&(n)),		\
-	}
-
-#define __DEFERRABLE_WORK_INITIALIZER(n, f) {				\
-	.work = __WORK_INITIALIZER((n).work, (f)),			\
-	.timer = TIMER_DEFERRED_INITIALIZER(delayed_work_timer_fn,	\
-				0, (unsigned long)&(n)),		\
+	.timer = __TIMER_INITIALIZER(delayed_work_timer_fn,		\
+			0, (unsigned long)&(n), (tflags)),		\
 	}
 
 #define DECLARE_WORK(n, f)						\
 	struct work_struct n = __WORK_INITIALIZER(n, f)
 
 #define DECLARE_DELAYED_WORK(n, f)					\
-	struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f)
+	struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f, 0)
 
 #define DECLARE_DEFERRABLE_WORK(n, f)					\
-	struct delayed_work n = __DEFERRABLE_WORK_INITIALIZER(n, f)
+	struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f, TIMER_DEFERRABLE)
 
 /*
  * initialize a work item's function pointer
@@ -216,29 +210,33 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
 		__INIT_WORK((_work), (_func), 1);			\
 	} while (0)
 
-#define INIT_DELAYED_WORK(_work, _func)					\
+#define __INIT_DELAYED_WORK(_work, _func, _tflags)			\
 	do {								\
 		INIT_WORK(&(_work)->work, (_func));			\
-		init_timer(&(_work)->timer);				\
-		(_work)->timer.function = delayed_work_timer_fn;	\
-		(_work)->timer.data = (unsigned long)(_work);		\
+		__setup_timer(&(_work)->timer, delayed_work_timer_fn,	\
+			      (unsigned long)(_work), (_tflags));	\
 	} while (0)
 
-#define INIT_DELAYED_WORK_ONSTACK(_work, _func)				\
+#define __INIT_DELAYED_WORK_ONSTACK(_work, _func, _tflags)		\
 	do {								\
 		INIT_WORK_ONSTACK(&(_work)->work, (_func));		\
-		init_timer_on_stack(&(_work)->timer);			\
-		(_work)->timer.function = delayed_work_timer_fn;	\
-		(_work)->timer.data = (unsigned long)(_work);		\
+		__setup_timer_on_stack(&(_work)->timer,			\
+				       delayed_work_timer_fn,		\
+				       (unsigned long)(_work),		\
+				       (_tflags));			\
 	} while (0)
 
+#define INIT_DELAYED_WORK(_work, _func)					\
+	__INIT_DELAYED_WORK(_work, _func, 0)
+
+#define INIT_DELAYED_WORK_ONSTACK(_work, _func)				\
+	__INIT_DELAYED_WORK_ONSTACK(_work, _func, 0)
+
 #define INIT_DEFERRABLE_WORK(_work, _func)				\
-	do {								\
-		INIT_WORK(&(_work)->work, (_func));			\
-		init_timer_deferrable(&(_work)->timer);			\
-		(_work)->timer.function = delayed_work_timer_fn;	\
-		(_work)->timer.data = (unsigned long)(_work);		\
-	} while (0)
+	__INIT_DELAYED_WORK(_work, _func, TIMER_DEFERRABLE)
+
+#define INIT_DEFERRABLE_WORK_ONSTACK(_work, _func)			\
+	__INIT_DELAYED_WORK_ONSTACK(_work, _func, TIMER_DEFERRABLE)
 
 /**
  * work_pending - Find out whether a work item is currently pending
-- 
1.7.7.3



* [PATCH 4/7] workqueue: use irqsafe timer for delayed_work
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
                   ` (2 preceding siblings ...)
  2012-08-08 21:37 ` [PATCH 3/7] workqueue: clean up delayed_work initializers and add missing one Tejun Heo
@ 2012-08-08 21:37 ` Tejun Heo
  2012-08-08 21:38 ` [PATCH 5/7] workqueue: use mod_delayed_work() instead of __cancel + queue Tejun Heo
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen, Tejun Heo

Up to now, try_to_grab_pending() couldn't be used on a delayed_work
from IRQ handlers because an IRQ may arrive while
delayed_work_timer_fn() is in progress, leading to indefinite -EAGAIN.

This patch makes delayed_work use the new TIMER_IRQSAFE flag for
delayed_work->timer.  This makes try_to_grab_pending() and thus
mod_delayed_work_on() safe to call from IRQ handlers.
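
As a rough sketch of the kind of caller this allows (everything here,
bar_wq, bar_stats and the handler, is hypothetical):

#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

struct bar_stats {
	struct delayed_work refresh_work;
};

static struct workqueue_struct *bar_wq;	/* alloc_workqueue()'d at probe */

static irqreturn_t bar_irq(int irq, void *dev_id)
{
	struct bar_stats *stats = dev_id;

	/*
	 * try_to_grab_pending() inside mod_delayed_work_on() can no
	 * longer livelock against an interrupted delayed_work_timer_fn(),
	 * so calling it from a hard IRQ handler is fine.
	 */
	mod_delayed_work_on(smp_processor_id(), bar_wq,
			    &stats->refresh_work, HZ / 10);
	return IRQ_HANDLED;
}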

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/linux/workqueue.h |    8 +++++---
 kernel/workqueue.c        |   20 +++++++++++---------
 2 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 83755f4..093968e 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -136,7 +136,8 @@ struct execute_work {
 #define __DELAYED_WORK_INITIALIZER(n, f, tflags) {			\
 	.work = __WORK_INITIALIZER((n).work, (f)),			\
 	.timer = __TIMER_INITIALIZER(delayed_work_timer_fn,		\
-			0, (unsigned long)&(n), (tflags)),		\
+				     0, (unsigned long)&(n),		\
+				     (tflags) | TIMER_IRQSAFE),		\
 	}
 
 #define DECLARE_WORK(n, f)						\
@@ -214,7 +215,8 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
 	do {								\
 		INIT_WORK(&(_work)->work, (_func));			\
 		__setup_timer(&(_work)->timer, delayed_work_timer_fn,	\
-			      (unsigned long)(_work), (_tflags));	\
+			      (unsigned long)(_work),			\
+			      (_tflags) | TIMER_IRQSAFE);		\
 	} while (0)
 
 #define __INIT_DELAYED_WORK_ONSTACK(_work, _func, _tflags)		\
@@ -223,7 +225,7 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
 		__setup_timer_on_stack(&(_work)->timer,			\
 				       delayed_work_timer_fn,		\
 				       (unsigned long)(_work),		\
-				       (_tflags));			\
+				       (_tflags) | TIMER_IRQSAFE);	\
 	} while (0)
 
 #define INIT_DELAYED_WORK(_work, _func)					\
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 11723c5..9087599 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1041,16 +1041,14 @@ static void cwq_dec_nr_in_flight(struct cpu_workqueue_struct *cwq, int color,
  *		for arbitrarily long
  *
  * On >= 0 return, the caller owns @work's PENDING bit.  To avoid getting
- * preempted while holding PENDING and @work off queue, preemption must be
- * disabled on entry.  This ensures that we don't return -EAGAIN while
- * another task is preempted in this function.
+ * interrupted while holding PENDING and @work off queue, irq must be
+ * disabled on entry.  This, combined with delayed_work->timer being
+ * irqsafe, ensures that we return -EAGAIN for finite short period of time.
  *
  * On successful return, >= 0, irq is disabled and the caller is
  * responsible for releasing it using local_irq_restore(*@flags).
  *
- * This function is safe to call from any context other than IRQ handler.
- * An IRQ handler may run on top of delayed_work_timer_fn() which can make
- * this function return -EAGAIN perpetually.
+ * This function is safe to call from any context including IRQ handler.
  */
 static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 			       unsigned long *flags)
@@ -1065,6 +1063,11 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 	if (is_dwork) {
 		struct delayed_work *dwork = to_delayed_work(work);
 
+		/*
+		 * dwork->timer is irqsafe.  If del_timer() fails, it's
+		 * guaranteed that the timer is not queued anywhere and not
+		 * running on the local CPU.
+		 */
 		if (likely(del_timer(&dwork->timer)))
 			return 1;
 	}
@@ -1318,9 +1321,8 @@ void delayed_work_timer_fn(unsigned long __data)
 	struct delayed_work *dwork = (struct delayed_work *)__data;
 	struct cpu_workqueue_struct *cwq = get_work_cwq(&dwork->work);
 
-	local_irq_disable();
+	/* should have been called from irqsafe timer with irq already off */
 	__queue_work(dwork->cpu, cwq->wq, &dwork->work);
-	local_irq_enable();
 }
 EXPORT_SYMBOL_GPL(delayed_work_timer_fn);
 
@@ -1429,7 +1431,7 @@ EXPORT_SYMBOL_GPL(queue_delayed_work);
  * Returns %false if @dwork was idle and queued, %true if @dwork was
  * pending and its timer was modified.
  *
- * This function is safe to call from any context other than IRQ handler.
+ * This function is safe to call from any context including IRQ handler.
  * See try_to_grab_pending() for details.
  */
 bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
-- 
1.7.7.3



* [PATCH 5/7] workqueue: use mod_delayed_work() instead of __cancel + queue
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
                   ` (3 preceding siblings ...)
  2012-08-08 21:37 ` [PATCH 4/7] workqueue: use irqsafe timer for delayed_work Tejun Heo
@ 2012-08-08 21:38 ` Tejun Heo
  2012-08-08 21:38 ` [PATCH 6/7] workqueue: reimplement cancel_delayed_work() using try_to_grab_pending() Tejun Heo
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen, Tejun Heo

Now that mod_delayed_work() is safe to call from IRQ handlers,
__cancel_delayed_work() followed by queue_delayed_work() can be
replaced with mod_delayed_work(); the generic pattern is sketched
after the list below.

Most conversions are straightforward except for the following.

* net/core/link_watch.c: linkwatch_schedule_work() was doing quite an
  elaborate dance around its delayed_work.  Collapse it so that
  linkwatch_work is queued for immediate execution if LW_URGENT is set
  and the existing timer is kept otherwise.
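
The generic shape of the conversion, as a sketch (foo_wq and foo_kick()
are made-up names standing in for whatever the call site uses):

#include <linux/workqueue.h>

extern struct workqueue_struct *foo_wq;	/* created elsewhere */

static void foo_kick(struct delayed_work *dwork, unsigned long delay)
{
	/*
	 * Before: open-coded "modify the timeout" - cancel the timer if
	 * it's still pending, then queue again:
	 *
	 *	__cancel_delayed_work(dwork);
	 *	queue_delayed_work(foo_wq, dwork, delay);
	 *
	 * After: a single call with the same net effect, which is also
	 * safe from IRQ handlers.
	 */
	mod_delayed_work(foo_wq, dwork, delay);
}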

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 block/blk-core.c                    |    6 ++----
 block/blk-throttle.c                |    7 +------
 drivers/block/floppy.c              |    3 +--
 drivers/infiniband/core/mad.c       |   14 +++++---------
 drivers/input/keyboard/qt2160.c     |    3 +--
 drivers/input/mouse/synaptics_i2c.c |    7 +------
 net/core/link_watch.c               |   21 ++++++---------------
 7 files changed, 17 insertions(+), 44 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 4b4dbdf..4b8b606 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -319,10 +319,8 @@ EXPORT_SYMBOL(__blk_run_queue);
  */
 void blk_run_queue_async(struct request_queue *q)
 {
-	if (likely(!blk_queue_stopped(q))) {
-		__cancel_delayed_work(&q->delay_work);
-		queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
-	}
+	if (likely(!blk_queue_stopped(q)))
+		mod_delayed_work(kblockd_workqueue, &q->delay_work, 0);
 }
 EXPORT_SYMBOL(blk_run_queue_async);
 
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index e287c19..3d3dcae 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -930,12 +930,7 @@ throtl_schedule_delayed_work(struct throtl_data *td, unsigned long delay)
 
 	/* schedule work if limits changed even if no bio is queued */
 	if (total_nr_queued(td) || td->limits_changed) {
-		/*
-		 * We might have a work scheduled to be executed in future.
-		 * Cancel that and schedule a new one.
-		 */
-		__cancel_delayed_work(dwork);
-		queue_delayed_work(kthrotld_workqueue, dwork, delay);
+		mod_delayed_work(kthrotld_workqueue, dwork, delay);
 		throtl_log(td, "schedule work. delay=%lu jiffies=%lu",
 				delay, jiffies);
 	}
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index a7d6347..55a5bc0 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -672,7 +672,6 @@ static void __reschedule_timeout(int drive, const char *message)
 
 	if (drive == current_reqD)
 		drive = current_drive;
-	__cancel_delayed_work(&fd_timeout);
 
 	if (drive < 0 || drive >= N_DRIVE) {
 		delay = 20UL * HZ;
@@ -680,7 +679,7 @@ static void __reschedule_timeout(int drive, const char *message)
 	} else
 		delay = UDP->timeout;
 
-	queue_delayed_work(floppy_wq, &fd_timeout, delay);
+	mod_delayed_work(floppy_wq, &fd_timeout, delay);
 	if (UDP->flags & FD_DEBUG)
 		DPRINT("reschedule timeout %s\n", message);
 	timeout_message = message;
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index b0d0bc8..b593814 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -2013,13 +2013,11 @@ static void adjust_timeout(struct ib_mad_agent_private *mad_agent_priv)
 		if (time_after(mad_agent_priv->timeout,
 			       mad_send_wr->timeout)) {
 			mad_agent_priv->timeout = mad_send_wr->timeout;
-			__cancel_delayed_work(&mad_agent_priv->timed_work);
 			delay = mad_send_wr->timeout - jiffies;
 			if ((long)delay <= 0)
 				delay = 1;
-			queue_delayed_work(mad_agent_priv->qp_info->
-					   port_priv->wq,
-					   &mad_agent_priv->timed_work, delay);
+			mod_delayed_work(mad_agent_priv->qp_info->port_priv->wq,
+					 &mad_agent_priv->timed_work, delay);
 		}
 	}
 }
@@ -2052,11 +2050,9 @@ static void wait_for_response(struct ib_mad_send_wr_private *mad_send_wr)
 	list_add(&mad_send_wr->agent_list, list_item);
 
 	/* Reschedule a work item if we have a shorter timeout */
-	if (mad_agent_priv->wait_list.next == &mad_send_wr->agent_list) {
-		__cancel_delayed_work(&mad_agent_priv->timed_work);
-		queue_delayed_work(mad_agent_priv->qp_info->port_priv->wq,
-				   &mad_agent_priv->timed_work, delay);
-	}
+	if (mad_agent_priv->wait_list.next == &mad_send_wr->agent_list)
+		mod_delayed_work(mad_agent_priv->qp_info->port_priv->wq,
+				 &mad_agent_priv->timed_work, delay);
 }
 
 void ib_reset_mad_timeout(struct ib_mad_send_wr_private *mad_send_wr,
diff --git a/drivers/input/keyboard/qt2160.c b/drivers/input/keyboard/qt2160.c
index e7a5e36..76b7d43 100644
--- a/drivers/input/keyboard/qt2160.c
+++ b/drivers/input/keyboard/qt2160.c
@@ -156,8 +156,7 @@ static irqreturn_t qt2160_irq(int irq, void *_qt2160)
 
 	spin_lock_irqsave(&qt2160->lock, flags);
 
-	__cancel_delayed_work(&qt2160->dwork);
-	schedule_delayed_work(&qt2160->dwork, 0);
+	mod_delayed_work(system_wq, &qt2160->dwork, 0);
 
 	spin_unlock_irqrestore(&qt2160->lock, flags);
 
diff --git a/drivers/input/mouse/synaptics_i2c.c b/drivers/input/mouse/synaptics_i2c.c
index f1467570..063a174 100644
--- a/drivers/input/mouse/synaptics_i2c.c
+++ b/drivers/input/mouse/synaptics_i2c.c
@@ -376,12 +376,7 @@ static void synaptics_i2c_reschedule_work(struct synaptics_i2c *touch,
 
 	spin_lock_irqsave(&touch->lock, flags);
 
-	/*
-	 * If work is already scheduled then subsequent schedules will not
-	 * change the scheduled time that's why we have to cancel it first.
-	 */
-	__cancel_delayed_work(&touch->dwork);
-	schedule_delayed_work(&touch->dwork, delay);
+	mod_delayed_work(system_wq, &touch->dwork, delay);
 
 	spin_unlock_irqrestore(&touch->lock, flags);
 }
diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index c3519c6..8e397a6 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -120,22 +120,13 @@ static void linkwatch_schedule_work(int urgent)
 		delay = 0;
 
 	/*
-	 * This is true if we've scheduled it immeditately or if we don't
-	 * need an immediate execution and it's already pending.
+	 * If urgent, schedule immediate execution; otherwise, don't
+	 * override the existing timer.
 	 */
-	if (schedule_delayed_work(&linkwatch_work, delay) == !delay)
-		return;
-
-	/* Don't bother if there is nothing urgent. */
-	if (!test_bit(LW_URGENT, &linkwatch_flags))
-		return;
-
-	/* It's already running which is good enough. */
-	if (!__cancel_delayed_work(&linkwatch_work))
-		return;
-
-	/* Otherwise we reschedule it again for immediate execution. */
-	schedule_delayed_work(&linkwatch_work, 0);
+	if (test_bit(LW_URGENT, &linkwatch_flags))
+		mod_delayed_work(system_wq, &linkwatch_work, 0);
+	else
+		schedule_delayed_work(&linkwatch_work, delay);
 }
 
 
-- 
1.7.7.3



* [PATCH 6/7] workqueue: reimplement cancel_delayed_work() using try_to_grab_pending()
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
                   ` (4 preceding siblings ...)
  2012-08-08 21:38 ` [PATCH 5/7] workqueue: use mod_delayed_work() instead of __cancel + queue Tejun Heo
@ 2012-08-08 21:38 ` Tejun Heo
  2012-08-08 21:38 ` [PATCH 7/7] workqueue: deprecate __cancel_delayed_work() Tejun Heo
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen, Tejun Heo

cancel_delayed_work() can't be called from IRQ handlers due to its use
of del_timer_sync() and can't cancel work items which have already been
transferred from the timer to the worklist.

Also, unlike other flush and cancel functions, a canceled delayed_work
would still point to the last associated cpu_workqueue.  If the
workqueue is destroyed afterwards and the work item is re-used on a
different workqueue, the queueing code can oops trying to dereference
the already freed cpu_workqueue.

This patch reimplements cancel_delayed_work() using
try_to_grab_pending() and set_work_cpu_and_clear_pending().  This
allows the function to be called from IRQ handlers and makes its
behavior consistent with other flush / cancel functions.
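
A sketch of the resulting calling convention (the baz_* names are
invented for illustration):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct baz_dev {
	struct delayed_work timeout_work;
};

/* IRQ handler: canceling is now allowed here; the callback may still
 * be running if it had already started, which is acceptable for a
 * plain cancel. */
static irqreturn_t baz_done_irq(int irq, void *dev_id)
{
	struct baz_dev *baz = dev_id;

	cancel_delayed_work(&baz->timeout_work);
	return IRQ_HANDLED;
}

/* teardown: must be sure the callback has finished, so use _sync */
static void baz_remove(struct baz_dev *baz)
{
	cancel_delayed_work_sync(&baz->timeout_work);
}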

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/workqueue.h |   17 +----------------
 kernel/workqueue.c        |   30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 093968e..6306157 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -417,6 +417,7 @@ extern bool cancel_work_sync(struct work_struct *work);
 
 extern bool flush_delayed_work(struct delayed_work *dwork);
 extern bool flush_delayed_work_sync(struct delayed_work *work);
+extern bool cancel_delayed_work(struct delayed_work *dwork);
 extern bool cancel_delayed_work_sync(struct delayed_work *dwork);
 
 extern void workqueue_set_max_active(struct workqueue_struct *wq,
@@ -426,22 +427,6 @@ extern unsigned int work_cpu(struct work_struct *work);
 extern unsigned int work_busy(struct work_struct *work);
 
 /*
- * Kill off a pending schedule_delayed_work().  Note that the work callback
- * function may still be running on return from cancel_delayed_work(), unless
- * it returns 1 and the work doesn't re-arm itself. Run flush_workqueue() or
- * cancel_work_sync() to wait on it.
- */
-static inline bool cancel_delayed_work(struct delayed_work *work)
-{
-	bool ret;
-
-	ret = del_timer_sync(&work->timer);
-	if (ret)
-		work_clear_pending(&work->work);
-	return ret;
-}
-
-/*
  * Like above, but uses del_timer() instead of del_timer_sync(). This means,
  * if it returns 0 the timer function may be running and the queueing is in
  * progress.
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9087599..7413242 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3031,6 +3031,36 @@ bool flush_delayed_work_sync(struct delayed_work *dwork)
 EXPORT_SYMBOL(flush_delayed_work_sync);
 
 /**
+ * cancel_delayed_work - cancel a delayed work
+ * @dwork: delayed_work to cancel
+ *
+ * Kill off a pending delayed_work.  Returns %true if @dwork was pending
+ * and canceled; %false if wasn't pending.  Note that the work callback
+ * function may still be running on return, unless it returns %true and the
+ * work doesn't re-arm itself.  Explicitly flush or use cancel_work_sync()
+ * to wait on it.
+ *
+ * This function is safe to call from any context including IRQ handler.
+ */
+bool cancel_delayed_work(struct delayed_work *dwork)
+{
+	unsigned long flags;
+	int ret;
+
+	do {
+		ret = try_to_grab_pending(&dwork->work, true, &flags);
+	} while (unlikely(ret == -EAGAIN));
+
+	if (unlikely(ret < 0))
+		return false;
+
+	set_work_cpu_and_clear_pending(&dwork->work, work_cpu(&dwork->work));
+	local_irq_restore(flags);
+	return true;
+}
+EXPORT_SYMBOL(cancel_delayed_work);
+
+/**
  * cancel_delayed_work_sync - cancel a delayed work and wait for it to finish
  * @dwork: the delayed work cancel
  *
-- 
1.7.7.3



* [PATCH 7/7] workqueue: deprecate __cancel_delayed_work()
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
                   ` (5 preceding siblings ...)
  2012-08-08 21:38 ` [PATCH 6/7] workqueue: reimplement cancel_delayed_work() using try_to_grab_pending() Tejun Heo
@ 2012-08-08 21:38 ` Tejun Heo
  2012-08-09 13:21   ` Jens Axboe
  2012-08-13 23:37 ` [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
  2012-08-21 20:30 ` Tejun Heo
  8 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2012-08-08 21:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen,
	Tejun Heo, Jens Axboe, Jiri Kosina, Roland Dreier

Now that cancel_delayed_work() can be safely called from IRQ handlers,
there's no reason to use __cancel_delayed_work().  Use
cancel_delayed_work() instead of __cancel_delayed_work() and mark the
latter deprecated.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Roland Dreier <roland@kernel.org>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 block/blk-core.c              |    2 +-
 drivers/block/floppy.c        |    2 +-
 drivers/infiniband/core/mad.c |    2 +-
 drivers/video/omap2/dss/dsi.c |    2 +-
 include/linux/workqueue.h     |    2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 4b8b606..dc04a90 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -262,7 +262,7 @@ EXPORT_SYMBOL(blk_start_queue);
  **/
 void blk_stop_queue(struct request_queue *q)
 {
-	__cancel_delayed_work(&q->delay_work);
+	cancel_delayed_work(&q->delay_work);
 	queue_flag_set(QUEUE_FLAG_STOPPED, q);
 }
 EXPORT_SYMBOL(blk_stop_queue);
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 55a5bc0..17c675c 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -890,7 +890,7 @@ static void unlock_fdc(void)
 
 	raw_cmd = NULL;
 	command_status = FD_COMMAND_NONE;
-	__cancel_delayed_work(&fd_timeout);
+	cancel_delayed_work(&fd_timeout);
 	do_floppy = NULL;
 	cont = NULL;
 	clear_bit(0, &fdc_busy);
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index b593814..dc3fd1e 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -2004,7 +2004,7 @@ static void adjust_timeout(struct ib_mad_agent_private *mad_agent_priv)
 	unsigned long delay;
 
 	if (list_empty(&mad_agent_priv->wait_list)) {
-		__cancel_delayed_work(&mad_agent_priv->timed_work);
+		cancel_delayed_work(&mad_agent_priv->timed_work);
 	} else {
 		mad_send_wr = list_entry(mad_agent_priv->wait_list.next,
 					 struct ib_mad_send_wr_private,
diff --git a/drivers/video/omap2/dss/dsi.c b/drivers/video/omap2/dss/dsi.c
index fd40f26..05ee046 100644
--- a/drivers/video/omap2/dss/dsi.c
+++ b/drivers/video/omap2/dss/dsi.c
@@ -4306,7 +4306,7 @@ static void dsi_framedone_irq_callback(void *data, u32 mask)
 	 * and is sending the data.
 	 */
 
-	__cancel_delayed_work(&dsi->framedone_timeout_work);
+	cancel_delayed_work(&dsi->framedone_timeout_work);
 
 	dsi_handle_framedone(dsidev, 0);
 }
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 6306157..6cd8f91 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -431,7 +431,7 @@ extern unsigned int work_busy(struct work_struct *work);
  * if it returns 0 the timer function may be running and the queueing is in
  * progress.
  */
-static inline bool __cancel_delayed_work(struct delayed_work *work)
+static inline bool __deprecated __cancel_delayed_work(struct delayed_work *work)
 {
 	bool ret;
 
-- 
1.7.7.3



* Re: [PATCH 7/7] workqueue: deprecate __cancel_delayed_work()
  2012-08-08 21:38 ` [PATCH 7/7] workqueue: deprecate __cancel_delayed_work() Tejun Heo
@ 2012-08-09 13:21   ` Jens Axboe
  0 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2012-08-09 13:21 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, mingo, akpm, tglx, peterz, davem,
	tomi.valkeinen, Jiri Kosina, Roland Dreier

On 08/08/2012 11:38 PM, Tejun Heo wrote:
> Now that cancel_delayed_work() can be safely called from IRQ handlers,
> there's no reason to use __cancel_delayed_work().  Use
> cancel_delayed_work() instead of __cancel_delayed_work() and mark the
> latter deprecated.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Jiri Kosina <jkosina@suse.cz>
> Cc: Roland Dreier <roland@kernel.org>
> Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>

You can add my Acked-by to this one.

-- 
Jens Axboe



* Re: [PATCHSET] workqueue: use irqsafe timer in delayed_work
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
                   ` (6 preceding siblings ...)
  2012-08-08 21:38 ` [PATCH 7/7] workqueue: deprecate __cancel_delayed_work() Tejun Heo
@ 2012-08-13 23:37 ` Tejun Heo
  2012-08-21 20:30 ` Tejun Heo
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-13 23:37 UTC (permalink / raw)
  To: linux-kernel; +Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen

Hello,

On Wed, Aug 08, 2012 at 02:37:55PM -0700, Tejun Heo wrote:
> This patchset makes delayed_work use the irqsafe timer added by the
> pending "timer: clean up initializers and implement irqsafe timers"
> patchset[1].  This enables try_to_grab_pending() to be used from any
> context which in turn makes mod_delayed_work() usable from IRQ
> handlers.  cancel_delayed_work() is reimplemented using
> try_to_grab_pending() so that it also can be used from IRQ handlers
> and its behavior is consistent with other canceling operations.
> __cancel_delayed_work() is no longer necessary and deprecated.

I'll soon push this to linux-next through wq/for-3.7 together with
the irqsafe timer patchset.  Please scream if you don't like that.

Thanks.

-- 
tejun


* Re: [PATCHSET] workqueue: use irqsafe timer in delayed_work
  2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
                   ` (7 preceding siblings ...)
  2012-08-13 23:37 ` [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
@ 2012-08-21 20:30 ` Tejun Heo
  8 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2012-08-21 20:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: torvalds, mingo, akpm, tglx, peterz, davem, tomi.valkeinen

On Wed, Aug 08, 2012 at 02:37:55PM -0700, Tejun Heo wrote:
> This patchset makes delayed_work use the irqsafe timer added by the
> pending "timer: clean up initializers and implement irqsafe timers"
> patchset[1].  This enables try_to_grab_pending() to be used from any
> context which in turn makes mod_delayed_work() usable from IRQ
> handlers.  cancel_delayed_work() is reimplemented using
> try_to_grab_pending() so that it also can be used from IRQ handlers
> and its behavior is consistent with other canceling operations.
> __cancel_delayed_work() is no longer necessary and deprecated.

Applied to wq/for-3.7 after pulling in tip/timers/core.

Thanks.

-- 
tejun


Thread overview: 11+ messages
2012-08-08 21:37 [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
2012-08-08 21:37 ` [PATCH 1/7] workqueue: cosmetic whitespace updates for macro definitions Tejun Heo
2012-08-08 21:37 ` [PATCH 2/7] workqueue: make deferrable delayed_work initializer names consistent Tejun Heo
2012-08-08 21:37 ` [PATCH 3/7] workqueue: clean up delayed_work initializers and add missing one Tejun Heo
2012-08-08 21:37 ` [PATCH 4/7] workqueue: use irqsafe timer for delayed_work Tejun Heo
2012-08-08 21:38 ` [PATCH 5/7] workqueue: use mod_delayed_work() instead of __cancel + queue Tejun Heo
2012-08-08 21:38 ` [PATCH 6/7] workqueue: reimplement cancel_delayed_work() using try_to_grab_pending() Tejun Heo
2012-08-08 21:38 ` [PATCH 7/7] workqueue: deprecate __cancel_delayed_work() Tejun Heo
2012-08-09 13:21   ` Jens Axboe
2012-08-13 23:37 ` [PATCHSET] workqueue: use irqsafe timer in delayed_work Tejun Heo
2012-08-21 20:30 ` Tejun Heo
