linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers()
@ 2012-08-29 16:51 Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 1/9 V3] workqueue: ensure wq_worker_sleeping() sees the right flags Lai Jiangshan
                   ` (8 more replies)
  0 siblings, 9 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

Patches 1-4   fix possible bugs.

Patch 1       fixes a possible double-write bug
Patches 2,5,7 make the waiting logic clearer
Patches 3,4   fix bugs in the manage vs. hotplug interaction
Patches 7,8,9 add explicit logic to wait in the busy-worker rebind path and
	      make rebind_workers() a single pass.

Lai Jiangshan (9):
  workqueue: ensure wq_worker_sleeping() sees the right flags
  workqueue: fix deadlock in rebind_workers()
  workqueue: add POOL_MANAGING_WORKERS
  workqueue: add non_manager_role_manager_mutex_unlock()
  workqueue: move rebind_hold to idle_rebind
  workqueue: simply clear WORKER_REBIND
  workqueue: explicit way to wait for idle workers to finish
  workqueue: single pass rebind_workers
  workqueue: merge the role of rebind_hold to idle_done

 kernel/workqueue.c |  151 +++++++++++++++++++++++++++++++++-------------------
 1 files changed, 96 insertions(+), 55 deletions(-)

-- 
1.7.4.4



* [PATCH 1/9 V3] workqueue: ensure wq_worker_sleeping() sees the right flags
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 2/9 V3] workqueue: fix deadlock in rebind_workers() Lai Jiangshan
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

The compiler may compile this code into TWO read-modify-write instructions:
		worker->flags &= ~WORKER_UNBOUND;
		worker->flags |= WORKER_REBIND;

so another CPU may observe an intermediate value of worker->flags that has
neither WORKER_UNBOUND nor WORKER_REBIND set, and wq_worker_sleeping() will
then wrongly do a local wake-up.

So we explicitly use a single write instruction instead.

This bug cannot occur on idle workers, because they also carry other
WORKER_NOT_RUNNING flags.
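
For illustration, a minimal userspace sketch of the race (the flag values
are only illustrative and the helpers are made up for this example;
ACCESS_ONCE() is shown with its kernel definition):

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

	#define WORKER_REBIND	(1 << 5)	/* illustrative bit values */
	#define WORKER_UNBOUND	(1 << 7)

	static unsigned int flags = WORKER_UNBOUND;

	/* racy: may be emitted as two read-modify-write instructions;
	 * between them another CPU can observe flags with neither bit set */
	static void morph_racy(void)
	{
		flags &= ~WORKER_UNBOUND;
		flags |= WORKER_REBIND;
	}

	/* fixed: compute in a local variable, publish with a single store */
	static void morph_fixed(void)
	{
		unsigned int worker_flags = flags;

		worker_flags &= ~WORKER_UNBOUND;
		worker_flags |= WORKER_REBIND;
		ACCESS_ONCE(flags) = worker_flags;
	}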

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |    7 +++++--
 1 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692d976..4f252d0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1434,10 +1434,13 @@ retry:
 	/* rebind busy workers */
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
+		unsigned long worker_flags = worker->flags;
 
 		/* morph UNBOUND to REBIND */
-		worker->flags &= ~WORKER_UNBOUND;
-		worker->flags |= WORKER_REBIND;
+		worker_flags &= ~WORKER_UNBOUND;
+		worker_flags |= WORKER_REBIND;
+		/* ensure wq_worker_sleeping() sees the right flags */
+		ACCESS_ONCE(worker->flags) = worker_flags;
 
 		if (test_and_set_bit(WORK_STRUCT_PENDING_BIT,
 				     work_data_bits(rebind_work)))
-- 
1.7.4.4



* [PATCH 2/9 V3] workqueue: fix deadlock in rebind_workers()
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 1/9 V3] workqueue: ensure wq_worker_sleeping() sees the right flags Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 3/9 V3] workqueue: add POOL_MANAGING_WORKERS Lai Jiangshan
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan


Current idle_worker_rebind() has a bug.

idle_worker_rebind() path			HOTPLUG path
						online
							rebind_workers()
wait_event(gcwq->rebind_hold)
	woken up but not scheduled yet			rebind_workers() returns (*)
						the same cpu goes offline
						the same cpu comes online again
							rebind_workers()
								set WORKER_REBIND
	scheduled, sees WORKER_REBIND
	waits for rebind_workers()
	to clear it                       <--bug-->		waits for idle_worker_rebind()
								to finish rebinding.

The two threads wait for each other: it is a deadlock.

The fix focuses on (*): rebind_workers() must not return until all idle
workers have finished waiting on gcwq->rebind_hold (that is, until every
idle worker has released its reference to gcwq->rebind_hold). We add
ref_done to do this: rebind_workers() waits on ref_done until all idle
workers have finished their wait.

It is now a three-way handshake.
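
As a rough userspace analogue of the handshake (an assumption-level
sketch using pthread barriers in place of the completion/count pairs;
it is not the kernel code):

	#include <pthread.h>
	#include <stdio.h>

	#define NR_IDLE 4

	static pthread_barrier_t idle_done;	/* 1: all idles rebound */
	static pthread_barrier_t rebind_hold;	/* 2: manager releases idles */
	static pthread_barrier_t ref_done;	/* 3: all idles dropped their ref */

	static void *idle_worker(void *arg)
	{
		/* ... rebind self ... */
		pthread_barrier_wait(&idle_done);
		pthread_barrier_wait(&rebind_hold);
		pthread_barrier_wait(&ref_done);	/* drop our "reference" */
		return NULL;
	}

	int main(void)
	{
		pthread_t tid[NR_IDLE];
		int i;

		pthread_barrier_init(&idle_done, NULL, NR_IDLE + 1);
		pthread_barrier_init(&rebind_hold, NULL, NR_IDLE + 1);
		pthread_barrier_init(&ref_done, NULL, NR_IDLE + 1);

		for (i = 0; i < NR_IDLE; i++)
			pthread_create(&tid[i], NULL, idle_worker, NULL);

		pthread_barrier_wait(&idle_done);	/* all idles rebound */
		pthread_barrier_wait(&rebind_hold);	/* release the idles */
		pthread_barrier_wait(&ref_done);	/* no refs left; only now
							 * may the manager side
							 * "return" */
		for (i = 0; i < NR_IDLE; i++)
			pthread_join(tid[i], NULL);
		printf("rebind complete\n");
		return 0;
	}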

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   61 ++++++++++++++++++++++++++++++++++++++++-----------
 1 files changed, 48 insertions(+), 13 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4f252d0..1363b39 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1305,8 +1305,22 @@ __acquires(&gcwq->lock)
 }
 
 struct idle_rebind {
-	int			cnt;		/* # workers to be rebound */
-	struct completion	done;		/* all workers rebound */
+	int		  idle_cnt;	/* # idle workers to be rebound */
+	struct completion idle_done;	/* all idle workers rebound */
+
+	/*
+	 * notify the rebind_workers() that:
+	 * 0. All workers have left rebind_hold.
+	 * 1. All idle workers are rebound.
+	 * 2. No idle worker holds a ref to this struct.
+	 *
+	 * @ref_cnt: # idle workers holding a ref to this struct
+	 * @ref_done: no idle worker holds a ref to this struct,
+	 *	      nor waits on rebind_hold.
+	 *	      it also implies that all idle workers are rebound.
+	 */
+	int		  ref_cnt;
+	struct completion ref_done;
 };
 
 /*
@@ -1320,12 +1334,18 @@ static void idle_worker_rebind(struct worker *worker)
 
 	/* CPU must be online at this point */
 	WARN_ON(!worker_maybe_bind_and_lock(worker));
-	if (!--worker->idle_rebind->cnt)
-		complete(&worker->idle_rebind->done);
+	if (!--worker->idle_rebind->idle_cnt)
+		complete(&worker->idle_rebind->idle_done);
 	spin_unlock_irq(&worker->pool->gcwq->lock);
 
 	/* we did our part, wait for rebind_workers() to finish up */
 	wait_event(gcwq->rebind_hold, !(worker->flags & WORKER_REBIND));
+
+	/* notify when all idle workers are done (rebound & waited) */
+	spin_lock_irq(&worker->pool->gcwq->lock);
+	if (!--worker->idle_rebind->ref_cnt)
+		complete(&worker->idle_rebind->ref_done);
+	spin_unlock_irq(&worker->pool->gcwq->lock);
 }
 
 /*
@@ -1384,14 +1404,18 @@ static void rebind_workers(struct global_cwq *gcwq)
 		lockdep_assert_held(&pool->manager_mutex);
 
 	/*
-	 * Rebind idle workers.  Interlocked both ways.  We wait for
-	 * workers to rebind via @idle_rebind.done.  Workers will wait for
-	 * us to finish up by watching %WORKER_REBIND.
+	 * Rebind idle workers.  Interlocked both ways in triple waits.
+	 * We wait for workers to rebind via @idle_rebind.idle_done.
+	 * Workers will wait for us to finish up by watching %WORKER_REBIND.
+	 * And then we wait for workers to leave rebind_hold
+	 * via @idle_rebind.ref_done.
 	 */
-	init_completion(&idle_rebind.done);
+	init_completion(&idle_rebind.idle_done);
+	init_completion(&idle_rebind.ref_done);
+	idle_rebind.ref_cnt = 1;
 retry:
-	idle_rebind.cnt = 1;
-	INIT_COMPLETION(idle_rebind.done);
+	idle_rebind.idle_cnt = 1;
+	INIT_COMPLETION(idle_rebind.idle_done);
 
 	/* set REBIND and kick idle ones, we'll wait for these later */
 	for_each_worker_pool(pool, gcwq) {
@@ -1403,7 +1427,8 @@ retry:
 			worker->flags &= ~WORKER_UNBOUND;
 			worker->flags |= WORKER_REBIND;
 
-			idle_rebind.cnt++;
+			idle_rebind.idle_cnt++;
+			idle_rebind.ref_cnt++;
 			worker->idle_rebind = &idle_rebind;
 
 			/* worker_thread() will call idle_worker_rebind() */
@@ -1411,9 +1436,9 @@ retry:
 		}
 	}
 
-	if (--idle_rebind.cnt) {
+	if (--idle_rebind.idle_cnt) {
 		spin_unlock_irq(&gcwq->lock);
-		wait_for_completion(&idle_rebind.done);
+		wait_for_completion(&idle_rebind.idle_done);
 		spin_lock_irq(&gcwq->lock);
 		/* busy ones might have become idle while waiting, retry */
 		goto retry;
@@ -1452,6 +1477,16 @@ retry:
 			    worker->scheduled.next,
 			    work_color_to_flags(WORK_NO_COLOR));
 	}
+
+	/*
+	 * before we leave rebind_workers(), we have to wait until no worker
+	 * has a ref to this idle_rebind nor to rebind_hold.
+	 */
+	if (--idle_rebind.ref_cnt) {
+		spin_unlock_irq(&gcwq->lock);
+		wait_for_completion(&idle_rebind.ref_done);
+		spin_lock_irq(&gcwq->lock);
+	}
 }
 
 static struct worker *alloc_worker(void)
-- 
1.7.4.4



* [PATCH 3/9 V3] workqueue: add POOL_MANAGING_WORKERS
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 1/9 V3] workqueue: ensure wq_worker_sleeping() sees the right flags Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 2/9 V3] workqueue: fix deadlock in rebind_workers() Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 18:21   ` Tejun Heo
  2012-08-29 16:51 ` [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock() Lai Jiangshan
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

When hotplug happens, the hotplug code also grabs the manager_mutex.
This breaks too_many_workers()'s assumption and makes too_many_workers()
misbehave (it kicks the idle timer spuriously; no actual bug has been found).

To avoid corrupting that assumption, we add the original
POOL_MANAGING_WORKERS flag back.
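
A concrete example of the spurious kick (assuming the 2012-era body of
too_many_workers(), i.e. "return nr_idle > 2 && (nr_idle - 2) *
MAX_IDLE_WORKERS_RATIO >= nr_busy" with a ratio of 4; the numbers below
are made up):

	/* pool with 3 workers, 2 of them really idle, nobody managing;
	 * hotplug merely holds manager_mutex */
	bool managing = mutex_is_locked(&pool->manager_mutex);	/* true */
	int nr_idle = pool->nr_idle + managing;			/* 2 + 1 = 3 */
	int nr_busy = pool->nr_workers - nr_idle;		/* 3 - 3 = 0 */

	/* 3 > 2 && (3 - 2) * 4 >= 0  ->  true, so the idle timer is
	 * kicked although nobody is actually managing workers */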

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1363b39..0673598 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -66,6 +66,7 @@ enum {
 
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
+	POOL_MANAGING_WORKERS   = 1 << 1,       /* managing workers */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -652,7 +653,7 @@ static bool need_to_manage_workers(struct worker_pool *pool)
 /* Do we have too many workers and should some go away? */
 static bool too_many_workers(struct worker_pool *pool)
 {
-	bool managing = mutex_is_locked(&pool->manager_mutex);
+	bool managing = pool->flags & POOL_MANAGING_WORKERS;
 	int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
 	int nr_busy = pool->nr_workers - nr_idle;
 
@@ -1836,6 +1837,7 @@ static bool manage_workers(struct worker *worker)
 		return ret;
 
 	pool->flags &= ~POOL_MANAGE_WORKERS;
+	pool->flags |= POOL_MANAGING_WORKERS;
 
 	/*
 	 * Destroy and then create so that may_start_working() is true
@@ -1844,6 +1846,7 @@ static bool manage_workers(struct worker *worker)
 	ret |= maybe_destroy_workers(pool);
 	ret |= maybe_create_worker(pool);
 
+	pool->flags &= ~POOL_MANAGING_WORKERS;
 	mutex_unlock(&pool->manager_mutex);
 	return ret;
 }
-- 
1.7.4.4



* [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock()
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
                   ` (2 preceding siblings ...)
  2012-08-29 16:51 ` [PATCH 3/9 V3] workqueue: add POOL_MANAGING_WORKERS Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 18:25   ` Tejun Heo
  2012-08-29 16:51 ` [PATCH 5/9 V3] workqueue: move rebind_hold to idle_rebind Lai Jiangshan
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

If the hotplug code has grabbed the manager_mutex and a worker_thread
tries to create a worker, manage_workers() will return false and the
worker_thread will go on to process work items. Now, on that CPU, all
workers are processing work items and no idle worker is left/ready for
managing. This breaks the concept of the workqueue, and it is a bug.

So when this case happens, the last idle worker should not go on to
process work; it should go to sleep as usual and wait for normal events.
But it should also be notified when the hotplug code releases the
manager_mutex.

So we add non_manager_role_manager_mutex_unlock() to do this notification.

By the way, if the manager_mutex is grabbed by a real manager,
POOL_MANAGING_WORKERS will be set, and the last idle worker may go on to
process work.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   42 ++++++++++++++++++++++++++++++++++--------
 1 files changed, 34 insertions(+), 8 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0673598..e40898a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1305,6 +1305,24 @@ __acquires(&gcwq->lock)
 	}
 }
 
+/*
+ * Release pool->manager_mutex grabbed by the current thread, which is
+ * not the manager.
+ *
+ * While the current thread held the manager_mutex, it may have caused
+ * a worker_thread that tried to create a worker to go to sleep;
+ * wake one up and let it try to create a worker again or process work.
+ *
+ * CONTEXT:
+ *  spin_lock_irq(gcwq->lock).
+ */
+static void non_manager_role_manager_mutex_unlock(struct worker_pool *pool)
+{
+	mutex_unlock(&pool->manager_mutex);
+
+	if (need_more_worker(pool))
+		wake_up_worker(pool);
+}
+
 struct idle_rebind {
 	int		  idle_cnt;	/* # idle workers to be rebound */
 	struct completion idle_done;	/* all idle workers rebound */
@@ -2136,11 +2154,12 @@ woke_up:
 recheck:
 	/* no more worker necessary? */
 	if (!need_more_worker(pool))
-		goto sleep;
+		goto manage;
 
 	/* do we need to manage? */
-	if (unlikely(!may_start_working(pool)) && manage_workers(worker))
-		goto recheck;
+	if (unlikely(!may_start_working(pool)) &&
+	    !(pool->flags & POOL_MANAGING_WORKERS))
+		goto manage;
 
 	/*
 	 * ->scheduled list can only be filled while a worker is
@@ -2173,13 +2192,20 @@ recheck:
 	} while (keep_working(pool));
 
 	worker_set_flags(worker, WORKER_PREP, false);
-sleep:
+manage:
 	if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
 		goto recheck;
 
 	/*
-	 * gcwq->lock is held and there's no work to process and no
-	 * need to manage, sleep.  Workers are woken up only while
+	 * gcwq->lock is held and we get here in one of these cases:
+	 * case 1) there's no work to process and no need to manage: sleep.
+	 * case 2) there is work to process but this is the last idle worker
+	 *        and it failed to grab the manager_mutex to create a
+	 *        worker: sleep as well!  The current manager_mutex owner
+	 *        will wake it up to process work or do management.
+	 *        See non_manager_role_manager_mutex_unlock().
+	 *
+	 * Workers are woken up only while
 	 * holding gcwq->lock or from local cpu, so setting the
 	 * current state before releasing gcwq->lock is enough to
 	 * prevent losing any event.
@@ -3419,9 +3445,9 @@ static void gcwq_release_management_and_unlock(struct global_cwq *gcwq)
 {
 	struct worker_pool *pool;
 
-	spin_unlock_irq(&gcwq->lock);
 	for_each_worker_pool(pool, gcwq)
-		mutex_unlock(&pool->manager_mutex);
+		non_manager_role_manager_mutex_unlock(pool);
+	spin_unlock_irq(&gcwq->lock);
 }
 
 static void gcwq_unbind_fn(struct work_struct *work)
-- 
1.7.4.4



* [PATCH 5/9 V3] workqueue: move rebind_hold to idle_rebind
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
                   ` (3 preceding siblings ...)
  2012-08-29 16:51 ` [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock() Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 6/9 V3] workqueue: simply clear WORKER_REBIND Lai Jiangshan
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

With the help of @idle_rebind.ref_done, the lifetime of idle_rebind is
extended enough to cover the whole time @rebind_hold is referenced, so
we can move @rebind_hold from the gcwq into idle_rebind.

We also change it to a completion; this is needed to ease the pain of
WORKER_REBIND.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   20 ++++++++------------
 1 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e40898a..eeb5752 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -185,8 +185,6 @@ struct global_cwq {
 						/* L: hash of busy workers */
 
 	struct worker_pool	pools[2];	/* normal and highpri pools */
-
-	wait_queue_head_t	rebind_hold;	/* rebind hold wait */
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -1327,15 +1325,16 @@ struct idle_rebind {
 	int		  idle_cnt;	/* # idle workers to be rebound */
 	struct completion idle_done;	/* all idle workers rebound */
 
+	/* idle workers wait for all idles to be rebound */
+	struct completion rebind_hold;
+
 	/*
 	 * notify the rebind_workers() that:
-	 * 0. All workers have left rebind_hold.
 	 * 1. All idle workers are rebound.
 	 * 2. No idle worker holds a ref to this struct.
 	 *
 	 * @ref_cnt: # idle workers holding a ref to this struct
 	 * @ref_done: no idle worker holds a ref to this struct,
-	 *	      nor waits on rebind_hold.
 	 *	      it also implies that all idle workers are rebound.
 	 */
 	int		  ref_cnt;
@@ -1349,8 +1348,6 @@ struct idle_rebind {
  */
 static void idle_worker_rebind(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->pool->gcwq;
-
 	/* CPU must be online at this point */
 	WARN_ON(!worker_maybe_bind_and_lock(worker));
 	if (!--worker->idle_rebind->idle_cnt)
@@ -1358,7 +1355,7 @@ static void idle_worker_rebind(struct worker *worker)
 	spin_unlock_irq(&worker->pool->gcwq->lock);
 
 	/* we did our part, wait for rebind_workers() to finish up */
-	wait_event(gcwq->rebind_hold, !(worker->flags & WORKER_REBIND));
+	wait_for_completion(&worker->idle_rebind->rebind_hold);
 
 	/* notify when all idle workers are done (rebound & waited) */
 	spin_lock_irq(&worker->pool->gcwq->lock);
@@ -1425,11 +1422,12 @@ static void rebind_workers(struct global_cwq *gcwq)
 	/*
 	 * Rebind idle workers.  Interlocked both ways in triple waits.
 	 * We wait for workers to rebind via @idle_rebind.idle_done.
-	 * Workers will wait for us to finish up by watching %WORKER_REBIND.
+	 * Workers will wait for us via @idle_rebind.rebind_hold.
 	 * And then we wait for workers to leave rebind_hold
 	 * via @idle_rebind.ref_done.
 	 */
 	init_completion(&idle_rebind.idle_done);
+	init_completion(&idle_rebind.rebind_hold);
 	init_completion(&idle_rebind.ref_done);
 	idle_rebind.ref_cnt = 1;
 retry:
@@ -1473,7 +1471,7 @@ retry:
 		list_for_each_entry(worker, &pool->idle_list, entry)
 			worker->flags &= ~WORKER_REBIND;
 
-	wake_up_all(&gcwq->rebind_hold);
+	complete_all(&idle_rebind.rebind_hold);
 
 	/* rebind busy workers */
 	for_each_busy_worker(worker, i, pos, gcwq) {
@@ -1499,7 +1497,7 @@ retry:
 
 	/*
 	 * before we leave rebind_workers(), we have to wait until no worker
-	 * has a ref to this idle_rebind nor to rebind_hold.
+	 * has a ref to this idle_rebind.
 	 */
 	if (--idle_rebind.ref_cnt) {
 		spin_unlock_irq(&gcwq->lock);
@@ -3789,8 +3787,6 @@ static int __init init_workqueues(void)
 			mutex_init(&pool->manager_mutex);
 			ida_init(&pool->worker_ida);
 		}
-
-		init_waitqueue_head(&gcwq->rebind_hold);
 	}
 
 	/* create the initial worker */
-- 
1.7.4.4



* [PATCH 6/9 V3] workqueue: simply clear WORKER_REBIND
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
                   ` (4 preceding siblings ...)
  2012-08-29 16:51 ` [PATCH 5/9 V3] workqueue: move rebind_hold to idle_rebind Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 16:51 ` [PATCH 7/9 V3] workqueue: explicit way to wait for idle workers to finish Lai Jiangshan
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

WORKER_REBIND is not used for any other purpose,
so idle_worker_rebind() can clear it directly.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   13 ++-----------
 1 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index eeb5752..d88aa2e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1350,6 +1350,7 @@ static void idle_worker_rebind(struct worker *worker)
 {
 	/* CPU must be online at this point */
 	WARN_ON(!worker_maybe_bind_and_lock(worker));
+	worker_clr_flags(worker, WORKER_REBIND);
 	if (!--worker->idle_rebind->idle_cnt)
 		complete(&worker->idle_rebind->idle_done);
 	spin_unlock_irq(&worker->pool->gcwq->lock);
@@ -1437,7 +1438,7 @@ retry:
 	/* set REBIND and kick idle ones, we'll wait for these later */
 	for_each_worker_pool(pool, gcwq) {
 		list_for_each_entry(worker, &pool->idle_list, entry) {
-			if (worker->flags & WORKER_REBIND)
+			if (!(worker->flags & WORKER_UNBOUND))
 				continue;
 
 			/* morph UNBOUND to REBIND */
@@ -1461,16 +1462,6 @@ retry:
 		goto retry;
 	}
 
-	/*
-	 * All idle workers are rebound and waiting for %WORKER_REBIND to
-	 * be cleared inside idle_worker_rebind().  Clear and release.
-	 * Clearing %WORKER_REBIND from this foreign context is safe
-	 * because these workers are still guaranteed to be idle.
-	 */
-	for_each_worker_pool(pool, gcwq)
-		list_for_each_entry(worker, &pool->idle_list, entry)
-			worker->flags &= ~WORKER_REBIND;
-
 	complete_all(&idle_rebind.rebind_hold);
 
 	/* rebind busy workers */
-- 
1.7.4.4



* [PATCH 7/9 V3] workqueue: explicit way to wait for idle workers to finish
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
                   ` (5 preceding siblings ...)
  2012-08-29 16:51 ` [PATCH 6/9 V3] workqueue: simply clear WORKER_REBIND Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 18:34   ` Tejun Heo
  2012-08-29 16:51 ` [PATCH 8/9 V3] workqueue: single pass rebind_workers Lai Jiangshan
  2012-08-29 16:52 ` [PATCH 9/9 V3] workqueue: merge the role of rebind_hold to idle_done Lai Jiangshan
  8 siblings, 1 reply; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

busy_worker_rebind_fn() must not return until all idle workers are rebound.
This ordering is currently ensured by rebind_workers().

We use mutex_lock(&worker->pool->manager_mutex) to wait for all idle
workers to be rebound. This is an explicit way to wait, and it eases the
pain in rebind_workers().

The sleeping mutex_lock(&worker->pool->manager_mutex) must be placed at
the top of busy_worker_rebind_fn(), because this busy worker thread may
sleep before WORKER_REBIND is cleared, but must not sleep after
WORKER_REBIND has been cleared.

It adds a small overhead to this unlikely path.
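
The waiting trick as a userspace analogue (a sketch under the assumption
that the hotplug side holds the mutex for the whole idle-rebind phase;
it is not the kernel code):

	#include <pthread.h>

	static pthread_mutex_t manager_mutex = PTHREAD_MUTEX_INITIALIZER;

	/* hotplug side: locks before rebinding idles, unlocks afterwards */

	static void busy_worker_rebind(void)
	{
		/* blocks until the hotplug side unlocks, i.e. until all
		 * idle workers are rebound */
		pthread_mutex_lock(&manager_mutex);
		pthread_mutex_unlock(&manager_mutex);

		/* ... rebind self; all idle workers are rebound by now ... */
	}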

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d88aa2e..719d6ec 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1376,9 +1376,15 @@ static void busy_worker_rebind_fn(struct work_struct *work)
 	struct worker *worker = container_of(work, struct worker, rebind_work);
 	struct global_cwq *gcwq = worker->pool->gcwq;
 
+	/*
+	 * Wait until all idle workers are rebound, by competing on
+	 * pool->manager_mutex.
+	 */
+	mutex_lock(&worker->pool->manager_mutex);
+
 	if (worker_maybe_bind_and_lock(worker))
 		worker_clr_flags(worker, WORKER_REBIND);
-
+	non_manager_role_manager_mutex_unlock(worker->pool);
 	spin_unlock_irq(&gcwq->lock);
 }
 
-- 
1.7.4.4



* [PATCH 8/9 V3] workqueue: single pass rebind_workers
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
                   ` (6 preceding siblings ...)
  2012-08-29 16:51 ` [PATCH 7/9 V3] workqueue: explicit way to wait for idle workers to finish Lai Jiangshan
@ 2012-08-29 16:51 ` Lai Jiangshan
  2012-08-29 18:40   ` Tejun Heo
  2012-08-29 16:52 ` [PATCH 9/9 V3] workqueue: merge the role of rebind_hold to idle_done Lai Jiangshan
  8 siblings, 1 reply; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:51 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

busy_worker_rebind_fn() can't return until all idle workers are rebound;
the code of busy_worker_rebind_fn() now ensures this itself.

So we can change the order of the code in rebind_workers()
and turn rebind_workers() into a single pass.

This makes the code much cleaner and more readable.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   18 +++---------------
 1 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 719d6ec..7e6145b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1437,16 +1437,11 @@ static void rebind_workers(struct global_cwq *gcwq)
 	init_completion(&idle_rebind.rebind_hold);
 	init_completion(&idle_rebind.ref_done);
 	idle_rebind.ref_cnt = 1;
-retry:
 	idle_rebind.idle_cnt = 1;
-	INIT_COMPLETION(idle_rebind.idle_done);
 
 	/* set REBIND and kick idle ones, we'll wait for these later */
 	for_each_worker_pool(pool, gcwq) {
 		list_for_each_entry(worker, &pool->idle_list, entry) {
-			if (!(worker->flags & WORKER_UNBOUND))
-				continue;
-
 			/* morph UNBOUND to REBIND */
 			worker->flags &= ~WORKER_UNBOUND;
 			worker->flags |= WORKER_REBIND;
@@ -1460,16 +1455,6 @@ retry:
 		}
 	}
 
-	if (--idle_rebind.idle_cnt) {
-		spin_unlock_irq(&gcwq->lock);
-		wait_for_completion(&idle_rebind.idle_done);
-		spin_lock_irq(&gcwq->lock);
-		/* busy ones might have become idle while waiting, retry */
-		goto retry;
-	}
-
-	complete_all(&idle_rebind.rebind_hold);
-
 	/* rebind busy workers */
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -1497,7 +1482,10 @@ retry:
 	 * has a ref to this idle_rebind.
 	 */
 	if (--idle_rebind.ref_cnt) {
+		--idle_rebind.idle_cnt;
 		spin_unlock_irq(&gcwq->lock);
+		wait_for_completion(&idle_rebind.idle_done);
+		complete_all(&idle_rebind.rebind_hold);
 		wait_for_completion(&idle_rebind.ref_done);
 		spin_lock_irq(&gcwq->lock);
 	}
-- 
1.7.4.4



* [PATCH 9/9 V3] workqueue: merge the role of rebind_hold to idle_done
  2012-08-29 16:51 [PATCH 0/9 V3] workqueue: fix and cleanup hotplug/rebind_workers() Lai Jiangshan
                   ` (7 preceding siblings ...)
  2012-08-29 16:51 ` [PATCH 8/9 V3] workqueue: single pass rebind_workers Lai Jiangshan
@ 2012-08-29 16:52 ` Lai Jiangshan
  8 siblings, 0 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-29 16:52 UTC (permalink / raw)
  To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan

rebind_workers() is now a single pass, so idle workers can wait on
idle_done instead of on rebind_hold; since all idle workers then wait on
idle_done, the last one must use complete_all() rather than complete().
So we can remove rebind_hold and make the code simpler.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   25 +++++++++----------------
 1 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7e6145b..8253727 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1325,9 +1325,6 @@ struct idle_rebind {
 	int		  idle_cnt;	/* # idle workers to be rebound */
 	struct completion idle_done;	/* all idle workers rebound */
 
-	/* idle workers wait for all idles to be rebound */
-	struct completion rebind_hold;
-
 	/*
 	 * notify the rebind_workers() that:
 	 * 1. All idle workers are rebound.
@@ -1352,11 +1349,11 @@ static void idle_worker_rebind(struct worker *worker)
 	WARN_ON(!worker_maybe_bind_and_lock(worker));
 	worker_clr_flags(worker, WORKER_REBIND);
 	if (!--worker->idle_rebind->idle_cnt)
-		complete(&worker->idle_rebind->idle_done);
+		complete_all(&worker->idle_rebind->idle_done);
 	spin_unlock_irq(&worker->pool->gcwq->lock);
 
-	/* we did our part, wait for rebind_workers() to finish up */
-	wait_for_completion(&worker->idle_rebind->rebind_hold);
+	/* we did our part, wait for all other idles to finish up */
+	wait_for_completion(&worker->idle_rebind->idle_done);
 
 	/* notify when all idle workers are done (rebound & waited) */
 	spin_lock_irq(&worker->pool->gcwq->lock);
@@ -1427,14 +1424,12 @@ static void rebind_workers(struct global_cwq *gcwq)
 		lockdep_assert_held(&pool->manager_mutex);
 
 	/*
-	 * Rebind idle workers.  Interlocked both ways in triple waits.
-	 * We wait for workers to rebind via @idle_rebind.idle_done.
-	 * Workers will wait for us via @idle_rebind.rebind_hold.
-	 * And then we wait for workers to leave rebind_hold
-	 * via @idle_rebind.ref_done.
+	 * Rebind idle workers.
+	 * Workers wait for each other to rebind via @idle_rebind.idle_done.
+	 * We wait for all idle workers to 1) rebind, 2) finish waiting and
+	 * 3) release their ref to @idle_rebind, via @idle_rebind.ref_done.
 	 */
 	init_completion(&idle_rebind.idle_done);
-	init_completion(&idle_rebind.rebind_hold);
 	init_completion(&idle_rebind.ref_done);
 	idle_rebind.ref_cnt = 1;
 	idle_rebind.idle_cnt = 1;
@@ -1478,14 +1473,12 @@ static void rebind_workers(struct global_cwq *gcwq)
 	}
 
 	/*
-	 * before we leave rebind_workers(), we have to wait until no worker
-	 * has a ref to this idle_rebind.
+	 * before we leave rebind_workers(), we have to wait until all idles
+	 * are rebound and have finished waiting.
 	 */
 	if (--idle_rebind.ref_cnt) {
 		--idle_rebind.idle_cnt;
 		spin_unlock_irq(&gcwq->lock);
-		wait_for_completion(&idle_rebind.idle_done);
-		complete_all(&idle_rebind.rebind_hold);
 		wait_for_completion(&idle_rebind.ref_done);
 		spin_lock_irq(&gcwq->lock);
 	}
-- 
1.7.4.4



* Re: [PATCH 3/9 V3] workqueue: add POOL_MANAGING_WORKERS
  2012-08-29 16:51 ` [PATCH 3/9 V3] workqueue: add POOL_MANAGING_WORKERS Lai Jiangshan
@ 2012-08-29 18:21   ` Tejun Heo
  2012-08-30  2:38     ` Lai Jiangshan
  0 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2012-08-29 18:21 UTC (permalink / raw)
  To: Lai Jiangshan; +Cc: linux-kernel

Hello, Lai.

On Thu, Aug 30, 2012 at 12:51:54AM +0800, Lai Jiangshan wrote:
> When hotplug happens, the hotplug code also grabs the manager_mutex.
> This breaks too_many_workers()'s assumption and makes too_many_workers()
> misbehave (it kicks the idle timer spuriously; no actual bug has been found).
> 
> To avoid corrupting that assumption, we add the original
> POOL_MANAGING_WORKERS flag back.

I don't think we're gaining anything with this and I'd like to confine
management state within the mutex only.  If too_many_workers() fires
spuriously while CPU up/down is in progress, just add a comment
explaining why it's a non-problem (actual worker management never
happens while cpu up/down holds the manager position).

Thanks.

-- 
tejun


* Re: [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock()
  2012-08-29 16:51 ` [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock() Lai Jiangshan
@ 2012-08-29 18:25   ` Tejun Heo
  2012-08-30  9:16     ` Lai Jiangshan
  0 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2012-08-29 18:25 UTC (permalink / raw)
  To: Lai Jiangshan; +Cc: linux-kernel

On Thu, Aug 30, 2012 at 12:51:55AM +0800, Lai Jiangshan wrote:
> If the hotplug code has grabbed the manager_mutex and a worker_thread
> tries to create a worker, manage_workers() will return false and the
> worker_thread will go on to process work items. Now, on that CPU, all
> workers are processing work items and no idle worker is left/ready for
> managing. This breaks the concept of the workqueue, and it is a bug.
> 
> So when this case happens, the last idle worker should not go on to
> process work; it should go to sleep as usual and wait for normal events.
> But it should also be notified when the hotplug code releases the
> manager_mutex.
> 
> So we add non_manager_role_manager_mutex_unlock() to do this notification.

Hmmm... how about just running rebind_workers() from a work item?
That way, it would be guaranteed that there always will be an extra
worker available on rebind completion.

Thanks.

-- 
tejun


* Re: [PATCH 7/9 V3] workqueue: explicit way to wait for idle workers to finish
  2012-08-29 16:51 ` [PATCH 7/9 V3] workqueue: explicit way to wait for idle workers to finish Lai Jiangshan
@ 2012-08-29 18:34   ` Tejun Heo
  0 siblings, 0 replies; 18+ messages in thread
From: Tejun Heo @ 2012-08-29 18:34 UTC (permalink / raw)
  To: Lai Jiangshan; +Cc: linux-kernel

Hello,

On Thu, Aug 30, 2012 at 12:51:58AM +0800, Lai Jiangshan wrote:
> busy_worker_rebind_fn() must not return until all idle workers are rebound.
> This ordering is currently ensured by rebind_workers().
> 
> We use mutex_lock(&worker->pool->manager_mutex) to wait for all idle
> workers to be rebound. This is an explicit way to wait, and it eases the
> pain in rebind_workers().
> 
> The sleeping mutex_lock(&worker->pool->manager_mutex) must be placed at
> the top of busy_worker_rebind_fn(), because this busy worker thread may
> sleep before WORKER_REBIND is cleared, but must not sleep after
> WORKER_REBIND has been cleared.

I really can't say I like this overloading of manager_mutex.  CPU
up/down is actually behaving like the manager while it holds the
manager_mutex, at least.  I don't really think this
non-manager-role-manager-mutex usage is worthwhile whatever
simplification it brings to the rebind path.  The rebind path being a
bit ugly / complex is better than it muddling the non-cpu-hotplug path
further.

Thanks.

-- 
tejun


* Re: [PATCH 8/9 V3] workqueue: single pass rebind_workers
  2012-08-29 16:51 ` [PATCH 8/9 V3] workqueue: single pass rebind_workers Lai Jiangshan
@ 2012-08-29 18:40   ` Tejun Heo
  0 siblings, 0 replies; 18+ messages in thread
From: Tejun Heo @ 2012-08-29 18:40 UTC (permalink / raw)
  To: Lai Jiangshan; +Cc: linux-kernel

Hello, Lai.

On Thu, Aug 30, 2012 at 12:51:59AM +0800, Lai Jiangshan wrote:
> busy_worker_rebind_fn() can't return until all idle workers are rebound;
> the code of busy_worker_rebind_fn() now ensures this itself.
> 
> So we can change the order of the code in rebind_workers()
> and turn rebind_workers() into a single pass.
> 
> This makes the code much cleaner and more readable.

But, yeah, I do like this approach better.  Would it be possible to do
this without overloading manager_mutex?

Thanks.

-- 
tejun


* Re: [PATCH 3/9 V3] workqueue: add POOL_MANAGING_WORKERS
  2012-08-29 18:21   ` Tejun Heo
@ 2012-08-30  2:38     ` Lai Jiangshan
  0 siblings, 0 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-30  2:38 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel

On 08/30/2012 02:21 AM, Tejun Heo wrote:
> Hello, Lai.
> 
> On Thu, Aug 30, 2012 at 12:51:54AM +0800, Lai Jiangshan wrote:
>> When hotplug happens, the hotplug code also grabs the manager_mutex.
>> This breaks too_many_workers()'s assumption and makes too_many_workers()
>> misbehave (it kicks the idle timer spuriously; no actual bug has been found).
>>
>> To avoid corrupting that assumption, we add the original
>> POOL_MANAGING_WORKERS flag back.
> 
> I don't think we're gaining anything with this and I'd like to confine
> management state within the mutex only.  If too_many_workers() fires
> spuriously while CPU up/down is in progress, just add a comment
> explaining why it's a non-problem

OK, I'll drop this patch. Could you add the comment? I'm not good at English.

> (actual worker management never
> happens while cpu up/down holds the manager position).
> 

I don't agree with this claim. It happens "rarely", not "never"; otherwise I missed something.


Thanks,
Lai


* Re: [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock()
  2012-08-29 18:25   ` Tejun Heo
@ 2012-08-30  9:16     ` Lai Jiangshan
  2012-08-30  9:17       ` Tejun Heo
  0 siblings, 1 reply; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-30  9:16 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel

On 08/30/2012 02:25 AM, Tejun Heo wrote:
> On Thu, Aug 30, 2012 at 12:51:55AM +0800, Lai Jiangshan wrote:
>> If the hotplug code has grabbed the manager_mutex and a worker_thread
>> tries to create a worker, manage_workers() will return false and the
>> worker_thread will go on to process work items. Now, on that CPU, all
>> workers are processing work items and no idle worker is left/ready for
>> managing. This breaks the concept of the workqueue, and it is a bug.
>>
>> So when this case happens, the last idle worker should not go on to
>> process work; it should go to sleep as usual and wait for normal events.
>> But it should also be notified when the hotplug code releases the
>> manager_mutex.
>>
>> So we add non_manager_role_manager_mutex_unlock() to do this notification.
> 
> Hmmm... how about just running rebind_workers() from a work item?
> That way, it would be guaranteed that there always will be an extra
> worker available on rebind completion.
> 
> Thanks.
> 

gcwq_unbind_fn() is unsafe even if it is called from a work item,
so we need non_manager_role_manager_mutex_unlock().

If rebind_workers() is called from a work item, it is safe when there
are no CPU_INTENSIVE items. But we can't rule out CPU_INTENSIVE items,
so it is still unsafe, and we need non_manager_role_manager_mutex_unlock()
too.

The non_manager_role_manager_mutex_unlock() approach is a good fix.
I'm also writing a V4 patch/approach to fix it; it is a little more
complicated, but it has some benefits over the
non_manager_role_manager_mutex_unlock() approach.

Thanks.
Lai


* Re: [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock()
  2012-08-30  9:16     ` Lai Jiangshan
@ 2012-08-30  9:17       ` Tejun Heo
  2012-08-31  1:08         ` Lai Jiangshan
  0 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2012-08-30  9:17 UTC (permalink / raw)
  To: Lai Jiangshan; +Cc: linux-kernel

Hello, Lai.

On Thu, Aug 30, 2012 at 05:16:01PM +0800, Lai Jiangshan wrote:
> gcwq_unbind_fn() is unsafe even if it is called from a work item,
> so we need non_manager_role_manager_mutex_unlock().
> 
> If rebind_workers() is called from a work item, it is safe when there
> are no CPU_INTENSIVE items. But we can't rule out CPU_INTENSIVE items,
> so it is still unsafe, and we need non_manager_role_manager_mutex_unlock()
> too.

Can you please elaborate?  Why is it not safe if there are
CPU_INTENSIVE items?

Thanks.

-- 
tejun


* Re: [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock()
  2012-08-30  9:17       ` Tejun Heo
@ 2012-08-31  1:08         ` Lai Jiangshan
  0 siblings, 0 replies; 18+ messages in thread
From: Lai Jiangshan @ 2012-08-31  1:08 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel

On 08/30/2012 05:17 PM, Tejun Heo wrote:
> Hello, Lai.
> 
> On Thu, Aug 30, 2012 at 05:16:01PM +0800, Lai Jiangshan wrote:
>> gcwq_unbind_fn() is unsafe even if it is called from a work item,
>> so we need non_manager_role_manager_mutex_unlock().
>>
>> If rebind_workers() is called from a work item, it is safe when there
>> are no CPU_INTENSIVE items. But we can't rule out CPU_INTENSIVE items,
>> so it is still unsafe, and we need non_manager_role_manager_mutex_unlock()
>> too.
> 
> Can you please elaborate?  Why is it not safe if there are
> CPU_INTENSIVE items?
> 
> Thanks.
> 

Imagine there are only two workers; both have the UNBOUND bit set because
rebind_workers() has not been called yet. The first one is processing work
items, the second one is idle. When the first one encounters the
rebind_workers() work item and handles it, at the same time the second one
tries to create workers, fails, and goes off to process work items too.
But, unluckily, the second one encounters a CPU_INTENSIVE item, so
nr_running is still <= 1 after the first one finishes rebind_workers().

							contribution to nr_running
first one:	processes work items endlessly			+0 or +1
second one:	processes the CPU_INTENSIVE item endlessly	+0

No worker is left to serve the manager role.

Thanks.
Lai
