* [PATCH 0/6] workqueue: async idle rebinding and cleanup for hotplug
From: Lai Jiangshan @ 2012-09-15 8:13 UTC
To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan
Patch 1 replaces the synchronous idle-worker rebinding algorithm with an
asynchronous one. The synchronous algorithm requires three handshakes and
introduces a lot of complexity.
The new async algorithm does no synchronization at all; instead it removes
the idle workers from idle_list, which is enough to keep local wake-up
correct.
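For illustration, here is the exile pattern condensed from patch 1's diff
(a hedged sketch; locking and flag handling are simplified, not verbatim
kernel code):

	/* hotplug side (rebind_workers()): kick and exile each idle worker */
	list_for_each_entry_safe(worker, n, &pool->idle_list, entry) {
		list_del_init(&worker->entry);	/* exile: off idle_list */
		wake_up_process(worker->task);	/* ask it to rebind itself */
	}

	/* worker side (idle_worker_rebind()): rebind, then come home */
	if (worker_maybe_bind_and_lock(worker))
		worker_clr_flags(worker, WORKER_UNBOUND);
	list_add(&worker->entry, &worker->pool->idle_list);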
Patches 2-6 are simple cleanups.
Lai Jiangshan (6):
workqueue: async idle rebinding
workqueue: busy rebinding no longer needs WORKER_REBIND
workqueue: remove WORKER_REBIND
workqueue: rename manager_mutex to assoc_mutex
workqueue: use __cpuinit instead of __devinit for cpu callback
workqueue: use hotcpu_notifier() for workqueue_cpu_down_callback()
kernel/workqueue.c | 202 ++++++++++++++-------------------------------------
1 files changed, 56 insertions(+), 146 deletions(-)
--
1.7.4.4
* [PATCH 1/6] workqueue: async idle rebinding
From: Lai Jiangshan @ 2012-09-15 8:13 UTC
To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan
Currently rebind_workers() uses a synchronous algorithm for idle workers,
because the workers in idle_list must be rebound before any local wake-up.
But the synchronous algorithm adds a lot of complication, so we look for
another way to handle local wake-up and rebind idle workers asynchronously.
To avoid doing a local wake-up wrongly, we add an exile operation for
idle-worker rebinding: while an idle worker is exiled, it is not queued
on @idle_list until it has rebound.
When a CPU comes online, rebind_workers() just
notifies the idle workers to rebind (A), and
exiles the idle workers (removes them from idle_list) (B).
Every idle worker then rebinds itself (^A) and adds itself back to idle_list (^B).
A and ^A ensure that all idle workers rebind after the CPU is up.
B and ^B ensure that all workers in idle_list are bound when !GCWQ_DISASSOCIATED.
With the exile operation, any bound worker can safely do a local wake-up
while idle_list is not empty, because no worker can add itself back to
idle_list until it has rebound.
With the exile operation, @nr_idle counts not only @idle_list but also the
exiled idle workers, so I checked all the code that accesses @nr_idle or
@idle_list and made it aware of the exile operation (only too_many_workers()
needed changing as a result).
rebind_workers() becomes single pass and doesn't release gcwq->lock in between.
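To see why the exile makes local wake-up safe, recall how the scheduler
hook picks a worker to wake. A simplified sketch of first_worker() and
wq_worker_sleeping() as they look in this era's workqueue.c (paraphrased
from memory, not verbatim):

	static struct worker *first_worker(struct worker_pool *pool)
	{
		if (unlikely(list_empty(&pool->idle_list)))
			return NULL;
		return list_first_entry(&pool->idle_list, struct worker, entry);
	}

	/* the scheduler calls this when a worker sleeps; the returned task
	 * is woken on the local CPU, so it had better be bound here */
	struct task_struct *wq_worker_sleeping(struct task_struct *task,
					       unsigned int cpu)
	{
		struct worker *worker = kthread_data(task), *to_wakeup = NULL;
		struct worker_pool *pool = worker->pool;

		if (worker->flags & WORKER_NOT_RUNNING)
			return NULL;

		if (atomic_dec_and_test(get_pool_nr_running(pool)) &&
		    !list_empty(&pool->worklist))
			to_wakeup = first_worker(pool);
		return to_wakeup ? to_wakeup->task : NULL;
	}

Since exiled (still-unbound) workers are off idle_list, first_worker() can
only hand back a worker that is already bound to the local CPU.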
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 126 +++++++++++-----------------------------------------
1 files changed, 26 insertions(+), 100 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7b91332..e8b28d6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -126,7 +126,6 @@ enum {
struct global_cwq;
struct worker_pool;
-struct idle_rebind;
/*
* The poor guys doing the actual heavy lifting. All on-duty workers
@@ -150,7 +149,6 @@ struct worker {
int id; /* I: worker id */
/* for rebinding worker to CPU */
- struct idle_rebind *idle_rebind; /* L: for idle worker */
struct work_struct rebind_work; /* L: for busy worker */
};
@@ -160,7 +158,9 @@ struct worker_pool {
struct list_head worklist; /* L: list of pending works */
int nr_workers; /* L: total number of workers */
- int nr_idle; /* L: currently idle ones */
+ int nr_idle; /* L: currently idle ones,
+ including ones in idle_list
+ and ones doing rebind. */
struct list_head idle_list; /* X: list of idle workers */
struct timer_list idle_timer; /* L: worker idle timeout */
@@ -186,8 +186,6 @@ struct global_cwq {
struct worker_pool pools[NR_WORKER_POOLS];
/* normal and highpri pools */
-
- wait_queue_head_t rebind_hold; /* rebind hold wait */
} ____cacheline_aligned_in_smp;
/*
@@ -687,6 +685,10 @@ static bool too_many_workers(struct worker_pool *pool)
int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
int nr_busy = pool->nr_workers - nr_idle;
+ /* Is any idle worker home? */
+ if (unlikely(list_empty(&pool->idle_list)))
+ return false;
+
return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
}
@@ -1611,37 +1613,20 @@ __acquires(&gcwq->lock)
}
}
-struct idle_rebind {
- int cnt; /* # workers to be rebound */
- struct completion done; /* all workers rebound */
-};
-
/*
- * Rebind an idle @worker to its CPU. During CPU onlining, this has to
- * happen synchronously for idle workers. worker_thread() will test
+ * Rebind an idle @worker to its CPU. worker_thread() will test
* %WORKER_REBIND before leaving idle and call this function.
*/
static void idle_worker_rebind(struct worker *worker)
{
struct global_cwq *gcwq = worker->pool->gcwq;
- /* CPU must be online at this point */
- WARN_ON(!worker_maybe_bind_and_lock(worker));
- if (!--worker->idle_rebind->cnt)
- complete(&worker->idle_rebind->done);
- spin_unlock_irq(&worker->pool->gcwq->lock);
-
- /* we did our part, wait for rebind_workers() to finish up */
- wait_event(gcwq->rebind_hold, !(worker->flags & WORKER_REBIND));
+ if (worker_maybe_bind_and_lock(worker))
+ worker_clr_flags(worker, WORKER_UNBOUND);
- /*
- * rebind_workers() shouldn't finish until all workers passed the
- * above WORKER_REBIND wait. Tell it when done.
- */
- spin_lock_irq(&worker->pool->gcwq->lock);
- if (!--worker->idle_rebind->cnt)
- complete(&worker->idle_rebind->done);
- spin_unlock_irq(&worker->pool->gcwq->lock);
+ worker_clr_flags(worker, WORKER_REBIND);
+ list_add(&worker->entry, &worker->pool->idle_list);
+ spin_unlock_irq(&gcwq->lock);
}
/*
@@ -1668,29 +1653,21 @@ static void busy_worker_rebind_fn(struct work_struct *work)
* @gcwq->cpu is coming online. Rebind all workers to the CPU. Rebinding
* is different for idle and busy ones.
*
- * The idle ones should be rebound synchronously and idle rebinding should
- * be complete before any worker starts executing work items with
- * concurrency management enabled; otherwise, scheduler may oops trying to
- * wake up non-local idle worker from wq_worker_sleeping().
- *
- * This is achieved by repeatedly requesting rebinding until all idle
- * workers are known to have been rebound under @gcwq->lock and holding all
- * idle workers from becoming busy until idle rebinding is complete.
+ * The idle ones will be kicked out of the idle_list, and each will
+ * add itself back once it finishes rebinding.
*
- * Once idle workers are rebound, busy workers can be rebound as they
- * finish executing their current work items. Queueing the rebind work at
- * the head of their scheduled lists is enough. Note that nr_running will
+ * The busy workers can be rebound as they finish executing
+ * their current work items. Queueing the rebind work at the head of
+ * their scheduled lists is enough. Note that nr_running will
* be properbly bumped as busy workers rebind.
*
- * On return, all workers are guaranteed to either be bound or have rebind
- * work item scheduled.
+ * On return, all workers are guaranteed to have rebound by the time
+ * they add themselves back to idle_list, if the gcwq is still associated.
*/
static void rebind_workers(struct global_cwq *gcwq)
- __releases(&gcwq->lock) __acquires(&gcwq->lock)
{
- struct idle_rebind idle_rebind;
struct worker_pool *pool;
- struct worker *worker;
+ struct worker *worker, *n;
struct hlist_node *pos;
int i;
@@ -1699,46 +1676,25 @@ static void rebind_workers(struct global_cwq *gcwq)
for_each_worker_pool(pool, gcwq)
lockdep_assert_held(&pool->manager_mutex);
- /*
- * Rebind idle workers. Interlocked both ways. We wait for
- * workers to rebind via @idle_rebind.done. Workers will wait for
- * us to finish up by watching %WORKER_REBIND.
- */
- init_completion(&idle_rebind.done);
-retry:
- idle_rebind.cnt = 1;
- INIT_COMPLETION(idle_rebind.done);
-
- /* set REBIND and kick idle ones, we'll wait for these later */
+ /* set REBIND and kick idle ones */
for_each_worker_pool(pool, gcwq) {
- list_for_each_entry(worker, &pool->idle_list, entry) {
+ list_for_each_entry_safe(worker, n, &pool->idle_list, entry) {
unsigned long worker_flags = worker->flags;
- if (worker->flags & WORKER_REBIND)
- continue;
-
/* morph UNBOUND to REBIND atomically */
worker_flags &= ~WORKER_UNBOUND;
worker_flags |= WORKER_REBIND;
ACCESS_ONCE(worker->flags) = worker_flags;
- idle_rebind.cnt++;
- worker->idle_rebind = &idle_rebind;
+ /* exile idle workers */
+ list_del_init(&worker->entry);
/* worker_thread() will call idle_worker_rebind() */
wake_up_process(worker->task);
}
}
- if (--idle_rebind.cnt) {
- spin_unlock_irq(&gcwq->lock);
- wait_for_completion(&idle_rebind.done);
- spin_lock_irq(&gcwq->lock);
- /* busy ones might have become idle while waiting, retry */
- goto retry;
- }
-
- /* all idle workers are rebound, rebind busy workers */
+ /* rebind busy workers */
for_each_busy_worker(worker, i, pos, gcwq) {
unsigned long worker_flags = worker->flags;
struct work_struct *rebind_work = &worker->rebind_work;
@@ -1768,34 +1724,6 @@ retry:
worker->scheduled.next,
work_color_to_flags(WORK_NO_COLOR));
}
-
- /*
- * All idle workers are rebound and waiting for %WORKER_REBIND to
- * be cleared inside idle_worker_rebind(). Clear and release.
- * Clearing %WORKER_REBIND from this foreign context is safe
- * because these workers are still guaranteed to be idle.
- *
- * We need to make sure all idle workers passed WORKER_REBIND wait
- * in idle_worker_rebind() before returning; otherwise, workers can
- * get stuck at the wait if hotplug cycle repeats.
- */
- idle_rebind.cnt = 1;
- INIT_COMPLETION(idle_rebind.done);
-
- for_each_worker_pool(pool, gcwq) {
- list_for_each_entry(worker, &pool->idle_list, entry) {
- worker->flags &= ~WORKER_REBIND;
- idle_rebind.cnt++;
- }
- }
-
- wake_up_all(&gcwq->rebind_hold);
-
- if (--idle_rebind.cnt) {
- spin_unlock_irq(&gcwq->lock);
- wait_for_completion(&idle_rebind.done);
- spin_lock_irq(&gcwq->lock);
- }
}
static struct worker *alloc_worker(void)
@@ -3908,8 +3836,6 @@ static int __init init_workqueues(void)
mutex_init(&pool->manager_mutex);
ida_init(&pool->worker_ida);
}
-
- init_waitqueue_head(&gcwq->rebind_hold);
}
/* create the initial worker */
--
1.7.4.4
* [PATCH 2/6] workqueue: busy rebinding no longer needs WORKER_REBIND
From: Lai Jiangshan @ 2012-09-15 8:13 UTC
To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan
The old busy_worker_rebind_fn() had to wait until all idle workers finished
rebinding, so two flags, WORKER_UNBOUND and WORKER_REBIND, were needed to
avoid prematurely clearing all NOT_RUNNING bits under highly frequent
offline/online cycles.
The current code no longer waits for idle workers or releases gcwq->lock
while rebinding in rebind_workers(), so one flag is enough. Remove
WORKER_REBIND from busy rebinding.
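The accounting that made the two flags necessary lives in
worker_clr_flags(): clearing the last NOT_RUNNING bit bumps nr_running, so
clearing WORKER_UNBOUND too early corrupts the count. A sketch of that
helper, paraphrased from this era's workqueue.c (not verbatim):

	static inline void worker_clr_flags(struct worker *worker,
					    unsigned int flags)
	{
		struct worker_pool *pool = worker->pool;
		unsigned int oflags = worker->flags;

		worker->flags &= ~flags;

		/* if transitioning out of NOT_RUNNING, bump nr_running;
		 * NOT_RUNNING is a mask of several flags, not one flag */
		if ((flags & WORKER_NOT_RUNNING) && (oflags & WORKER_NOT_RUNNING))
			if (!(worker->flags & WORKER_NOT_RUNNING))
				atomic_inc(get_pool_nr_running(pool));
	}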
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 8 +-------
1 files changed, 1 insertions(+), 7 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e8b28d6..4b1ff46 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1641,7 +1641,7 @@ static void busy_worker_rebind_fn(struct work_struct *work)
struct global_cwq *gcwq = worker->pool->gcwq;
if (worker_maybe_bind_and_lock(worker))
- worker_clr_flags(worker, WORKER_REBIND);
+ worker_clr_flags(worker, WORKER_UNBOUND);
spin_unlock_irq(&gcwq->lock);
}
@@ -1696,15 +1696,9 @@ static void rebind_workers(struct global_cwq *gcwq)
/* rebind busy workers */
for_each_busy_worker(worker, i, pos, gcwq) {
- unsigned long worker_flags = worker->flags;
struct work_struct *rebind_work = &worker->rebind_work;
struct workqueue_struct *wq;
- /* morph UNBOUND to REBIND atomically */
- worker_flags &= ~WORKER_UNBOUND;
- worker_flags |= WORKER_REBIND;
- ACCESS_ONCE(worker->flags) = worker_flags;
-
if (test_and_set_bit(WORK_STRUCT_PENDING_BIT,
work_data_bits(rebind_work)))
continue;
--
1.7.4.4
* [PATCH 3/6] workqueue: remove WORKER_REBIND
From: Lai Jiangshan @ 2012-09-15 8:13 UTC
To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan
We use list_del_init(&worker->entry) when rebinding (exiling) idle workers
or destroying them, so list_empty(&worker->entry) tells us whether a worker
has been exiled or killed.
WORKER_REBIND is not needed any more; remove it to reduce the number of
worker states.
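This relies on the list_del_init()/list_empty() idiom: unlike list_del(),
which poisons the pointers, list_del_init() leaves the entry linked to
itself, so the owner can later test membership. A minimal illustration
(the helper name is hypothetical, not in the patch):

	/* valid only because exile/destroy use list_del_init(), never
	 * plain list_del(), on worker->entry */
	static bool worker_off_idle_list(struct worker *worker)
	{
		return list_empty(&worker->entry); /* true iff exiled or killed */
	}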
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 26 ++++++++------------------
1 files changed, 8 insertions(+), 18 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4b1ff46..bc846b1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -73,11 +73,10 @@ enum {
WORKER_DIE = 1 << 1, /* die die die */
WORKER_IDLE = 1 << 2, /* is idle */
WORKER_PREP = 1 << 3, /* preparing to run works */
- WORKER_REBIND = 1 << 5, /* mom is home, come back */
WORKER_CPU_INTENSIVE = 1 << 6, /* cpu intensive */
WORKER_UNBOUND = 1 << 7, /* worker is unbound */
- WORKER_NOT_RUNNING = WORKER_PREP | WORKER_REBIND | WORKER_UNBOUND |
+ WORKER_NOT_RUNNING = WORKER_PREP | WORKER_UNBOUND |
WORKER_CPU_INTENSIVE,
NR_WORKER_POOLS = 2, /* # worker pools per gcwq */
@@ -1615,7 +1614,7 @@ __acquires(&gcwq->lock)
/*
* Rebind an idle @worker to its CPU. worker_thread() will test
- * %WORKER_REBIND before leaving idle and call this function.
+ * worker->entry before leaving idle and call this function.
*/
static void idle_worker_rebind(struct worker *worker)
{
@@ -1624,7 +1623,6 @@ static void idle_worker_rebind(struct worker *worker)
if (worker_maybe_bind_and_lock(worker))
worker_clr_flags(worker, WORKER_UNBOUND);
- worker_clr_flags(worker, WORKER_REBIND);
list_add(&worker->entry, &worker->pool->idle_list);
spin_unlock_irq(&gcwq->lock);
}
@@ -1676,16 +1674,9 @@ static void rebind_workers(struct global_cwq *gcwq)
for_each_worker_pool(pool, gcwq)
lockdep_assert_held(&pool->manager_mutex);
- /* set REBIND and kick idle ones */
+ /* exile and kick idle ones */
for_each_worker_pool(pool, gcwq) {
list_for_each_entry_safe(worker, n, &pool->idle_list, entry) {
- unsigned long worker_flags = worker->flags;
-
- /* morph UNBOUND to REBIND atomically */
- worker_flags &= ~WORKER_UNBOUND;
- worker_flags |= WORKER_REBIND;
- ACCESS_ONCE(worker->flags) = worker_flags;
-
/* exile idle workers */
list_del_init(&worker->entry);
@@ -2159,7 +2150,7 @@ __acquires(&gcwq->lock)
* necessary to avoid spurious warnings from rescuers servicing the
* unbound or a disassociated gcwq.
*/
- WARN_ON_ONCE(!(worker->flags & (WORKER_UNBOUND | WORKER_REBIND)) &&
+ WARN_ON_ONCE(!(worker->flags & WORKER_UNBOUND) &&
!(gcwq->flags & GCWQ_DISASSOCIATED) &&
raw_smp_processor_id() != gcwq->cpu);
@@ -2283,18 +2274,17 @@ static int worker_thread(void *__worker)
woke_up:
spin_lock_irq(&gcwq->lock);
- /*
- * DIE can be set only while idle and REBIND set while busy has
- * @worker->rebind_work scheduled. Checking here is enough.
- */
- if (unlikely(worker->flags & (WORKER_REBIND | WORKER_DIE))) {
+ /* Is it still in idle_list? */
+ if (unlikely(list_empty(&worker->entry))) {
spin_unlock_irq(&gcwq->lock);
+ /* reason: killed */
if (worker->flags & WORKER_DIE) {
worker->task->flags &= ~PF_WQ_WORKER;
return 0;
}
+ /* reason: idle rebind */
idle_worker_rebind(worker);
goto woke_up;
}
--
1.7.4.4
* [PATCH 4/6] workqueue: rename manager_mutex to assoc_mutex
From: Lai Jiangshan @ 2012-09-15 8:13 UTC
To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan
The name manager_mutex doesn't say what is being protected, and can even
mislead. It actually protects the CPU association of the gcwq, so renaming
it to assoc_mutex is clearer.
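A hedged sketch of the invariant the renamed mutex guards, per the comment
updated in the diff below (GCWQ_DISASSOCIATED may only flip while the
assoc_mutex of every pool on the gcwq is held):

	/* holding one pool's assoc_mutex pins the gcwq's CPU association */
	mutex_lock(&pool->assoc_mutex);
	if (gcwq->flags & GCWQ_DISASSOCIATED) {
		/* gcwq is unbound: workers may run on any CPU */
	} else {
		/* gcwq is associated: bind new workers to gcwq->cpu */
	}
	mutex_unlock(&pool->assoc_mutex);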
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 40 ++++++++++++++++++++--------------------
1 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bc846b1..5438c0d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -58,7 +58,7 @@ enum {
* be executing on any CPU. The gcwq behaves as an unbound one.
*
* Note that DISASSOCIATED can be flipped only while holding
- * managership of all pools on the gcwq to avoid changing binding
+ * assoc_mutex of all pools on the gcwq to avoid changing binding
* state while create_worker() is in progress.
*/
GCWQ_DISASSOCIATED = 1 << 0, /* cpu can't serve workers */
@@ -165,7 +165,7 @@ struct worker_pool {
struct timer_list idle_timer; /* L: worker idle timeout */
struct timer_list mayday_timer; /* L: SOS timer for workers */
- struct mutex manager_mutex; /* mutex manager should hold */
+ struct mutex assoc_mutex; /* protect GCWQ_DISASSOCIATED */
struct ida worker_ida; /* L: for worker IDs */
};
@@ -1672,7 +1672,7 @@ static void rebind_workers(struct global_cwq *gcwq)
lockdep_assert_held(&gcwq->lock);
for_each_worker_pool(pool, gcwq)
- lockdep_assert_held(&pool->manager_mutex);
+ lockdep_assert_held(&pool->assoc_mutex);
/* exile and kick idle ones */
for_each_worker_pool(pool, gcwq) {
@@ -2061,26 +2061,26 @@ static bool manage_workers(struct worker *worker)
/*
* To simplify both worker management and CPU hotplug, hold off
- * management while hotplug is in progress. CPU hotplug path can't
+ * assoc_mutex while management is in progress. CPU hotplug path can't
* grab %POOL_MANAGING_WORKERS to achieve this because that can
* lead to idle worker depletion (all become busy thinking someone
* else is managing) which in turn can result in deadlock under
- * extreme circumstances. Use @pool->manager_mutex to synchronize
+ * extreme circumstances. Use @pool->assoc_mutex to synchronize
* manager against CPU hotplug.
*
- * manager_mutex would always be free unless CPU hotplug is in
+ * assoc_mutex would always be free unless CPU hotplug is in
* progress. trylock first without dropping @gcwq->lock.
*/
- if (unlikely(!mutex_trylock(&pool->manager_mutex))) {
+ if (unlikely(!mutex_trylock(&pool->assoc_mutex))) {
spin_unlock_irq(&pool->gcwq->lock);
- mutex_lock(&pool->manager_mutex);
+ mutex_lock(&pool->assoc_mutex);
/*
* CPU hotplug could have happened while we were waiting
- * for manager_mutex. Hotplug itself can't handle us
+ * for assoc_mutex. Hotplug itself can't handle us
* because manager isn't either on idle or busy list, and
* @gcwq's state and ours could have deviated.
*
- * As hotplug is now excluded via manager_mutex, we can
+ * As hotplug is now excluded via assoc_mutex, we can
* simply try to bind. It will succeed or fail depending
* on @gcwq's current state. Try it and adjust
* %WORKER_UNBOUND accordingly.
@@ -2103,7 +2103,7 @@ static bool manage_workers(struct worker *worker)
ret |= maybe_create_worker(pool);
pool->flags &= ~POOL_MANAGING_WORKERS;
- mutex_unlock(&pool->manager_mutex);
+ mutex_unlock(&pool->assoc_mutex);
return ret;
}
@@ -3458,23 +3458,23 @@ EXPORT_SYMBOL_GPL(work_busy);
*/
/* claim manager positions of all pools */
-static void gcwq_claim_management_and_lock(struct global_cwq *gcwq)
+static void gcwq_claim_assoc_and_lock(struct global_cwq *gcwq)
{
struct worker_pool *pool;
for_each_worker_pool(pool, gcwq)
- mutex_lock_nested(&pool->manager_mutex, pool - gcwq->pools);
+ mutex_lock_nested(&pool->assoc_mutex, pool - gcwq->pools);
spin_lock_irq(&gcwq->lock);
}
/* release manager positions */
-static void gcwq_release_management_and_unlock(struct global_cwq *gcwq)
+static void gcwq_release_assoc_and_unlock(struct global_cwq *gcwq)
{
struct worker_pool *pool;
spin_unlock_irq(&gcwq->lock);
for_each_worker_pool(pool, gcwq)
- mutex_unlock(&pool->manager_mutex);
+ mutex_unlock(&pool->assoc_mutex);
}
static void gcwq_unbind_fn(struct work_struct *work)
@@ -3487,7 +3487,7 @@ static void gcwq_unbind_fn(struct work_struct *work)
BUG_ON(gcwq->cpu != smp_processor_id());
- gcwq_claim_management_and_lock(gcwq);
+ gcwq_claim_assoc_and_lock(gcwq);
/*
* We've claimed all manager positions. Make all workers unbound
@@ -3504,7 +3504,7 @@ static void gcwq_unbind_fn(struct work_struct *work)
gcwq->flags |= GCWQ_DISASSOCIATED;
- gcwq_release_management_and_unlock(gcwq);
+ gcwq_release_assoc_and_unlock(gcwq);
/*
* Call schedule() so that we cross rq->lock and thus can guarantee
@@ -3560,10 +3560,10 @@ static int __devinit workqueue_cpu_up_callback(struct notifier_block *nfb,
case CPU_DOWN_FAILED:
case CPU_ONLINE:
- gcwq_claim_management_and_lock(gcwq);
+ gcwq_claim_assoc_and_lock(gcwq);
gcwq->flags &= ~GCWQ_DISASSOCIATED;
rebind_workers(gcwq);
- gcwq_release_management_and_unlock(gcwq);
+ gcwq_release_assoc_and_unlock(gcwq);
break;
}
return NOTIFY_OK;
@@ -3817,7 +3817,7 @@ static int __init init_workqueues(void)
setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
(unsigned long)pool);
- mutex_init(&pool->manager_mutex);
+ mutex_init(&pool->assoc_mutex);
ida_init(&pool->worker_ida);
}
}
--
1.7.4.4
* [PATCH 5/6] workqueue: use __cpuinit instead of __devinit for cpu callback
From: Lai Jiangshan @ 2012-09-15 8:13 UTC
To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan
It makes little sense to use __devinit here (that memory is discarded after
boot only when !HOTPLUG). It is more accurate to use __cpuinit (the memory
is discarded after boot when !HOTPLUG_CPU).
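For illustration, a hypothetical notifier carrying the annotation (the
actual section plumbing lives in init.h and the linker scripts):

	/* with CONFIG_HOTPLUG_CPU=n this text can be freed after boot;
	 * __devinit would tie the lifetime to CONFIG_HOTPLUG instead */
	static int __cpuinit my_cpu_callback(struct notifier_block *nfb,
					     unsigned long action, void *hcpu)
	{
		return NOTIFY_OK;	/* hypothetical, for illustration only */
	}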
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5438c0d..e8d3c95 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3532,7 +3532,7 @@ static void gcwq_unbind_fn(struct work_struct *work)
* Workqueues should be brought up before normal priority CPU notifiers.
* This will be registered high priority CPU notifier.
*/
-static int __devinit workqueue_cpu_up_callback(struct notifier_block *nfb,
+static int __cpuinit workqueue_cpu_up_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
{
@@ -3573,7 +3573,7 @@ static int __devinit workqueue_cpu_up_callback(struct notifier_block *nfb,
* Workqueues should be brought down after normal priority CPU notifiers.
* This will be registered as low priority CPU notifier.
*/
-static int __devinit workqueue_cpu_down_callback(struct notifier_block *nfb,
+static int __cpuinit workqueue_cpu_down_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
{
--
1.7.4.4
* [PATCH 6/6] workqueue: use hotcpu_notifier() for workqueue_cpu_down_callback()
From: Lai Jiangshan @ 2012-09-15 8:13 UTC
To: Tejun Heo, linux-kernel; +Cc: Lai Jiangshan
workqueue_cpu_down_callback() is only used when HOTPLUG_CPU=y, so it is
better registered with hotcpu_notifier().
When HOTPLUG_CPU=y, hotcpu_notifier() and cpu_notifier() are the same.
When HOTPLUG_CPU=n, with cpu_notifier() workqueue_cpu_down_callback() is
still called (as a no-op) during boot, and the memory of
workqueue_cpu_down_callback() and gcwq_unbind_fn() is only discarded after
boot. With hotcpu_notifier() we avoid the no-op calls, and the memory of
workqueue_cpu_down_callback() and gcwq_unbind_fn() is discarded at build
time:
$ ls -l kernel/workqueue.o.cpu_notifier kernel/workqueue.o.hotcpu_notifier
-rw-rw-r-- 1 laijs laijs 484080 Sep 15 11:31 kernel/workqueue.o.cpu_notifier
-rw-rw-r-- 1 laijs laijs 478240 Sep 15 11:31 kernel/workqueue.o.hotcpu_notifier
$ size kernel/workqueue.o.cpu_notifier kernel/workqueue.o.hotcpu_notifier
text data bss dec hex filename
18513 2387 1221 22121 5669 kernel/workqueue.o.cpu_notifier
18082 2355 1221 21658 549a kernel/workqueue.o.hotcpu_notifier
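The mechanism is visible in this era's include/linux/cpu.h (paraphrased
sketch, not verbatim): when CONFIG_HOTPLUG_CPU=n the callback is never
registered, so the compiler and linker can drop it together with everything
only it references:

	#ifdef CONFIG_HOTPLUG_CPU
	#define hotcpu_notifier(fn, pri)	cpu_notifier(fn, pri)
	#else
	#define hotcpu_notifier(fn, pri)	do { (void)(fn); } while (0)
	#endif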
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e8d3c95..fc4208a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3791,7 +3791,7 @@ static int __init init_workqueues(void)
WORK_CPU_LAST);
cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
- cpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
+ hotcpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
/* initialize gcwqs */
for_each_gcwq_cpu(cpu) {
--
1.7.4.4