From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: Tejun Heo <tj@kernel.org>, linux-kernel@vger.kernel.org
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Subject: [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock()
Date: Thu, 30 Aug 2012 00:51:55 +0800 [thread overview]
Message-ID: <1346259120-6216-5-git-send-email-laijs@cn.fujitsu.com> (raw)
In-Reply-To: <1346259120-6216-1-git-send-email-laijs@cn.fujitsu.com>
If the hotplug code has grabbed the manager_mutex while a worker_thread
tries to create a worker, manage_workers() will return false and the
worker_thread will go on to process work items. Now all workers on the
CPU are processing work items, with no idle worker left/ready for
managing. This breaks the workqueue guarantee that an idle worker is
always available to manage the pool, and it is a bug.

So when this case happens, the last idle worker should not go to
process work; it should go to sleep as usual and wait for normal
events, but it should also be woken up when the hotplug code releases
the manager_mutex. Add non_manager_role_manager_mutex_unlock() to do
this wakeup.

Note that if the manager_mutex is grabbed by a real manager,
POOL_MANAGING_WORKERS will be set, and the last idle worker may still
go to process work.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 42 ++++++++++++++++++++++++++++++++++--------
1 files changed, 34 insertions(+), 8 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0673598..e40898a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1305,6 +1305,24 @@ __acquires(&gcwq->lock)
}
}
+/*
+ * Release pool->manager_mutex grabbed by current thread as non-manager.
+ *
+ * Holding the manager_mutex may have caused a worker_thread that
+ * tried to create a worker to go to sleep; wake one up so it can
+ * retry creating a worker or process work.
+ *
+ * CONTEXT:
+ * spin_lock_irq(gcwq->lock).
+ */
+static void non_manager_role_manager_mutex_unlock(struct worker_pool *pool)
+{
+ mutex_unlock(&pool->manager_mutex);
+
+ if (need_more_worker(pool))
+ wake_up_worker(pool);
+}
+
struct idle_rebind {
int idle_cnt; /* # idle workers to be rebound */
struct completion idle_done; /* all idle workers rebound */
@@ -2136,11 +2154,12 @@ woke_up:
recheck:
/* no more worker necessary? */
if (!need_more_worker(pool))
- goto sleep;
+ goto manage;
/* do we need to manage? */
- if (unlikely(!may_start_working(pool)) && manage_workers(worker))
- goto recheck;
+ if (unlikely(!may_start_working(pool)) &&
+ !(pool->flags & POOL_MANAGING_WORKERS))
+ goto manage;
/*
* ->scheduled list can only be filled while a worker is
@@ -2173,13 +2192,20 @@ recheck:
} while (keep_working(pool));
worker_set_flags(worker, WORKER_PREP, false);
-sleep:
+manage:
if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
goto recheck;
/*
- * gcwq->lock is held and there's no work to process and no
- * need to manage, sleep. Workers are woken up only while
+ * gcwq->lock is held and we will release it in one of these cases:
+ * case 1) there's no work to process and no need to manage: sleep.
+ * case 2) there is work to process but we are the last idle worker
+ *         and failed to grab the manager_mutex to create a worker:
+ *         also sleep!  The current manager_mutex owner will wake us
+ *         up to process work or to manage.
+ *         See non_manager_role_manager_mutex_unlock().
+ *
+ * Workers are woken up only while
* holding gcwq->lock or from local cpu, so setting the
* current state before releasing gcwq->lock is enough to
* prevent losing any event.
@@ -3419,9 +3445,9 @@ static void gcwq_release_management_and_unlock(struct global_cwq *gcwq)
{
struct worker_pool *pool;
- spin_unlock_irq(&gcwq->lock);
for_each_worker_pool(pool, gcwq)
- mutex_unlock(&pool->manager_mutex);
+ non_manager_role_manager_mutex_unlock(pool);
+ spin_unlock_irq(&gcwq->lock);
}
static void gcwq_unbind_fn(struct work_struct *work)
--
1.7.4.4